Chapter 15: Paradoxes of probability theory
 p. 460, equation (15.18): Should be `` '' or, equivalently, `` ''.
 p. 467, equation (15.42): insert a minus sign in front of the argument to
the exponential function.
 p. 473, second full paragraph: ``the right-hand sides of (15.58) and
(15.61)'' should be ``the left-hand sides...''.
 p. 475, equation (15.67): insert a minus sign before the argument to the
exponential function.
 p. 481, equation (15.89): in the exponential function should be .
 p. 481, equations (15.87) and (15.88): The factor should be .
Commentary: The Marginalization Paradox
After spending many hours going over Jaynes's treatment of the Marginalization
Paradox, I've come to the conclusion that he got this one wrong: (15.72) is
wrong, and (15.70) is the correct formula also for . Sections 15.8 and
15.9 are a puzzling anomaly, as Jaynes unaccountably breaks a number of
the rules he emphasizes so often elsewhere in the book, and this leads him
into error. I've written up my conclusions in a separate note
(postscript,
PDF). In summary, here is what I've
shown:
 The paradox arises from an unnoticed divergent integral that shows up
when one tries to go from
to
; this step is invalid because it requires multiplying by
and then integrating out , but it seems to have escaped notice
that the improper prior over results in also being improper.
 In the specific case of the changepoint problem, if
one derives
for the proper prior
, then takes the limit as
(going to the
limiting improper prior
), one obtains 's
answer (15.70), and not 's answer (15.72).
 The issue of nonuniform convergence plays an important role in this
problem, and as
converges to (15.72), the distribution
retains significant probability mass in the (ever-smaller) region
where
is far from convergence.
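To make the divergence mechanism concrete, here is a generic sketch in my own notation (these symbols are mine, not Jaynes's): write z for the data, ζ for the parameter of interest, and η for the nuisance parameter. Marginalizing out the nuisance parameter requires

\[
p(\zeta \mid z) \;\propto\; \int p(z \mid \zeta, \eta)\, \pi(\eta \mid \zeta)\, d\eta ,
\]

and when \(\pi(\eta \mid \zeta)\) is improper this integral can diverge, so the marginalization step is invalid even though each conditional density on its own looks innocuous.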
It's worth noting, however, that my resolution of the paradox was obtained
simply by following the practices Jaynes advocates in PTLOS.
Is it a disaster for Bayesian analysis if we have to abandon the use of
improper priors? I don't think so. As Jaynes points out, the really
important use of improper priors is as a zero point for constructing
maximum-entropy priors. Furthermore, he shows in one problem after another
that even in situations where one might be tempted to say that we are totally
ignorant about some parameter, simple commonsense reasoning and application
of physical constraints allow us to create a defensible proper prior. There
are, in fact, some pretty good reasons (beyond the MP) to stick to proper
priors:
 A lot of interesting problems can't be solved analytically, requiring
instead the use of numerical methods that generally won't work with improper
priors. In particular, the use of Markov Chain Monte Carlo
(e.g.,
BUGS)
has become increasingly popular over the last decade, and this requires proper
priors.
 Model comparison (see Chapter 20), one of the more interesting
applications of Bayesian methods, requires proper priors.
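To see why MCMC needs a proper target, here is a minimal random-walk Metropolis sketch of my own (an illustration, not the book's or BUGS's algorithm). With a proper posterior (a standard normal, for concreteness) the chain settles near the mode; with a flat improper prior and no data, every proposal is accepted and the chain is an unbounded random walk that never converges to anything.

```python
import numpy as np

def metropolis(log_post, n_steps, step=1.0, rng=None):
    """Minimal random-walk Metropolis sampler (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = 0.0
    lp = log_post(x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

rng = np.random.default_rng(42)

# Proper posterior: standard normal log-density (up to a constant).
proper = metropolis(lambda x: -0.5 * x * x, 5000, rng=rng)

# "Posterior" from a flat improper prior and no data: constant log-density,
# so every proposal is accepted and the chain just random-walks away.
improper = metropolis(lambda x: 0.0, 5000, rng=rng)
```

Running this, the proper-prior chain stays in a bounded region around zero, while the improper-prior chain wanders arbitrarily far, which is the practical face of an improper (non-normalizable) target.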
Philip Dawid informs me that in 1996 he, Stone, and Zidek also wrote a
response to Chapter 15, based on the version of PTLOS available on the
Internet at that time; you can find it
here
as report
172 for 1996.
Some final technical comments:
 One obtains (15.87) via the change of parameters
.
 On p. 482 Jaynes talks about applying (15.89) to obtain a posterior over
conditional on . That is, (15.89) is to be used as a likelihood.
Unfortunately, the proportionality in (15.89) retains only factors dependent
on , when instead it needs to retain those factors dependent on or
(in particular, a factor of
is missing). (This comment comes from DSZ's response, mentioned above.)
