Chapter 18: The $A_{p}$ distribution and rule of succession

  • p. 563, eqn. (18.22): $P(N_p \mid X)$ should be $P(N_n \mid X)$.

  • p. 570, eqn. (18.39): $P(n_1 \cdots n_k \mid X)$ should be $P(n_1 \cdots n_K \mid X)$ ($K$ instead of $k$).

  • p. 576, second paragraph, end of line six: ``existance'' should be ``existence.''

  • p. 576, second paragraph: ``As we saw earlier in this chapter, even the $+1$ and $+2$ in Laplace's formula turn up when the `frequentist' refines his methods...'' Actually, this is discussed later in the chapter -- see eqn. (18.68).

  • p. 577, text preceding equation (18.58): The reference to equation (18.55) should probably be (18.56).

  • p. 579, last paragraph of section 18.15, line six: ``thoery'' should be ``theory.''

  • p. 580, first line: $(n/M)$ should be $(n/N)$.

  • p. 580, third line after (18.69): ``Pearson and Clopper'' should be ``Clopper and Pearson.''

  • p. 581, third line from bottom: $M_{\delta}$ should be $M\Delta$.

  • p. 582, eqn. (18.73): $F^1$ should be $f^1$; also, in the second line, first factor, $(n+1)/(N+2)$ should be $(n+1)/(N+3)$.

  • p. 582, first three lines after (18.73): $F^1$ should be $f^1$ in each instance.

  • p. 582, eqn. (18.76): $M_n$ should be $M_m$.

  • p. 583, eqn. (18.78), second line: $M_p$ should be $M p$.

  • p. 583, eqn. (18.79): $\overline{M^2}$ should be $\overline{m^2}$ and the factor $(n - (n+1)/(N+2))$ should be $(1 - (n+1)/(N+2))$.

  • p. 583, eqns. (18.80), (18.81), and (18.82): $M_p$ should be $M p$.

  • p. 584, second full paragraph, lines 3 and 6: $F^1$ should be $f^1$.

  • p. 584, second full paragraph, end of line 12, and also line 13: ``uncertainity'' should be ``uncertainty.''

  • p. 586, eqn. (18.87): ${{N-m} \choose m}$ should be ${{N-n} \choose m}$.

  • p. 587, first line after (18.93): ``If we substitute (18.93)...'' Should this be (18.91)?


Miscellaneous Comments

  • p. 554, eqn. (18.1): This definition cannot hold true for arbitrary propositions $E$; for example, what if $E$ implies $A$? This kind of problem occurs throughout the chapter. I don't think you can really discuss the $A_p$ distribution properly without explicitly introducing the notion of a sample space and organizing one's information about the sample space as a graphical model in which $A$ has a single parent variable $\theta$, with $A_p$ defined as the proposition $\theta = p$. For those unfamiliar with graphical models / Bayesian networks, I recommend the following book:
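    A sketch of the suggested reformulation, in my own notation (none of this is from the book): let $\theta$ be the parent variable, with $A_p \equiv (\theta = p)$ and $f(\cdot \mid E)$ the density of $\theta$ given $E$. By construction $P(A \mid \theta = p) = p$, and for any $E$ that is $d$-separated from $A$ by $\theta$,

    \begin{displaymath}
P(A \mid A_p \wedge E) = p ,
\qquad
P(A \mid E) = \int_0^1 p \, f(p \mid E) \, dp .
\end{displaymath}

    An $E$ that implies $A$ is not $d$-separated from $A$ by $\theta$, so the problematic cases are excluded by the model rather than by fiat.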

  • p. 555, eqn. (18.3): This appears to be at odds with Chapter 12, which advocates the improper Haldane prior (proportional to $p^{-1}(1-p)^{-1}$) as describing the ``completely ignorant population.'' However, that chapter also argues that the Haldane prior applies when one does not even know whether or not both outcomes are possible...and that the uniform prior applies if one does know that both outcomes are possible. (I argue in my comments on Chapter 12 that the uniform prior is the correct ignorance prior in general anyway.)
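    For comparison, a standard Beta-prior computation (mine, not part of the original comment): with prior $g(p) \propto p^{\alpha - 1}(1-p)^{\beta - 1}$ and $n$ successes observed in $N$ trials, the predictive probability of success on the next trial is

    \begin{displaymath}
\frac{n + \alpha}{N + \alpha + \beta} ,
\end{displaymath}

    which reduces to Laplace's $(n+1)/(N+2)$ under the uniform prior ($\alpha = \beta = 1$) and to $n/N$ under the Haldane prior ($\alpha = \beta = 0$).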

  • p. 555, eqn. (18.7): For those who may be confused by this equation, the integrand $p (A_p\mid E)$ means $p \cdot (A_p\mid E)$, not the probability density of $A_p$ given $E$.
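    Written out with the multiplication explicit (my restatement, assuming (18.7) has the form the comment indicates):

    \begin{displaymath}
P(A \mid E) = \int_0^1 p \cdot (A_p \mid E) \, dp .
\end{displaymath}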

  • p. 556, third line after (18.9): ``But suppose that, for a given $E_b$, (18.8) holds independently of what $E_a$ might be; call this `strong irrelevance.' '' If (18.8) holds for any proposition $E_a$, then in particular it holds for the proposition $E_a \equiv \neg E_b \vee A$; then from (18.8) we have

    \begin{displaymath}
P(A\mid \neg E_b \vee A) =
P(A \mid E_a) =
P(A \mid E_a \wedge E_b) =
P(A \mid E_b \wedge A) = 1,
\end{displaymath}

    where the third equality uses the Boolean identity $E_a \wedge E_b = (\neg E_b \vee A) \wedge E_b = A \wedge E_b$. Then, since $\neg E_b$ implies $\neg E_b \vee A$, we also have $P(A\mid \neg E_b) = 1$ (assuming $P(\neg E_b) > 0$). Thus, this definition of ``strong irrelevance'' actually ensures that $E_b$ is highly relevant to $A$. As before, this discussion really needs to be rewritten in terms of graphical models to get it right, in particular making use of the notion of $d$-separation.

  • p. 583, eqn. (18.78): To get the second line from the first, use these identities (combined below):
    • $E[m^2] = E[m]^2 + V[m]$.
    • For the binomial distribution, $E[m] = Mp$ and $V[m] = Mp(1-p)$.
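    Combining the two (a quick check, using nothing beyond these facts):

    \begin{displaymath}
E[m^2] = (Mp)^2 + Mp(1-p) = Mp \, (Mp + 1 - p) .
\end{displaymath}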

  • p. 586, third full paragraph, first sentence: ``An important theorem of de Finetti (1937) asserts that the converse is also true:...'' What Jaynes says here is not true for finite $N$; it only holds in the limit as $N \rightarrow \infty$. As a counterexample, consider draws without replacement from an urn containing $N = b + w$ balls, with $b$ black and $w$ white. The sequence of draws $x_1,\ldots,x_N$ is exchangeable, but $P(x_1,\ldots,x_N \mid N)$ cannot be generated by any $A_p$ distribution. To see this, note that once we know the values of $x_1,\ldots,x_{N-1}$ we also know the value of $x_N$ with certainty, because we know the total number of balls of each color in the urn.
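    To see the obstruction in the smallest case (my example, not in the original comment): take $N = 2$, $b = w = 1$. The draws are exchangeable and $P(x_1 = x_2 = \mbox{black}) = 0$; but any $A_p$ distribution $g$ must satisfy $\int_0^1 p \, g(p) \, dp = P(x_1 = \mbox{black}) = 1/2$, and hence

    \begin{displaymath}
P(x_1 = x_2 = \mbox{black}) = \int_0^1 p^2 \, g(p) \, dp \;\geq\; \left( \int_0^1 p \, g(p) \, dp \right)^2 = \frac{1}{4} > 0 ,
\end{displaymath}

    a contradiction.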

  • p. 586, third full paragraph, second sentence: Even in the limit $N \rightarrow \infty$, for this statement to be true in general we must allow $g(p)$ to be a generalized function -- that is, we must be able to assign nonzero probability mass to single points using delta functions.
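    A concrete illustration (mine): suppose we are certain that every draw has the same color, each color being equally likely. The sequence is exchangeable, with $P(x_1 = \cdots = x_n = \mbox{black}) = 1/2$ for every $n$, which forces

    \begin{displaymath}
g(p) = \frac{1}{2} \, \delta(p) + \frac{1}{2} \, \delta(1 - p) ,
\end{displaymath}

    since no ordinary density can give $\int_0^1 p^n \, g(p) \, dp = 1/2$ for all $n \geq 1$.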

  • p. 587, second sentence after (18.89): For this sentence to be true requires that the matrix $A$ be nonsingular, where $a_{n,k} \equiv \alpha_k(N,n)$. To see that $A$ is in fact nonsingular, note that $a_{n,k} = 0$ for $k < n$ and $a_{k,k} = 1$, so $A$ is upper triangular with unit diagonal. Then for arbitrary $x$ one can solve for $\beta$ in $A\beta = x$ by backsubstitution.
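    A minimal sketch of that backsubstitution in code (my own Python, 0-based indices; the entries $\alpha_k(N,n)$ are assumed to be supplied in the array A):

      import numpy as np

      def solve_unit_upper(A, x):
          # Solve A beta = x by backsubstitution, for A upper triangular
          # with unit diagonal: A[n, k] == 0 for k < n and A[k, k] == 1,
          # which is exactly the structure established above.
          N = len(x)
          beta = np.zeros(N)
          for n in range(N - 1, -1, -1):
              # A[n, n] == 1, so no division is needed at any step.
              beta[n] = x[n] - A[n, n + 1:] @ beta[n + 1:]
          return beta

    Each step uses only the already-computed $\beta_{n+1},\ldots,\beta_N$, so the solution always exists and is unique -- which is precisely the nonsingularity claim.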

