Chapter 14: Simple applications of decision theory

  • p. 428, equation (14.9): This isn't quite stated correctly. We cannot have $p(D \mid V) = p(D \mid V, Y)$ for all propositions $Y \neq D$; consider, for example, defining $Y = D \wedge V$. A correct statement might require that $D$ be a proposition asserting particular values for model variables $d_{i}$, $1 \leq i \leq n$; that $V$ be a proposition asserting a particular value for a model variable $v$, distinct from the variables $d_{i}$; and that $Y$ be a proposition asserting a particular value for another model variable $y$, distinct from $v$ and the variables $d_{i}$.

  • p. 429, Theorem: This isn't quite stated correctly. The fact that $D$ is a possible decision, given $V$, does not imply that $p(V \mid D) \neq 0$. Is $p(V \mid D) \neq 0$ meant as an extra condition? Furthermore, in equation (14.14), given that $p(V \mid D) \neq 0$, the $\Leftarrow$ implication holds, but the $\Rightarrow$ implication holds only if $p(Y \mid V) \neq 0$.

  • p. 433, equation (14.32), first line: "$(V S_{1} \mid X)$" should be "$p(V S_{1} \mid X)$".

  • p. 440, first full paragraph: "Woodword" should be "Woodward".

  • p. 444, first line: the reference to (11.46) should be (11.48).

  • p. 447, equation (14.79): I believe the variable $r$ on the right-hand side of the equation should be omitted, giving a numerator of $\exp(-n(\lambda + \mu))$.

  • p. 447, equation (14.82): "$\langle m_{1}\rangle_{1}$" should be "$\langle m_{1}\rangle$".

  • p. 448, equation (14.83): "$\langle n_{1}\rangle_{1} / \langle m_{1}\rangle$" should be "$\langle n_{1}\rangle / \langle m_{1}\rangle^2$".

Commentary on 14.7.3: Solution for Stage 4

Jaynes states,

...this new knowledge [a specific order for 40 green widgets], which makes the problem so hard for our common sense, causes no difficulty at all in the mathematics. The previous equations still apply, with the sole difference that the stock $S_{3}$ of green widgets is reduced from 50 to 10.

The above seems intuitively plausible, but let's follow Jaynes's advice to always derive results carefully from the basic laws of probability theory, rather than making intuitive leaps. The new information for Stage 4 is a definite proposition, rather than an expectation constraint for the prior distribution to satisfy; the proper procedure, then, is to take the prior distribution of Stage 3 and condition on $w_{40} \geq 1$ to obtain the Stage 4 distribution. Let's do that now.
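
In symbols: writing $p^{(3)}$ for the Stage 3 prior and $p^{(4)}$ for the Stage 4 distribution (the stage superscripts are my notation, not Jaynes's), the conditioning step is

\begin{displaymath}
p^{(4)}(u_{1},\ldots; v_{1},\ldots; w_{1},\ldots) =
p^{(3)}(u_{1},\ldots; v_{1},\ldots; w_{1},\ldots \mid w_{40} \geq 1).
\end{displaymath}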

Recall that $u_{r}$ is the number of orders for $r$ red widgets, $v_{y}$ is the number of orders for $y$ yellow widgets, and $w_{g}$ is the number of orders for $g$ green widgets. From (14.72), the Stage 3 prior distribution factors into independent distributions for the sets of variables $u_{r}$, $v_{y}$, and $w_{g}$:

\begin{displaymath}
p(u_{1},\ldots; v_{1},\ldots; w_{1},\ldots) =
p_{1}(u_{1},\ldots)
p_{2}(v_{1},\ldots)
p_{3}(w_{1},\ldots).
\end{displaymath}

From (14.69), (14.70), and (14.71), we also see that $p_{3}$ factors into independent distributions for each variable $w_{g}$:

\begin{displaymath}
P(w_{g} = w) = C_{0} \exp(-(\lambda_{3} g + \mu_{3}) w),
\end{displaymath}

where $C_{0}$ is a normalization constant.
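
For reference, $C_{0}$ is easily computed: assuming $\lambda_{3} g + \mu_{3} > 0$, summing the geometric series $\sum_{w \geq 0} \exp(-(\lambda_{3} g + \mu_{3}) w)$ gives

\begin{displaymath}
C_{0} = 1 - \exp(-(\lambda_{3} g + \mu_{3})).
\end{displaymath}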

Thus, conditioning on $w_{40} \geq 1$ affects only the distribution for $w_{40}$. Furthermore, the distribution for $w_{40}$ is exponential in form (a geometric distribution, since $w_{40}$ takes nonnegative integer values), and as such has the easily verified memorylessness property that for any $n \geq 0$,

\begin{displaymath}
P(w_{40} = n + w \mid w_{40} \geq n) = P(w_{40} = w).
\end{displaymath}
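
To verify this, write $q = \exp(-(40 \lambda_{3} + \mu_{3}))$, so that $P(w_{40} = w) = (1 - q) q^{w}$ and $P(w_{40} \geq n) = q^{n}$; then

\begin{displaymath}
P(w_{40} = n + w \mid w_{40} \geq n)
= \frac{(1 - q)\, q^{n + w}}{q^{n}}
= (1 - q)\, q^{w}
= P(w_{40} = w).
\end{displaymath}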

Using $n = 1$: given the known order, the number of additional orders for 40 green widgets has the same distribution that $w_{40}$ had before conditioning, so the only effect of the new information is to commit 40 green widgets from stock to the known order. Jaynes's assertion follows directly.
