In this section, we formulate the same computation in terms of variational Bayesian inference and neural networks to demonstrate their correspondence. We first derive the form of a variational free energy cost function under a specific generative model, a Markov decision process.^{1} We present the derivations carefully, with a focus on the form of the ensuing Bayesian belief updating. The functional form of this update will reemerge later, when reverse engineering the cost functions implicit in neural networks. These correspondences are depicted in Figure 1 and Table 1. This section starts with a description of Markov decision processes as a general kind of generative model and then considers the minimization of variational free energy under these models.
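To make the ensuing belief updating concrete, the following is a minimal sketch of variational free energy minimization for a single time step of a discrete generative model. The function names (`update_posterior`, `free_energy`) and the two-state, two-outcome matrices are our own illustrative assumptions; `A` plays the role of the likelihood mapping and `D` the prior over hidden states, as in the notation used below.

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax (normalized exponential).
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

def free_energy(s, o, A, D):
    # Variational free energy F = E_q[ln q(s) - ln p(o, s)]
    # for a categorical posterior s over hidden states.
    eps = 1e-16
    return float(s @ (np.log(s + eps) - np.log(D + eps) - np.log(A[o] + eps)))

def update_posterior(o, A, D):
    # The minimizer of F for one observation: a softmax of
    # log prior plus log likelihood (Bayes' rule in log space).
    return softmax(np.log(D) + np.log(A[o]))

# Illustrative model: two hidden states, two outcomes.
A = np.array([[0.9, 0.2],   # p(o = 0 | s)
              [0.1, 0.8]])  # p(o = 1 | s)
D = np.array([0.5, 0.5])    # prior over hidden states
s = update_posterior(o=1, A=A, D=D)
```

At the minimizing posterior, the free energy equals the negative log evidence $-\ln p(o)$, which is the sense in which this scheme performs (approximate) Bayesian inference.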

Figure 1:

Table 1:

| Neural Network Formulation | | Variational Bayes Formulation |
| --- | --- | --- |
| Neural activity | $x_t^j \Longleftrightarrow \mathbf{s}_{t1}^{(j)}$ | State posterior |
| Sensory inputs | $o_t \Longleftrightarrow o_t$ | Observations |
| Synaptic strengths | $W_{j1} \Longleftrightarrow \operatorname{sig}^{-1}\!\big(\mathbf{A}_{11}^{(\cdot,j)}\big)$ | |
| | $\hat{W}_{j1} \equiv \operatorname{sig}(W_{j1}) \Longleftrightarrow \mathbf{A}_{11}^{(\cdot,j)}$ | Parameter posterior |
| Perturbation term | $\varphi_{j1} \Longleftrightarrow \ln \mathbf{D}_1^{(j)}$ | State prior |
| Threshold | $h_{j1} \Longleftrightarrow \ln\!\big(\vec{1} - \mathbf{A}_{11}^{(\cdot,j)}\big)\cdot\vec{1} + \ln \mathbf{D}_1^{(j)}$ | |
| Initial synaptic strengths | $\lambda_{j1} \odot \hat{W}_{j1}^{\mathrm{init}} \Longleftrightarrow \mathbf{a}_{11}^{(\cdot,j)}$ | Parameter prior |
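The synaptic-strength rows of Table 1 can be checked numerically: the logit (inverse sigmoid) maps posterior likelihood parameters in $(0,1)$ to unbounded synaptic strengths, and the sigmoid recovers them. This is a hedged sketch; the helper names `sig` and `sig_inv` and the example column are ours.

```python
import numpy as np

def sig(x):
    # Elementwise logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

def sig_inv(p):
    # Logit: the inverse of the sigmoid on (0, 1).
    return np.log(p) - np.log(1.0 - p)

A_col = np.array([0.9, 0.1])  # one column A_11^{(.,j)} of the likelihood mapping
W = sig_inv(A_col)            # synaptic strengths W_{j1} = sig^{-1}(A_11^{(.,j)})
W_hat = sig(W)                # \hat{W}_{j1} = sig(W_{j1}) recovers A_11^{(.,j)}
```

The round trip `sig(sig_inv(A_col))` returns `A_col`, which is exactly the pairing of the "Synaptic strengths" and "Parameter posterior" rows.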

