Active inference as a computational framework for modeling empirical data
The aim of this talk is to introduce the active inference framework and
to show how it can be applied in empirical research in neuroscience and
psychiatry. Active inference is an influential theory of perception,
learning, and decision-making based on approximate Bayesian inference.
While it can take many forms, it is most often implemented as a
partially observable Markov decision process (POMDP) in discrete time with
discrete state and outcome spaces. Perception and learning in these models
are accomplished through message passing schemes and minimization of
variational (or marginal) free energy. Action sequences (policies) are
selected by minimizing the expected free energy of future observations,
which induces a trade-off between information-seeking and reward-seeking.
I will walk through each of these elements of active inference and show
how the framework can be used to simulate neural data for fMRI/EEG
studies. I will also
review previous applications of this framework within empirical studies of
behavior in computational psychiatry.
Recording: https://youtu.be/mVdVDkfkPmA
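
As a concrete illustration of the quantities mentioned in the abstract, below is a
minimal sketch of a single perception-action cycle in a discrete-state POMDP, written
in Python/NumPy. It is not the implementation discussed in the talk; the model sizes,
the A, B, C, and D values, and the restriction to one-step policies are illustrative
assumptions only. In this simplified one-factor model, state inference reduces to a
single Bayesian update (the minimum of variational free energy for one timestep), and
each action is scored by an expected free energy made up of a risk term (divergence of
predicted from preferred outcomes) and an ambiguity term.

import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Hypothetical two-state, two-outcome, two-action generative model
# (illustrative values only).
A = np.array([[0.9, 0.2],                  # A: likelihood P(outcome | state)
              [0.1, 0.8]])
B = np.stack([np.eye(2),                   # B[action]: transitions P(state' | state, action)
              np.array([[0.0, 1.0],
                        [1.0, 0.0]])])
C = softmax(np.array([2.0, 0.0]))          # C: preferred distribution over outcomes
D = np.array([0.5, 0.5])                   # D: prior over initial states

def infer_states(obs_index, prior):
    # Posterior over hidden states given one observation; for this simple
    # model this is the exact minimum of variational free energy.
    log_post = np.log(A[obs_index] + 1e-16) + np.log(prior + 1e-16)
    return softmax(log_post)

def expected_free_energy(qs, action):
    # G = risk (KL from predicted to preferred outcomes)
    #   + ambiguity (expected entropy of the likelihood mapping)
    qs_next = B[action] @ qs               # predicted state distribution
    qo_next = A @ qs_next                  # predicted outcome distribution
    risk = np.sum(qo_next * (np.log(qo_next + 1e-16) - np.log(C + 1e-16)))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)   # outcome entropy per state
    ambiguity = H_A @ qs_next
    return risk + ambiguity

# One cycle: observe outcome 0, infer states, score each one-step policy.
qs = infer_states(obs_index=0, prior=D)
G = np.array([expected_free_energy(qs, a) for a in range(B.shape[0])])
q_pi = softmax(-G)                         # policy probabilities: softmax of -G
print("posterior over states:", qs)
print("expected free energy per action:", G)
print("policy probabilities:", q_pi)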