Learning in the Brain beyond Backprop
Inspired by the stunning successes of machine learning over the last
decade, built on deep artificial neural networks trained with the
backpropagation of error algorithm (backprop), much recent work on
learning in the brain has focused on how backprop could be computed or
approximated in neural circuitry. In this talk, however, building on
recent work, I argue that there exist learning algorithms that are more
biologically plausible, easier to implement in the brain, and more
effective at learning than backprop. I present an example in the context
of predictive coding networks, and I also propose a general theory of a
family of credit assignment algorithms for deep neural networks, of which
backprop is just one member.
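To give a rough sense of the kind of local learning rule a predictive coding network uses, the sketch below relaxes neural activities to minimize a layer-wise prediction-error energy and then applies purely local, Hebbian-like weight updates. It is a minimal illustration only, not the specific algorithm presented in the talk; the layer sizes, tanh nonlinearity, step sizes, and clamping scheme are all assumptions made for the example.

```python
import numpy as np

# Minimal predictive coding sketch (illustrative assumptions throughout).
# Layer l holds value nodes x[l]; its prediction error is
# e[l] = x[l] - W[l] f(x[l+1]), i.e. higher layers predict lower ones.
# Inference descends the energy F = 0.5 * sum_l ||e[l]||^2 over the x[l],
# after which each weight update uses only locally available quantities.

rng = np.random.default_rng(0)
sizes = [10, 20, 30]  # layer 0 = output/target, last layer = input (assumed)
W = [rng.normal(0, 0.1, (sizes[l], sizes[l + 1])) for l in range(len(sizes) - 1)]

f = np.tanh
f_prime = lambda x: 1.0 - np.tanh(x) ** 2


def infer_and_learn(x_top, y, n_steps=50, dt=0.1, lr=0.01):
    """Clamp the top layer to the input and the bottom layer to the target,
    relax the hidden activities, then apply local weight updates."""
    L = len(sizes)
    x = [np.zeros(s) for s in sizes]
    x[-1] = x_top.copy()   # clamp input
    x[0] = y.copy()        # clamp target

    for _ in range(n_steps):
        # prediction errors at each predicted layer
        e = [x[l] - W[l] @ f(x[l + 1]) for l in range(L - 1)]
        # gradient descent on F for the hidden layers (clamped layers fixed)
        for l in range(1, L - 1):
            dx = -e[l] + f_prime(x[l]) * (W[l - 1].T @ e[l - 1])
            x[l] += dt * dx

    # local, Hebbian-like weight updates from the relaxed activities
    e = [x[l] - W[l] @ f(x[l + 1]) for l in range(L - 1)]
    for l in range(L - 1):
        W[l] += lr * np.outer(e[l], f(x[l + 1]))


# toy usage with random input and target vectors
infer_and_learn(rng.normal(size=sizes[-1]), rng.normal(size=sizes[0]))
```

The point of the sketch is that every update, to activities and to weights, depends only on a unit's own error, its neighbors' activities, and the weights connecting them, with no backward pass of exact error derivatives as in backprop.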