Bayesian Continual Learning
Aaron Klein (PostDoc)
In many real-world scenarios, an agent faces a dynamic environment in which data is not drawn i.i.d. from a stationary distribution but changes over time. Continual learning provides a general framework for such scenarios: in contrast to the standard setting, where a fixed training and test set is given, a machine learning model is trained continuously across consecutively arriving tasks without storing all data from previous tasks. Unfortunately, standard machine learning models, such as neural networks, tend to quickly forget previous tasks while training on the current one, a phenomenon known as catastrophic forgetting (Goodfellow et al., 2014). Ideally, a continual learning approach would not only avoid catastrophic forgetting, but also exploit information gathered from previous tasks to achieve better performance on new tasks (forward transfer) and, vice versa, improve on older tasks after training on new tasks (backward transfer). Recent work combats catastrophic forgetting by regularizing the loss function to preserve network parameters (Kirkpatrick et al., 2016; Nguyen et al., 2018), by rehearsal of previous tasks (Shin et al., 2017), or by adapting the network architecture (Rusu et al., 2016).

In this project we aim to develop new methods for continual learning of Bayesian neural networks. Compared to neural networks trained by maximum likelihood, Bayesian neural networks use a Bayesian treatment of the network parameters, which makes it possible to quantify the uncertainty of the network's predictions. This is essential for many sequential decision-making problems, such as reinforcement learning or Bayesian optimization, where reliable uncertainty estimates are required to trade off exploration and exploitation.
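To make the regularization-based idea concrete, the sketch below shows the quadratic penalty used by methods in the spirit of elastic weight consolidation (Kirkpatrick et al., 2016): parameters that were important for a previous task (as measured by a diagonal Fisher information estimate) are anchored to their previous-task optimum. This is an illustrative sketch only, not the method developed in this project; the function name `ewc_penalty` and the toy numbers are assumptions for the example.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC-style quadratic penalty: lam/2 * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current network parameters (flattened)
    theta_star -- parameters found after training on the previous task
    fisher     -- diagonal Fisher information estimates (per-parameter importance)
    lam        -- regularization strength trading off old vs. new task
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Toy example: three parameters, the first deemed most important for the old task.
theta_star = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 1.0])
theta = np.array([1.5, -1.0, 0.5])  # parameters have drifted on the new task

penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)
# 0.5 * (10*0.25 + 0.1*1.0 + 1.0*0.0) = 1.3
```

During continual training, this penalty would be added to the new task's loss, so that drift in important directions (large Fisher values) is punished more heavily than drift in unimportant ones.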
Primary Host: Cédric Archambeau (Amazon Research)
Exchange Host: Richard E. Turner (University of Cambridge)
PostDoc Duration: 01 July 2019 - 30 June 2021
Exchange Duration: 01 June 2020 - 30 June 2021