Thomas Schmied
PhD
Johannes Kepler University Linz (JKU)
Continual Reinforcement Learning with associative memories

The current generation of deep reinforcement learning (RL) systems is primarily designed to solve a single task in a stationary environment. The real world, however, is non-stationary and dynamic by nature. For RL agents to be useful under these circumstances, they must be able to learn a variety of tasks efficiently over extended periods of time in increasingly complex environments. To achieve this, RL agents must adapt quickly to changing environments, tasks, or distributions by leveraging memory and context. In this project, we aim to develop novel continual RL architectures that integrate dense associative memories, such as modern Hopfield networks, with advanced credit-assignment mechanisms and recent advances in large-scale RL architectures via sequence modelling.
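To illustrate the associative-memory component mentioned above, the following is a minimal sketch (not the project's actual implementation) of a single retrieval step in a modern continuous Hopfield network: the stored patterns act as the memory, and a noisy query is completed via a softmax-weighted mixture of the stored patterns. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def hopfield_retrieve(patterns, query, beta=8.0):
    """One update step of a modern (continuous) Hopfield network.

    patterns: (N, d) array of stored patterns (the associative memory).
    query:    (d,) state vector, e.g. a noisy or partial cue.
    beta:     inverse temperature; larger beta gives sharper retrieval.
    Returns the updated state: a softmax-weighted mixture of stored patterns.
    """
    scores = beta * patterns @ query           # similarity to each stored pattern
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ patterns                  # retrieved (completed) pattern

# Store a few random patterns and retrieve one from a noisy cue.
rng = np.random.default_rng(0)
memory = rng.standard_normal((5, 64))          # 5 patterns of dimension 64
cue = memory[2] + 0.1 * rng.standard_normal(64)
retrieved = hopfield_retrieve(memory, cue)
```

With a sufficiently large `beta`, the update concentrates nearly all softmax weight on the stored pattern closest to the cue, so `retrieved` approximately recovers `memory[2]` from its corrupted version. This retrieval rule has the same form as scaled dot-product attention, which is one reason such memories pair naturally with sequence-modelling RL architectures.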

Track:
Industry Track
PhD Duration:
February 1st, 2022 - January 31st, 2025