Model-based reinforcement learning and planning under uncertainty
Maris Galesloot (Ph.D. Student)
Reinforcement learning (RL) has recently seen increasing use due to its ability to scale decision-making under uncertainty to high-dimensional environments. RL agents are typically trained against a simulator or a partial model specification. However, the knowledge encoded in these simulators, in the form of explicit models, often remains unused, as most algorithms are completely model-free. Moreover, RL tends to suffer from high sample complexity and relies on sufficient exploration of the environment, which can induce unsafe behaviour during training and execution. Lastly, under partial observability, it remains a challenge how agents should adequately represent memory of their past interactions. This PhD project tackles challenges related to learning and planning in partially specified and/or partially observable environments. It draws inspiration from recent advances in deep RL as well as from model-based approaches, with planning problems in robotics as a guiding application. The aim is to develop algorithms that combine the robustness of model-based approaches with the scalability of data-driven learning techniques, enabling real-world applications.
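As a concrete illustration of the memory problem under partial observability, the sketch below implements a standard discrete Bayes filter: the belief over hidden states is a sufficient statistic of the action-observation history, i.e. the agent's "memory". The two-state model, matrices, and function names are hypothetical and chosen only for illustration; they are not part of the project.

```python
import numpy as np

# Hypothetical two-state POMDP (e.g., a machine that is "ok" or "broken"),
# used only to illustrate belief-state memory; not taken from the project.
T = np.array([[0.9, 0.1],    # transition model P(s' | s) for a fixed action
              [0.0, 1.0]])
O = np.array([[0.8, 0.2],    # observation model P(o | s'): rows s', cols o
              [0.3, 0.7]])

def belief_update(b, obs):
    """Exact Bayes filter step: predict through the dynamics,
    then correct with the observation likelihood."""
    predicted = b @ T                      # prediction: P(s') = sum_s b(s) T(s, s')
    unnormalized = predicted * O[:, obs]   # correction: weight by P(obs | s')
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])   # uniform prior belief over the two states
for obs in [0, 0, 1]:      # an example observation sequence
    b = belief_update(b, obs)
    print(b)
```

Exact belief tracking like this quickly becomes intractable in high-dimensional environments, which is one reason scalable, learned memory representations are a central challenge in this project.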
Primary Host: | Nils Jansen (Ruhr-University Bochum & Radboud University) |
Exchange Host: | Nick Hawes (University of Oxford) |
PhD Duration: | 01 April 2023 - 01 April 2027 |
Exchange Duration: | 01 April 2025 - 01 October 2025 (Ongoing) |