Stephan Bongers

Causal Inference in Reinforcement Learning

Stephan Bongers (PostDoc)

Recent advances in reinforcement learning (RL) have led to automated decision-making systems that perform remarkably well on a variety of tasks. Despite these achievements, applying reinforcement learning in practice often remains challenging. One major challenge is that an agent typically lacks prior knowledge of the environment and must learn from scratch through numerous interactions. Causal inference, on the other hand, provides a set of tools and principles that allow one to combine data with structural invariances of the environment. In particular, it can factor knowledge into independent modules, or mechanisms, that are invariant across different actions. In principle, such factorized representations make learning more efficient and improve generalization, since after an action or intervention only the affected modules of the representation need to be adapted, while the others can be re-used. In this project, I explore how causal inference can be beneficial for RL tasks. More broadly, I am interested in statistical and causal inference in adaptive and sequential settings.
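As an illustration of the "independent mechanisms" idea (a minimal sketch, not the project's actual method), the snippet below models a factored environment in which each state variable has its own mechanism, and an intervention swaps out only the targeted mechanism while the remaining ones are re-used unchanged. The variable names (`pos`, `vel`) and the toy dynamics are hypothetical.

```python
# Minimal sketch of factoring knowledge into independent, invariant mechanisms.
from dataclasses import dataclass
from typing import Callable, Dict

Mechanism = Callable[[Dict[str, float]], float]  # maps current state -> next value


@dataclass
class FactoredModel:
    mechanisms: Dict[str, Mechanism]  # one independent module per state variable

    def step(self, state: Dict[str, float]) -> Dict[str, float]:
        # Each variable is updated by its own mechanism, independently of the others.
        return {var: f(state) for var, f in self.mechanisms.items()}

    def intervene(self, var: str, new_mechanism: Mechanism) -> "FactoredModel":
        # An action/intervention replaces only the targeted module;
        # all other mechanisms stay intact and can be re-used.
        updated = dict(self.mechanisms)
        updated[var] = new_mechanism
        return FactoredModel(updated)


# Hypothetical two-variable environment: position and velocity.
model = FactoredModel({
    "pos": lambda s: s["pos"] + s["vel"],  # position follows velocity
    "vel": lambda s: 0.9 * s["vel"],       # velocity decays
})

# Intervening on the velocity mechanism (e.g. applying thrust) leaves the
# position mechanism untouched, so it does not have to be re-learned.
thrusted = model.intervene("vel", lambda s: 0.9 * s["vel"] + 1.0)

state = {"pos": 0.0, "vel": 1.0}
print(model.step(state))     # {'pos': 1.0, 'vel': 0.9}
print(thrusted.step(state))  # {'pos': 1.0, 'vel': 1.9}
```

The point of the sketch is only that, under such a factorization, an intervention changes a single module while the rest of the model generalizes across actions for free.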

Primary Advisor: Frans A. Oliehoek (Delft University of Technology)
Industry Advisor: Onno Zoeter (Booking.com)
PostDoc Duration: 01 November 2022 - 01 November 2024