Sample-Efficient Reinforcement Learning via Language Abstractions
Fabian Paischer (Ph.D. Student)
Deep Reinforcement Learning has recently gained plenty of attention by mastering highly complex games such as StarCraft II. Such games pose many difficult challenges, for example partial observability or continuous state and action spaces. To cope with these challenges, current algorithms require many interaction steps with the environment, which renders them extremely sample-inefficient. Human language is an efficient tool for passing on information and experiences from one generation to the next, and is thus inherently well suited for constructing abstract concepts. Language can therefore be used to compress information about the past, or to provide high-level instructions for planning, improving the sample efficiency of existing Reinforcement Learning algorithms. Naturally, incorporating language also enhances the explainability of these algorithms. We aim to integrate techniques prevalent in Natural Language Processing to improve sample efficiency and explainability in the Reinforcement Learning setting.
|Primary Host:||Sepp Hochreiter (Johannes Kepler University Linz)|
|Exchange Host:||Marc Deisenroth (University College London)|
|PhD Duration:||01 October 2021 - Ongoing|
|Exchange Duration:||01 February 2023 - 01 August 2023|