Fabian Paischer
PhD
Johannes Kepler University Linz (JKU)
Sample Efficient Reinforcement Learning via Language Abstractions

Deep Reinforcement Learning has recently gained plenty of attention by mastering highly complex games such as StarCraft II. Such games pose many difficult problems, for example partial observability or continuous state and action spaces. To cope with these, current algorithms require many interaction steps with the environment, which renders them extremely sample inefficient. Human language is an efficient tool for passing on information and experiences from one generation to the next, and is therefore inherently well suited for constructing abstract concepts. Language can thus be used to compress information about the past, or to provide high-level instructions for planning, improving the sample efficiency of existing Reinforcement Learning algorithms. Naturally, incorporating language also enhances the explainability of these algorithms. We aim to integrate techniques prevalent in Natural Language Processing to improve sample efficiency and explainability in the setting of Reinforcement Learning.
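One way to read "compressing information about the past via language" is to map each observation in a trajectory onto a discrete token from a small vocabulary and keep only the resulting token sequence as a compact, human-readable history. The sketch below illustrates this idea only; the vocabulary, the toy embeddings, and the function names are illustrative assumptions, not the author's actual method (which would typically rely on a pretrained language model rather than hand-crafted vectors).

```python
import numpy as np

# Illustrative sketch (all names and the toy embeddings are assumptions):
# compress a history of observation vectors into a short sequence of
# discrete language tokens by nearest-neighbor retrieval.

VOCAB = ["door", "key", "enemy", "wall"]
# Toy token embeddings; a real system might use a pretrained language model.
TOKEN_EMB = np.array([
    [1.0, 0.0],   # door
    [0.0, 1.0],   # key
    [-1.0, 0.0],  # enemy
    [0.0, -1.0],  # wall
])

def nearest_token(obs_vec):
    """Return the vocabulary token whose embedding is closest to obs_vec."""
    dists = np.linalg.norm(TOKEN_EMB - obs_vec, axis=1)
    return VOCAB[int(np.argmin(dists))]

def compress_history(observations):
    """Map each observation to a token and drop consecutive duplicates,
    yielding a compact language abstraction of the trajectory."""
    tokens = [nearest_token(o) for o in observations]
    return [t for i, t in enumerate(tokens) if i == 0 or t != tokens[i - 1]]

history = np.array([[0.9, 0.1], [0.8, -0.1], [0.1, 0.95], [-0.9, 0.2]])
print(compress_history(history))  # -> ['door', 'key', 'enemy']
```

The deduplication step is what makes the abstraction compress: long stretches of similar observations collapse into a single token, so the history an agent must condition on grows with the number of distinct events rather than with raw time steps.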

Track:
Academic Track
First Exchange:
February 1st, 2023 - August 1st, 2023