Safety and robustness in reinforcement learning
Matteo Turchetta (Ph.D. Student)
Reinforcement learning has achieved impressive results in recent years by learning through trial and error. However, many real-world applications are subject to safety constraints that must not be violated at any time. Such applications require autonomous agents that can reason about safety while exploring and learning about their environment. In my research, I combine ideas from control theory and machine learning to build provably safe learning agents.
Primary Host: Andreas Krause (ETH Zürich)
Exchange Host: Sebastian Trimpe (Max Planck Institute for Intelligent Systems)
PhD Duration: 26 September 2016 - 31 March 2021
Exchange Duration: 26 September 2018 - 26 September 2019