Self-Supervised and Continual Robot Learning in the Wild
Nick Heppert (Ph.D. Student)
Assistive robots should be able to generalize their behaviour to diverse environments, including unseen ones. Because each environment is structured differently and has unique features, it is infeasible to collect real-world training data that covers the true distribution of all structures and features. Our research goal is to approach this generalization problem through efficient, continual robot learning and skill adaptation. More concretely, we will leverage demonstrations within self-supervised methods to bootstrap our system's learning capabilities. First, we will extract task information for the learning process from the given demonstrations. Second, using this information, we will automatically set up a simulation in which the robot learns a scalable, general solution to the task. Third, to close the loop, we will develop techniques that allow the robot to verify the simulator against the given demonstrations in a self-supervised manner. Additionally, as the robot explores the environment, we will use newly incoming observations to keep the simulator aligned with it and add them to our set of demonstrations. Combining these techniques will allow us to generalize across different environments from only a few demonstrations while continually incorporating more data in the process.
Primary Host: Abhinav Valada (University of Freiburg)
Exchange Host: Danica Kragic (KTH Royal Institute of Technology)
PhD Duration: 01 September 2022 - 01 September 2026
Exchange Duration: 01 May 2024 - 01 October 2024 (ongoing)