Egocentric Vision for Advanced Human-Robot Cooperation
Simone Peirone (Ph.D. Student)
In recent years, robotics research has devoted significant effort to developing compliant manipulators capable of interacting with the surrounding environment safely and effectively. However, while many studies have focused on pushing the limits of human-robot interaction with novel control, planning, and task allocation strategies, little has been done to enable intelligent (i.e., not pre-programmed in advance) cooperation. This is a challenging problem because, for purposeful cooperation, the robot must implement several capabilities: i) learning to interact with a world made for humans; ii) understanding and predicting human actions; iii) deciding on-the-fly how to support the human's activity. To reach this goal, egocentric vision appears to be a key enabling technology. Indeed, first-person videos come with the benefit that the data source already embeds an intrinsic attention mechanism, driven by the user's focus, and can serve as a prior for learning human-inspired skills. To learn effectively how to encode human skills from egocentric videos, this thesis will investigate the feasibility of identifying atomic actions within the complexity of unstructured daily-living behavior. The definition of these building blocks will serve a twofold purpose: i) providing a better understanding of human behavior, helping deep neural methods to better recognize and forecast human activities, and ii) enabling a more efficient and accurate transfer of skills, as combinations of atomic actions, to intelligent manipulators.
Primary Host: Giuseppe Averta (Politecnico di Torino)
Exchange Host: Pascal Frossard (EPFL)
PhD Duration: 01 November 2022 - 31 October 2025
Exchange Duration: 01 January 2025 - 30 June 2025 (ongoing)