Representation Learning with Deep Generative Models

Andrea Dittadi (Ph.D. Student)

Learning useful representations from data with little or no supervision is a key challenge in artificial intelligence. Firstly, while labeled data is typically expensive, vast amounts of unlabeled data are available. Secondly, although the usefulness of a representation depends on the downstream task, it should be possible to learn a general-purpose representation of data that can be effectively applied to various tasks. In my PhD, I am tackling this representation learning problem with deep generative models. The focus of my exchange will be disentangled representation learning with variational autoencoders using weak labels, or no labels at all. I will investigate whether current methods can be successfully scaled up to a robotics setting, and whether disentangled representations are useful for downstream tasks in reinforcement learning, including transfer from simulation to the real world.
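Disentangled representation learning with variational autoencoders, as described above, typically builds on a KL-regularized reconstruction objective such as the β-VAE loss. The sketch below is illustrative only, not the project's actual implementation: it assumes a diagonal-Gaussian approximate posterior and a squared-error reconstruction term, with `beta` weighting the KL term (β = 1 recovers the standard VAE objective).

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.

    Closed form for two diagonal Gaussians; mu and logvar come from the
    encoder (hypothetical shapes: [..., latent_dim]).
    """
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Illustrative beta-VAE objective: reconstruction + beta * KL.

    Uses a squared-error reconstruction term as a stand-in for the
    model's actual likelihood; averages over the batch.
    """
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * gaussian_kl(mu, logvar))
```

With a perfect reconstruction and a posterior equal to the prior (mu = 0, logvar = 0), both terms vanish and the loss is zero; larger `beta` penalizes posteriors that deviate from the prior more strongly, which is the mechanism commonly credited with encouraging disentanglement.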

Primary Host: Ole Winther (University of Copenhagen & Technical University of Denmark)
Exchange Host: Bernhard Schölkopf (ELLIS Institute Tübingen & Max Planck Institute for Intelligent Systems)
PhD Duration: 15 March 2018 - 15 April 2022
Exchange Duration: 23 February 2020 - Ongoing