Learning real-world perception in simulations
Xu Chen (Ph.D. Student)
The difficulty of acquiring annotated real-world data has limited the applicability of deep learning in many computer vision tasks. Training deep networks with synthetic images from simulation has shown potential as one way to overcome this limitation. However, current simulations still lack diversity and realism, resulting in significant performance degradation of the trained networks. The goal of this project is to study ways to close this performance gap between deep networks trained with real-world data and those trained with simulated data, especially for human-centric tasks. On the one hand, I am interested in learning-based image generation with geometric control in order to produce more diverse and realistic synthetic data. On the other hand, I also work on generative approaches that combine model fitting with deep networks for better generalization from simulation to the real world.
Primary Host: Otmar Hilliges (ETH Zürich)
Exchange Host: Andreas Geiger (University of Tübingen & Max Planck Institute for Intelligent Systems)
PhD Duration: 01 March 2019 - 28 February 2023
Exchange Duration: 01 January 2021 - 31 December 2021 (Ongoing)