Modeling Humans in the Scene with Compositional and Controllable Neural Representations
Zijian Dong (Ph.D. Student)
This project aims to develop generative 3D models for controllable synthetic humans in a scene. The first part of the project is to generate an animatable neural human avatar given only a few RGB/RGB-D images of a person from different views. The second part is to represent a scene with compositional representations and to control the generation of individual objects. The third part is to place human avatars naturally into the scene with plausible human-scene interaction. On the technical side, I will explore controllable and compositional 3D representations for scenes and humans. On the application side, this can be used to generate synthetic simulation data and to enable a better understanding of human-scene interaction.
Primary Host: Otmar Hilliges (ETH Zürich)
Exchange Host: Andreas Geiger (University of Tübingen & Max Planck Institute for Intelligent Systems)
PhD Duration: 01 July 2021 - 30 June 2025
Exchange Duration: 01 January 2022 - 01 January 2023 - Ongoing