Wolfgang Böttcher
As our world is inherently three-dimensional, understanding 3D scenes remains an important pillar for a variety of applications such as autonomous driving, robotics, and augmented reality. We aim to investigate how downstream tasks such as visual grounding, (amodal) semantic segmentation, and other semantic tasks that interpret the scanned environment can benefit from 2D-3D reconstruction and representation methods. To this end, we also aim to exploit the properties of 3D representations to provide guidance and support for 2D tasks, and vice versa. Furthermore, 3D reconstruction from 2D images has made great progress in recent years. This has broadened the range of 3D information sources for perception tasks beyond established sparse sensors such as LiDAR, thus opening up new research directions. At the same time, much existing work focuses on the perception of static scenes; for example, pipelines operating on LiDAR point clouds typically remove points belonging to moving objects. To model our world faithfully and provide general 3D perception models, however, it is necessary to consider dynamic scenes as well.