Towards a Unified 3D Scene Representation
Jonas Kulhanek (Ph.D. Student)
3D scene representations, or 3D maps, are an essential component of a wide range of intelligent systems, such as self-driving cars, robots, or virtual reality. A fundamental limitation of current approaches, however, is that they are designed for a specific sensor setup, which makes them difficult to share between applications. The goal of this thesis project is thus to develop a unified map representation. One possibility is to represent the scene via an implicit neural function, i.e., a function that takes a 3D point as input and outputs a density and a colour value. This type of data structure has been shown to model detailed scene geometry with high fidelity and can be trained from any 3D data, as well as from raw sensor measurements such as images. However, unlike current neural radiance field approaches, which are optimised per scene, we aim to estimate or initialise such a data structure quickly for new scenes. This could be enabled by building a database of 3D geometry parts which can be queried efficiently. We hope that our 3D scene representation will bridge the barriers between different modalities and will enable large-scale applications of systems that would otherwise require difficult-to-obtain data.
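To make the implicit-function idea concrete, the following is a minimal illustrative sketch (not the project's actual model) of such a representation: a small randomly initialised MLP that maps a 3D point to a non-negative density and an RGB colour in [0, 1], as in neural radiance fields. All layer sizes and activation choices here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a fully connected network; the sizes are illustrative."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def implicit_field(point, params):
    """Evaluate the field at a 3D point, returning (density, rgb)."""
    h = np.asarray(point, dtype=float)
    for w, b in params[:-1]:
        h = np.maximum(h @ w + b, 0.0)    # ReLU hidden layers
    w, b = params[-1]
    out = h @ w + b                        # 4 raw outputs: density + RGB
    density = np.log1p(np.exp(out[0]))     # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid keeps colour in [0, 1]
    return density, rgb

# 3D input -> two hidden layers -> density + RGB
params = init_mlp([3, 32, 32, 4])
density, rgb = implicit_field([0.1, -0.2, 0.5], params)
```

In a per-scene radiance field these weights would be optimised from posed images; the project's aim is instead to initialise such a function quickly for a new scene, e.g., from a database of 3D geometry parts.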
Primary Host: Torsten Sattler (Czech Technical University)
Exchange Host: Marc Pollefeys (ETH Zürich & Microsoft)
PhD Duration: 01 September 2021 - 31 August 2025
Exchange Duration: 01 January 2024 - 30 June 2024 - Ongoing