
Efficient Learning and Rendering of Neural Representation for Mobile Devices

Stefano Esposito (Ph.D. Student)

Neural Radiance Fields (NeRFs) can synthesize images of 3D scenes from novel viewpoints, but rendering them in real time remains challenging. Methods for real-time rendering exist, yet they come with trade-offs and typically require high-end GPUs. This study aims to develop training procedures and rendering algorithms for neural implicit surfaces suited to mobile devices such as VR and AR headsets, while balancing the accuracy of the geometry and appearance representations. In this context, recent state-of-the-art approaches are all limited in some way. MobileNeRF, for example, can render volumetric scenes on mobile devices, but its textured polygon "soup" geometry tends to produce visible artefacts. BakedSDF, on the other hand, can be baked into a high-polygon-count mesh, but it models view-dependent effects poorly. Furthermore, NeRF-like models tend to "cheat" when representing view-dependent effects, exploiting surface transparency to reach high scores on image reconstruction metrics. SDF-based approaches cannot exploit such tricks and, being much more constrained, usually perform worse. Investigating the potential benefits of a mixture-of-implicit-surfaces representation could improve how baked meshes handle realistic view-dependent and soft-surface effects. Constraining such surfaces to be smooth will make it easier to simplify the exported mesh, tailoring the triangle count to the target device's capabilities. Overall, this study will contribute to the advancement of neural implicit surface reconstruction and rendering for practical use on mobile devices.
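As a purely illustrative sketch of the mixture-of-implicit-surfaces idea (not the method proposed in this project), the NumPy snippet below combines two analytic signed distance functions (SDFs) with a polynomial smooth minimum, so the composite zero level set stays smooth where the parts meet. All function names and parameters here are hypothetical.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    # Signed distance from points p of shape (N, 3) to a sphere.
    return np.linalg.norm(p - center, axis=-1) - radius

def sdf_plane(p, normal, offset):
    # Signed distance to a plane with unit normal and scalar offset.
    return p @ normal - offset

def smooth_union(d1, d2, k=0.1):
    # Polynomial smooth minimum (Inigo Quilez's formulation): blends the
    # two distance fields so the combined surface has no hard crease
    # where the component surfaces meet; k controls the blend radius.
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 + (d1 - d2) * h - k * h * (1.0 - h)

# Evaluate the mixture at a few query points; the zero level set is the surface.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.3, 0.0], [1.0, 1.0, 1.0]])
d = smooth_union(
    sdf_sphere(pts, center=np.array([0.0, 0.3, 0.0]), radius=0.25),
    sdf_plane(pts, normal=np.array([0.0, 1.0, 0.0]), offset=0.0),
)
print(d)  # negative inside, positive outside, ~zero on the surface
```

In the setting described above, the analytic SDFs would presumably be replaced by learned neural SDFs; the point of the sketch is that a smooth blend of simple surfaces yields a composite surface whose smoothness, in turn, eases downstream mesh extraction and simplification.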

Primary Advisor: Andreas Geiger (University of Tübingen & Max Planck Institute for Intelligent Systems)
Industry Advisor: Peter Kontschieder (Meta)
PhD Duration: 01 May 2023 - 30 April 2026