Noah Rothenberger
This PhD project investigates methods for robust 3D reconstruction from sparse multi-view imagery captured under varying and challenging illumination conditions. Traditional pipelines such as multi-view stereo (MVS) rely on photometric consistency across views, assuming constant illumination during capture, and thus degrade significantly when lighting variations, shadows, or specularities are present. Recent advances in neural rendering and explicit scene representations, including Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), have achieved remarkable progress in novel view synthesis and reconstruction fidelity. However, these approaches remain sensitive to uncontrolled lighting and often entangle geometry with appearance in their underlying representations, limiting their robustness in real-world scenarios.
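The photometric-consistency assumption can be illustrated with a minimal sketch: the same surface patch observed in two views should produce (near-)identical pixel values, so a matching cost such as sum-of-squared-differences (SSD) stays near zero; under a brightness change and a cast shadow, the cost blows up even though the geometry is unchanged. The patch values, brightness factor, and shadow boundary below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8x8 image patch of one surface region, as seen in a reference view.
patch_ref = rng.uniform(0.2, 0.8, size=(8, 8))

# Case 1: constant illumination -- the second view reproduces the patch exactly.
patch_same = patch_ref.copy()

# Case 2: varying illumination -- a global brightness change plus a cast shadow
# over half the patch (geometry identical, appearance not; values are hypothetical).
patch_lit = patch_ref * 1.5
patch_lit[:, 4:] *= 0.3

def ssd(a, b):
    """Sum-of-squared-differences photometric matching cost between two patches."""
    return float(np.sum((a - b) ** 2))

err_constant = ssd(patch_ref, patch_same)  # zero: photometric consistency holds
err_varying = ssd(patch_ref, patch_lit)    # large: consistency violated by lighting
print(err_constant, err_varying)
```

A stereo matcher minimizing such a cost would reject this correct correspondence in the second case, which is the failure mode the project targets; more robust costs (e.g. normalized cross-correlation) tolerate affine brightness changes but still break under shadows and specularities.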
This research addresses the fundamental challenge of disentangling geometry from reflectance and illumination properties. By investigating both classical approaches and emerging learning-based techniques, the project aims to design reconstruction methodologies that remain stable across diverse lighting conditions while preserving geometric detail and accuracy. The work will analyze how varying illumination affects reconstruction quality and explore solutions that generalize beyond controlled environments, combining insights from physics-based modeling with data-driven methods where appropriate.