Chun-Peng Chang
Autonomous vehicles often struggle to generalize when deployed in environments that differ from their training conditions, due to domain gaps such as changes in geography, weather, or sensor modality. This PhD research aims to address this challenge by developing robust foundation models for camera, lidar, and radar data. The core idea is to learn a shared intermediate representation that captures domain-invariant, semantically meaningful features across varied data sources. By mapping new sensor data into this representation, we facilitate domain adaptation and enable models to transfer effectively across conditions without extensive retraining. The project builds on recent advances in domain adaptation and robust representation learning, with the goal of improving generalization, reducing reliance on labeled data, and supporting scalable, cross-domain deployment of autonomous driving systems.
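The shared-representation idea can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption, not the project's actual architecture: the per-modality feature sizes, the linear encoders, and the embedding dimension are made up, and real encoders would be learned networks trained with a domain-invariance objective.

```python
# Toy sketch (hypothetical shapes and weights): each sensor modality gets its
# own encoder that projects raw features into one shared embedding space, so
# downstream models can consume any modality and adaptation reduces to
# aligning distributions in that space.
import numpy as np

rng = np.random.default_rng(0)
SHARED_DIM = 8  # dimensionality of the shared intermediate representation

# Per-modality linear encoders (stand-ins for learned networks).
encoders = {
    "camera": rng.standard_normal((3 * 32 * 32, SHARED_DIM)),  # flattened RGB patch
    "lidar":  rng.standard_normal((1024, SHARED_DIM)),         # point-feature vector
    "radar":  rng.standard_normal((256, SHARED_DIM)),          # range-Doppler features
}

def embed(modality: str, x: np.ndarray) -> np.ndarray:
    """Project a raw sensor feature vector into the shared space, L2-normalized."""
    z = x @ encoders[modality]
    return z / np.linalg.norm(z)

camera_obs = rng.standard_normal(3 * 32 * 32)
lidar_obs = rng.standard_normal(1024)

z_cam = embed("camera", camera_obs)
z_lid = embed("lidar", lidar_obs)

# Both embeddings live in the same space, so cross-modal similarity is a
# simple dot product of unit vectors.
similarity = float(z_cam @ z_lid)
print(z_cam.shape, z_lid.shape, similarity)
```

In this sketch, "domain adaptation" would amount to training the encoders so that embeddings of the same scene agree across modalities and conditions; the sketch only shows the shared-space plumbing, not that training objective.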