Generative Models for Robust Vision
Olaf Dünkel (Ph.D. Student)
Machine learning models are typically validated and tested on fixed datasets under the assumption of independent and identically distributed samples, which may not fully reflect their true capabilities and vulnerabilities. These vulnerabilities often only become evident once a model is deployed in real-world scenarios. Generative models, in contrast, have advanced substantially in recent years and can produce realistic out-of-distribution samples. This project investigates foundational questions about integrating generative models with vision models. The goal is to develop strategies for more robust and stable vision systems and, by studying their interplay, to improve generative models for this purpose. For example, we will explore new adversarial testing techniques and new means of fine-grained control over the data manifold of generative models.
Primary Host: | Christian Theobalt (Max Planck Institute for Informatics) |
Exchange Host: | Christian Rupprecht (University of Oxford) |
PhD Duration: | 02 October 2023 - Ongoing |
Exchange Duration: | - Ongoing |