Dimitrios Karageorgiou
Recent advances in generative artificial intelligence have enabled the creation of highly realistic synthetic images and videos, raising concerns about their potential misuse for malicious purposes. Although several detection approaches, mostly based on deep learning, have emerged in response, they perform poorly in real-world conditions. This is because currently known forensic traces vary significantly across different generators and are highly sensitive to post-processing operations. As a result, any learned decision boundary quickly becomes obsolete. Common strategies, such as scaling up model and dataset sizes or expanding the augmentation space, provide only temporary solutions, at the cost of exponentially increasing development effort. This undermines the viability of such detectors in high-stakes environments, where reliability is essential and errors carry significant consequences.
This PhD research aims to advance the analysis of AI-generated media in open-ended environments by jointly modeling forensic artifacts and uncertainty estimation. Instead of learning a fixed decision boundary, it formulates AI-generated media detection as an out-of-distribution detection problem by modeling invariant patterns of authentic content. Developing suitable uncertainty estimation methods is a core component of this work, both to support the proposed detection framework and to improve robustness in settings where epistemic and aleatoric uncertainty cannot be fully reduced.
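To make the out-of-distribution formulation more concrete, the following minimal sketch illustrates one common way such a detector could score samples: fit a simple Gaussian model to feature vectors of authentic content and flag samples far from that distribution. This is an illustration only, not the proposal's method; the feature extractor, feature dimensionality, and Gaussian assumption are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors of authentic images; in practice these
# would come from a learned forensic feature extractor.
authentic_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Model the "in-distribution" of authentic content with a Gaussian.
mu = authentic_feats.mean(axis=0)
cov = np.cov(authentic_feats, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def ood_score(x: np.ndarray) -> float:
    """Mahalanobis distance of a feature vector from the authentic model."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A sample far from the authentic distribution receives a higher score,
# so it would be flagged as potentially generated.
in_dist = rng.normal(0.0, 1.0, size=8)
far_out = np.full(8, 6.0)
print(ood_score(in_dist) < ood_score(far_out))
```

Under this framing, no decision boundary between authentic and generated content is learned; only the authentic distribution is modeled, so the score remains meaningful even for generators unseen during development.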
Furthermore, this project seeks to extend synthetic media analysis beyond binary classification, enhancing its granularity. It also aims to leverage the developed uncertainty estimation methods to create a test-time optimization framework for fundamental computer vision tasks, such as instance segmentation. The estimated duration of this PhD research project is four years.