Julien Colin
PhD
University of Alicante
Human-centric Explainable AI

The goal of Explainable AI (XAI) is to develop methods that explain to humans how AI models behave, so that humans (including non-experts) can understand the basis for an algorithm's decisions. Within the computer vision literature, the most widely used explainability methods are attribution methods. Recent work on the human evaluation of attribution methods in computer vision has highlighted that (1) attribution methods seem good enough to explain 'where' the model is looking, but there is an opportunity to expand the focus of the XAI field toward complementary explainability methods that tell us 'what' the model is seeing; and (2) we need new, human-centric benchmarks to score explainability methods, as those currently used in the field are too disconnected from human cognition and understanding. In this PhD, we propose to address these two areas of opportunity: first, by using generative models to develop novel explainability methods that overcome the limitations of current attribution methods; and second, by taking inspiration from neuroscience to develop benchmarks that give better insight into the practical utility of explainability methods. We plan to evaluate the proposed approaches on both benchmark datasets and real-life scenarios.

Track:
Academic Track
PhD Duration:
November 1st, 2022 - June 30th, 2026
First Exchange:
June 1st, 2024 - December 31st, 2024