Jae Myung Kim
PhD
University of Tübingen
Methods of feature attribution for interpretability

In this project, our goal is to design reliable machine learning methods that can be understood by humans. We propose to tackle this goal in three ways: (1) interpreting the decisions of black-box AI models to make them transparent, (2) building self-explainable AI models for better reliability, and (3) aligning the explanations of AI models with human annotations. Building on our prior work, which studied the robustness of black-box models to geometric transformations (AAAI’20) and a self-explainable model obtained through a simple modification of the Class Activation Map with improved explainability (ICCV’21), the first step is to propose reliable explanations by estimating the uncertainty over the explanation. We plan this as a collaborative project between the EML lab (led by Prof. Akata) at the University of Tübingen, which studies multimodal explanations, and the Willow lab (co-led by Prof. Schmid), which studies visual recognition.
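To make the idea of "uncertainty over the explanation" concrete, here is a minimal sketch, not the project's actual method: a Class Activation Map (CAM) is computed several times under small input perturbations, and the per-pixel mean and variance across those maps serve as the explanation and its uncertainty. The backbone, noise level, sample count, and class index below are illustrative assumptions.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Illustrative backbone; any CNN with a global-average-pool + linear head works for CAM.
model = resnet18(weights="IMAGENET1K_V1").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # keep layers up to the last conv block
fc_weight = model.fc.weight.detach()                          # (num_classes, 512)

def cam(x, class_idx):
    # CAM: class-specific weighted sum of the last convolutional feature maps.
    feats = backbone(x)                                       # (1, 512, H, W)
    w = fc_weight[class_idx].view(1, -1, 1, 1)                # (1, 512, 1, 1)
    return F.relu((feats * w).sum(dim=1))                     # (1, H, W)

def cam_with_uncertainty(x, class_idx, n_samples=20, sigma=0.1):
    # Recompute the CAM over noisy copies of the input; report per-pixel mean and variance.
    with torch.no_grad():
        maps = torch.stack([cam(x + sigma * torch.randn_like(x), class_idx)
                            for _ in range(n_samples)])       # (n_samples, 1, H, W)
    return maps.mean(dim=0), maps.var(dim=0)

x = torch.randn(1, 3, 224, 224)                               # placeholder input image tensor
mean_map, var_map = cam_with_uncertainty(x, class_idx=207)    # class_idx chosen arbitrarily

Regions with high variance are places where the attribution is unstable, so a downstream user could treat the explanation there as less trustworthy; the project's own approach to explanation uncertainty may differ from this perturbation-based sketch.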

Track:
Academic Track
PhD Duration:
August 1st, 2021 - July 31st, 2024
First Exchange:
February 1st, 2024 - July 31st, 2024