Methods of feature attribution for interpretability

Jae Myung Kim (Ph.D. Student)

In this project, our goal is to design reliable machine learning methods that can be understood by humans. We propose to tackle this goal in three ways: (1) interpreting the decisions of black-box AI models to make them transparent, (2) building self-explainable AI models for better reliability, and (3) aligning the explanations of AI models with human annotations. Building on our prior work on the robustness of black-box models to geometric transformations (AAAI'20) and on a self-explainable model that modifies the Class Activation Map in a simple way while improving explainability (ICCV'21), the first step is to produce reliable explanations by estimating the uncertainty over an explanation. We plan this as a collaborative project between the EML lab at the University of Tübingen (led by Prof. Akata), which studies multimodal explanations, and the Willow lab at Inria (co-led by Prof. Schmid), which studies visual recognition.
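
As a rough illustration of the Class Activation Map technique behind the second direction, the Python sketch below computes a generic CAM for a ResNet-style network whose last convolutional features feed a global-average-pooling and linear layer. This is the standard CAM computation, not the modified version from the ICCV'21 work, and the torchvision backbone is only an example choice.

    import torch
    import torchvision.models as models

    # Example backbone: any network ending in global average pooling + linear
    # head works; resnet18 with ImageNet weights is used here for illustration.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    def class_activation_map(image, target_class):
        """image: (1, 3, H, W) tensor; returns a normalized (h, w) CAM."""
        with torch.no_grad():
            # Run the backbone up to (excluding) the pooling and linear head.
            x = image
            for name, module in model.named_children():
                if name in ("avgpool", "fc"):
                    break
                x = module(x)                        # features: (1, K, h, w)
            # CAM for class c: M_c = sum_k w_k^c * f_k, with w taken from
            # the linear classifier's weight row for the target class.
            weights = model.fc.weight[target_class]  # (K,)
            cam = torch.einsum("k,khw->hw", weights, x[0])
            cam = torch.relu(cam)                    # keep positive evidence
            return cam / (cam.max() + 1e-8)          # normalize to [0, 1]

For a 224x224 input this yields a 7x7 map that is typically upsampled to the image resolution and overlaid on the input for visualization.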

Primary Host: Zeynep Akata (University of Tübingen)
Exchange Host: Cordelia Schmid (INRIA)
PhD Duration: 01 August 2021 - 31 July 2024
Exchange Duration: 01 February 2024 - 31 July 2024 (ongoing)