Causality for Enhanced Model Interpretability
Amir-Hossein Karimi (Ph.D. Student)
As machine learning is increasingly used to inform decision-making in consequential real-world settings (e.g., pre-trial bail, loan approval, or prescribing life-altering medication), two questions become important: how did the system arrive at its decision, and what actions can the affected individual take to obtain a favorable decision? My thesis objective is to study, design, and deploy methods that address the second question, specifically by generating counterfactual explanations (https://arxiv.org/abs/1905.11190) and minimal interventions (https://arxiv.org/abs/2002.06278). My focus therefore lies at the intersection of machine learning interpretability, causal and probabilistic modelling, and social philosophy and psychology.
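To make the notion of a counterfactual explanation concrete, the toy sketch below finds the nearest counterfactual for a linear loan-approval classifier, where the minimal L2 change has a closed form (project the rejected applicant onto the decision boundary and step slightly past it). This is only an illustration under simplifying assumptions; the feature names, weights, and the `nearest_counterfactual` helper are hypothetical and do not reflect the methods in the papers linked above, which handle non-linear models, plausibility constraints, and causal downstream effects of interventions.

```python
# Toy sketch: nearest counterfactual for a linear classifier (illustrative only).
import numpy as np

# Hypothetical loan-approval model: approve if w @ x + b >= 0.
w = np.array([0.6, 0.03, -0.4])   # weights for [savings, income, debt]
b = -2.0
x = np.array([1.0, 30.0, 4.0])    # applicant currently rejected

def nearest_counterfactual(x, w, b, margin=1e-3):
    """Minimal L2 change to x that flips the score w @ x + b to >= margin."""
    score = w @ x + b
    if score >= 0:
        return x.copy()           # already receives the favorable decision
    # For a linear model, the closest point past the boundary lies along w.
    delta = (margin - score) / (w @ w) * w
    return x + delta

x_cf = nearest_counterfactual(x, w, b)
print("original decision:      ", bool(w @ x + b >= 0))
print("counterfactual decision:", bool(w @ x_cf + b >= 0))
print("suggested feature changes:", x_cf - x)
```

The printed feature changes can be read as "how this applicant's profile would have to differ to be approved"; whether those changes are actionable for the individual is exactly what motivates the shift from counterfactual explanations to minimal interventions.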
Primary Host: Bernhard Schölkopf (ELLIS Institute Tübingen & Max Planck Institute for Intelligent Systems)
Exchange Host: Thomas Hofmann (ETH Zürich)
PhD Duration: 01 October 2018 - 31 December 2022
Exchange Duration: 01 September 2020 - 31 August 2021 (Ongoing)