
Automated Synthesis of Counterfactual Interventions for Explainable and Fair Machine Learning

Giovanni De Toni (Ph.D. Student)

As humans, we make countless decisions every day in many different areas of our lives. Recently, state-of-the-art machine learning models have been used to build automated decision-making systems that support human judgment, from accepting or rejecting a job applicant to prescribing medications and treatments. However, what happens when an automated system gives us an unfair decision? Are we still in control, and can we explain what happened? Modern deep learning models are inherently black-box, making it hard to understand why a certain prediction was made and how to act to change it. In this project, we aim to develop a novel model-agnostic theoretical framework for generating counterfactual explanations of black-box deep learning models. We also seek to devise algorithms that foster practical applications of these techniques by exploiting neuro-symbolic and human-centric machine learning. Lastly, the explanations must be actionable: they must offer affected users viable interventions to overturn the decisions assigned to them. By ensuring algorithmic recourse, we can also promote the fairness of these decision-making models and reduce potential bias.
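To make the idea of a counterfactual intervention concrete, here is a minimal toy sketch (not the project's actual method; the data, feature names, and helper function are hypothetical). It finds the smallest change to an input that flips a linear classifier's decision, which is the simplest form of a counterfactual explanation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "loan approval" data: two standardized features, e.g. income and debt.
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approve when income outweighs debt

clf = LogisticRegression().fit(X, y)

def counterfactual(x, clf, margin=1e-3):
    """Project x just across a linear decision boundary.

    For a linear model w.x + b = 0, the closest point with the opposite label
    lies along the weight vector, so the minimal intervention is a single
    orthogonal step across the boundary (plus a small margin).
    """
    w, b = clf.coef_[0], clf.intercept_[0]
    signed_dist = (w @ x + b) / np.linalg.norm(w)
    step = -(signed_dist + np.sign(signed_dist) * margin) * w / np.linalg.norm(w)
    return x + step

x = np.array([-0.5, 0.8])                 # a rejected applicant
x_cf = counterfactual(x, clf)
print("original decision:", clf.predict([x])[0])
print("counterfactual decision:", clf.predict([x_cf])[0])
print("suggested intervention (delta):", x_cf - x)
```

The delta printed at the end is the "intervention": the change the user would have to make to receive the opposite decision. The project described above goes well beyond this sketch, targeting black-box models and interventions that are actually feasible for the affected user.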

Primary Host: Bruno Lepri (FBK & MIT Media Lab)
Exchange Host: Manuel Gomez Rodriguez (Max Planck Institute for Software Systems)
PhD Duration: 01 October 2021 - 31 October 2025
Exchange Duration: 01 January 2023 - 01 June 2023 - Ongoing