Jonas Klesen
PhD
Saarland University (UdS)
Interpretable Machine Learning

Concerns about the interpretability of Machine Learning systems are growing. Unfortunately, the research community has so far failed to converge on established methods for arriving at powerful yet interpretable models. Worse still, and in contrast to related desiderata for Machine Learning systems such as robustness and privacy, there is not even consensus on how to properly define, let alone measure, the interpretability of a Machine Learning system. My thesis strives to make progress not only on methods for arriving at interpretable machine learning models, but also on the underpinnings of the research area, including its raison d'être, its metrics, and its definitions.

Track:
Academic Track
PhD Duration:
September 1st, 2021 - September 1st, 2024