Towards Trustworthy Deep Learning for Relational Data
Steve Azzolin (Ph.D. Student)
Following rapid initial breakthroughs in graph-based learning, Graph Neural Networks (GNNs) have achieved widespread application across many areas of science. However, like many other deep learning models, their inner workings remain a black box. The ability to understand the reasons behind a prediction is a fundamental requirement for any decision-critical setting, and it is also a major obstacle to the transition of GNNs from benchmarks to high-stakes applications. Several works have proposed methods to explain GNNs, yet we cannot confidently assess whether their predictions are based on real patterns present in the phenomenon being modelled or on systematic shortcuts instead. This thesis will explore this topic from multiple perspectives, including the study of how to make GNNs more human-understandable, how to provide guarantees on their behaviour, and the investigation of novel approaches to better align the discrete nature of graphs with the intrinsically continuous GNN paradigm.
Primary Host: Bruno Lepri (FBK & MIT Media Lab)
Exchange Host: Pietro Liò (University of Cambridge)
PhD Duration: 01 November 2023 - 01 November 2026
Exchange Duration: 01 November 2024 - 31 May 2025 - Ongoing