Towards robust and trustworthy Geometric Deep Learning
Andrei-Marian Manolache (Ph.D. Student)
In recent years, Geometric Deep Learning (GDL) has become an innovative and fruitful area of research in Machine Learning. A significant factor in its success is that the GDL framework exploits structure in the data at the model level by leveraging symmetries and invariances, which can be used to enforce various inductive biases. This approach has been particularly successful for data structured as graphs, with such models already being employed in critical fields such as finance, computer security, and healthcare. However, having computer algorithms make decisions in high-stakes applications raises many ethical, legal, and moral concerns; consequently, these algorithms should be trustworthy and robust. In the broader context of Deep Learning, several efforts have been made to improve models' robustness and to develop methods and tools that can aid the practitioner in explaining a model's predictions. However, these advancements can be sub-optimal, misleading, or even incompatible with GDL, as they do not account for the structure-informed inductive biases of the models. Therefore, the Ph.D. student's main research will focus on the design and development of GDL methods in which robustness and trustworthiness are an integral part of the research.
Primary Host: Mathias Niepert (University of Stuttgart & NEC Labs Europe)
Exchange Host: Karsten Borgwardt (Max Planck Institute of Biochemistry)
PhD Duration: 01 October 2022 - 30 September 2025
Exchange Duration: 01 October 2024 - 01 April 2025 (ongoing)