Lena Zellinger

PhD
University of Edinburgh
Efficient and Reliable Neuro-Symbolic Reasoning and Learning

Guaranteeing the safety and reliability of deep learning and generative models is of crucial importance. In particular, model predictions should comply with task-relevant constraints, such as safety rules or domain knowledge. A key objective of the PhD will be to understand how and when constraints can be satisfied in deep learning pipelines, within the framework of neuro-symbolic AI and from a Bayesian perspective. At the same time, the resulting methods should be efficient: they should require few computational resources in practice and have low computational complexity in theory. Possible research directions include designing neural networks that are provably reliable, satisfy constraints by design, and allow for tractable uncertainty quantification. Such models can be thought of as "trustworthy by design" and should exhibit improved performance and robustness. Additionally, we will investigate ways to abstract complex problems and models into interpretable symbolic representations. Overall, we aim to scale reasoning and learning while preserving reliability and efficiency.

Track:
Academic Track