Lukas Aichberger
PhD
Johannes Kepler University Linz (JKU)
Quantifying uncertainty in deep learning models

Predictions made by machine learning models need to be reliable. Knowing whether a prediction is under-confident or falsely over-confident is critical for deploying machine learning, especially in real-world applications. However, learning models from data is inseparably tied to uncertainty: the intrinsic, irreducible stochastic variability in the samples (aleatoric uncertainty) and the lack of knowledge about the model parameters that best explain the data (epistemic uncertainty). This research therefore aims to develop practical methods for identifying sources of uncertainty during both optimization and decision making. The goal is to conceptualize interpretable and trustworthy techniques that overcome the inexplicable and overly confident predictions currently common in deep learning models.
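The distinction between irreducible variability in the data and uncertainty about the model parameters is commonly operationalized with an ensemble of models: total predictive uncertainty splits into an aleatoric part (average entropy of each member's prediction) and an epistemic part (the mutual information between prediction and model). A minimal sketch of this decomposition, not the method of this project specifically; random logits stand in for actual network outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p, axis=-1):
    # Shannon entropy in nats; small epsilon avoids log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

# Hypothetical setup: M ensemble members predict a categorical
# distribution over C classes for each of N inputs.
M, N, C = 10, 5, 3
logits = rng.normal(size=(M, N, C))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Total predictive uncertainty: entropy of the averaged prediction
mean_probs = probs.mean(axis=0)
total = entropy(mean_probs)

# Aleatoric part: mean entropy of the individual member predictions
# (noise every member agrees is irreducible)
aleatoric = entropy(probs).mean(axis=0)

# Epistemic part: what remains once data noise is accounted for;
# large values flag inputs where the members disagree
epistemic = total - aleatoric
```

Because entropy is concave, the entropy of the mean is never smaller than the mean of the entropies, so the epistemic term is guaranteed non-negative.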

Track:
Academic Track
PhD Duration:
October 17th, 2022 - October 16th, 2025
First Exchange:
October 14th, 2024 - April 13th, 2025