Jihao Andreas Lin
PhD
University of Cambridge
Uncertainty Quantification in Deep Learning

In recent years, deep learning has achieved remarkable success on complex tasks such as computer vision and natural language processing. While deep learning models and algorithms are already used in some industry and consumer applications, a lack of interpretability and trustworthiness obstructs their deployment in sensitive domains such as traffic or medicine. To be interpretable and trustworthy, a deep learning model must be able to quantify the uncertainty of its predictions. In particular, this uncertainty quantification should be well-calibrated, such that the model is confident when its prediction is correct and uncertain when the prediction cannot reasonably be inferred from the observed data (epistemic uncertainty) or when the prediction is inherently noisy (aleatoric uncertainty). Therefore, an overarching research question of interest to me is: How can we create universal function approximators, such as neural networks, with well-calibrated uncertainty estimation?
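
To make the epistemic/aleatoric distinction concrete, the following minimal sketch shows one common way to separate the two, via a deep ensemble whose members each predict a Gaussian mean and variance. This is an illustrative example, not the author's method: the ensemble size M and the placeholder outputs standing in for trained networks are assumptions for the sake of a runnable snippet.

```python
import numpy as np

# Placeholder outputs standing in for M trained ensemble members, each of
# which would predict a mean and a variance for a given input x.
rng = np.random.default_rng(0)
M = 5                                            # hypothetical ensemble size
means = rng.normal(loc=1.0, scale=0.1, size=M)   # per-member predicted means
variances = rng.uniform(0.01, 0.05, size=M)      # per-member predicted variances

# Aleatoric uncertainty: noise the members agree is inherent to the data,
# estimated as the average predicted variance.
aleatoric = variances.mean()

# Epistemic uncertainty: disagreement between members, measured as the
# variance of their predicted means; it shrinks as more data constrains the model.
epistemic = means.var()

# Total predictive variance under a Gaussian mixture approximation.
total = aleatoric + epistemic
print(f"aleatoric={aleatoric:.4f}, epistemic={epistemic:.4f}, total={total:.4f}")
```

Under this decomposition, a well-calibrated model would report high epistemic uncertainty far from the training data and high aleatoric uncertainty where the data itself is noisy.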

Track:
Academic Track
PhD Duration:
October 1st, 2022 - September 30th, 2026
First Exchange:
October 1st, 2025 - September 30th, 2026