Joris Baan
PhD
University of Amsterdam (UvA)
Interpretable uncertainty in NLP systems using human uncertainty

Accurate and reliable representations of uncertainty are crucial for trustworthy NLP systems, for example to defer uncertain predictions to human experts or to convey uncertainty to users so they can better interpret a model's output. However, evaluating uncertainty is not straightforward, partly because uncertainty is rarely directly observable. This project aims to quantify human (data) uncertainty by exploiting multiple annotations per data point, and to leverage it to investigate when and why NLP models are uncertain, and whether their uncertainty estimates are close to those observed in human data. We will develop methods to evaluate and interpret the uncertainty estimates of NLP systems and, simultaneously, methods that incorporate and handle human uncertainty at the instance level.
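
As a rough illustration of the first step described above, the Python sketch below derives an empirical human label distribution from multiple annotations of a single instance and compares it to a model's predictive distribution. The annotation vector, class labels, and model probabilities are hypothetical, and KL divergence and total variation distance are just two possible comparison measures; this is a minimal sketch under those assumptions, not the project's actual methodology.

```python
import numpy as np
from scipy.stats import entropy

def human_label_distribution(annotations, num_classes):
    """Empirical label distribution from multiple annotations of one instance."""
    counts = np.bincount(annotations, minlength=num_classes)
    return counts / counts.sum()

# Hypothetical example: 5 annotators label an NLI instance
# (0 = entailment, 1 = neutral, 2 = contradiction)
annotations = np.array([1, 1, 2, 1, 0])
p_human = human_label_distribution(annotations, num_classes=3)

# Hypothetical model predictive distribution (e.g., a softmax output)
p_model = np.array([0.10, 0.70, 0.20])

# Two ways to compare the distributions:
kl = entropy(p_human, p_model)                  # KL(p_human || p_model)
tvd = 0.5 * np.abs(p_human - p_model).sum()     # total variation distance
print(f"human: {p_human}, KL: {kl:.3f}, TVD: {tvd:.3f}")
```

Aggregating annotations into a soft label distribution like this, rather than a single majority-vote label, is what makes instance-level human uncertainty observable and comparable to a model's own uncertainty estimates.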

Track:
Academic Track
PhD Duration:
October 1st, 2021 - September 30th, 2025
First Exchange:
February 1st, 2022 - May 31st, 2022