Interpretable uncertainty in NLP systems using human uncertainty
Joris Baan (Ph.D. Student)
Accurate and reliable representations of uncertainty are crucial for trustworthy NLP systems: for example, to defer uncertain predictions to human experts, or to convey uncertainty to users and help them interpret predictions. However, evaluating uncertainty is not straightforward, partly because uncertainty is rarely directly observable. This project aims to quantify human (data) uncertainty by exploiting multiple annotations per data point, and to leverage it to investigate when and why NLP models are uncertain, and whether their uncertainty estimates are close to those observed in human data. We will develop methods to evaluate and interpret the uncertainty estimates of NLP systems and, simultaneously, methods that incorporate and deal with human uncertainty at the instance level.
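As a minimal sketch of the underlying idea (not the project's actual method), one can turn multiple annotations per instance into a human label distribution and compare it to a model's predictive distribution. The labels, annotations, and model probabilities below are illustrative placeholders, and total variation distance is just one possible comparison measure.

```python
from collections import Counter

import numpy as np


def human_label_distribution(annotations, labels):
    """Estimate a per-instance human label distribution from multiple
    annotator judgements (simple relative frequencies)."""
    counts = Counter(annotations)
    return np.array([counts[l] for l in labels], dtype=float) / len(annotations)


def total_variation_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()


# Hypothetical NLI-style instance annotated by five people.
labels = ["entailment", "neutral", "contradiction"]
annotations = ["neutral", "neutral", "entailment", "neutral", "contradiction"]

human_dist = human_label_distribution(annotations, labels)  # [0.2, 0.6, 0.2]
model_dist = np.array([0.05, 0.90, 0.05])                   # hypothetical model output

print("human:", human_dist)
print("model:", model_dist)
print("TVD:  ", total_variation_distance(human_dist, model_dist))
```

A low distance between the two distributions would indicate that the model's uncertainty estimate mirrors the disagreement observed among human annotators for that instance.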
Primary Host: Raquel Fernández (University of Amsterdam)
Exchange Host: Barbara Plank (LMU Munich & IT University of Copenhagen)
PhD Duration: 01 October 2021 - 30 September 2025
Exchange Duration: 01 February 2022 - 31 May 2022 (Ongoing)