Towards Trustworthy Amortized Bayesian Inference with Deep Learning
Marvin Schmitt (Ph.D. Student)
Recent advances in probabilistic deep learning have given rise to amortized Bayesian inference (ABI), which frames Bayesian inference as a two-stage approach: (i) a lengthy upfront training phase, in which generative neural networks learn the posterior as a conditional probability distribution; and (ii) an inference phase, in which the trained networks approximate the model’s posterior distribution almost instantaneously for new data. However, current ABI methods lack the sampling guarantees of gold-standard Markov chain Monte Carlo (MCMC) samplers. While ABI research has primarily focused on improving accuracy and exploring new generative models, the assessment of algorithmic trustworthiness has received little attention. This project aims to bridge the fields of (i) computational Bayesian statistics, which offers a large body of research on trustworthy samplers, and (ii) deep learning, with its promising repertoire of fast, bleeding-edge neural density estimators. If successful, this project will contribute to the development of neural samplers equipped with trustworthy diagnostics. These diagnostics will alert users when the validity of their results is potentially compromised and, under reasonable conditions, may even enable provably robust samplers.
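The two-stage workflow described above can be illustrated with a minimal sketch that is not the project's actual method: it assumes a toy conjugate-Gaussian simulator and uses a small mixture density network as a stand-in for the generative neural network. All function and variable names below are illustrative assumptions.

```python
# Minimal ABI sketch: (i) train a conditional density estimator q(theta | y) on
# simulated pairs, (ii) draw amortized posterior samples for new data instantly.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n):
    """Draw (theta, y) pairs from the joint model p(theta) p(y | theta)."""
    theta = torch.randn(n, 1)            # prior: theta ~ N(0, 1)
    y = theta + 0.5 * torch.randn(n, 1)  # likelihood: y | theta ~ N(theta, 0.5^2)
    return theta, y

class MDN(nn.Module):
    """Conditional density estimator q(theta | y) as a K-component Gaussian mixture."""
    def __init__(self, K=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3 * K))

    def forward(self, y):
        logits, mu, log_sigma = self.net(y).chunk(3, dim=-1)
        return logits, mu, log_sigma.clamp(-5, 5)

    def log_prob(self, theta, y):
        logits, mu, log_sigma = self(y)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_w = torch.log_softmax(logits, dim=-1)
        return torch.logsumexp(log_w + comp.log_prob(theta), dim=-1)

    def sample(self, y, n):
        logits, mu, log_sigma = self(y.expand(n, -1))
        idx = torch.distributions.Categorical(logits=logits).sample().unsqueeze(-1)
        return torch.normal(mu.gather(-1, idx), log_sigma.exp().gather(-1, idx))

# (i) Upfront training phase: fit q(theta | y) on simulated pairs.
model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    theta, y = simulate(256)
    loss = -model.log_prob(theta, y).mean()  # maximum likelihood over the joint model
    opt.zero_grad()
    loss.backward()
    opt.step()

# (ii) Inference phase: amortized posterior draws for a new observation, no refitting.
y_obs = torch.tensor([[1.2]])
draws = model.sample(y_obs, n=1000)
print(draws.mean().item(), draws.std().item())
```

For this toy model the exact posterior is available analytically (mean 0.8 * y_obs, variance 0.2), so the amortized draws can be checked directly; in realistic simulation-based settings no such reference exists, which is precisely why the trustworthiness diagnostics targeted by this project are needed.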
Primary Host: Paul-Christian Bürkner (TU Dortmund University & University of Stuttgart)
Exchange Host: Aki Vehtari (Aalto University & Finnish Centre for AI)
PhD Duration: 01 December 2021 - 30 November 2024
Exchange Duration: 01 March 2024 - 31 August 2024 (ongoing)