Quoc-Anh Nguyen
Bayesian approaches to active learning, optimization, and experimental design provide principled frameworks for data-efficient decision-making. However, their widespread adoption is hindered by scalability challenges, as traditional Bayesian inference and sequential selection strategies can be computationally intensive in large-scale settings. Recent advances in amortized methods address this limitation by learning offline policies that map observed data to query decisions efficiently at test time.
While amortization improves scalability, it introduces new robustness challenges. Policies trained offline may degrade when deployed on data that differs from the training distribution. Beyond amortization, Bayesian decision-making pipelines are also sensitive to prior assumptions and model misspecification, which can undermine their reliability in practical settings.
This PhD project aims to develop robust and scalable methodologies that unify Bayesian active learning, optimization, and experimental design. We will explore semi-amortized and policy-based approaches that combine the speed of offline inference with the adaptability of online updates. In parallel, the project will investigate mechanisms for incorporating human knowledge and domain expertise into the decision-making loop through interactive feedback or expert-guided constraints. By enabling meaningful interaction between human experts and learning systems, we aim to improve interpretability, mitigate model bias, and increase resilience to data distribution shifts.
The outcome will be a suite of methods that operate reliably under real-world conditions, bridging the gap between scalable Bayesian learning and trustworthy human-centered AI. This work aligns with the broader vision of building intelligent systems that not only learn efficiently but also collaborate effectively with scientific and engineering experts.