Markel Zubia
Sequential decision making under uncertainty is a central challenge across computer science, artificial intelligence, and control theory. A principled way of addressing this challenge is probabilistic model checking, which provides formal methods for reasoning about the correctness, performance, and safety of stochastic systems. This thesis investigates the theoretical foundations of probabilistic model checking across a broad spectrum of models, including discrete-time Markov chains, Markov decision processes, stochastic games, probabilistic automata, and hidden Markov models. Our work emphasizes the rigorous analysis of algorithms and models that capture both quantitative performance guarantees and qualitative system properties.

Particular attention is devoted to the notions of robustness and resilience, which extend classical verification by accounting for adversarial or uncertain environments. Robustness asks how strategies can be designed to perform well under worst-case transition dynamics, while resilience quantifies the degree of disturbance required to invalidate a strategy or property. These notions allow us to move beyond idealized system models and to address the fragility that often arises in practical verification scenarios.

Furthermore, we explore the integration of probably approximately correct (PAC) learning techniques into probabilistic verification, bridging the gap between formal guarantees and data-driven approximations. Together, these directions aim at a more comprehensive theory of verification and synthesis that is both mathematically rigorous and practically applicable to complex stochastic systems.
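To fix intuitions, the two central notions above can be sketched for a reachability objective; the notation (uncertainty set \(\mathcal{P}\), perturbation ball \(B_{\varepsilon}\), threshold \(\lambda\)) is illustrative and not drawn verbatim from the thesis.

% Robust value: the best guarantee a strategy sigma can enforce against the
% worst-case choice of transition kernel P from an uncertainty set \mathcal{P}.
\[
  V_{\mathrm{rob}}(s) \;=\; \sup_{\sigma}\, \inf_{P \in \mathcal{P}}\, \Pr^{\sigma}_{P}\bigl(s \models \Diamond T\bigr)
\]
% Resilience: the smallest perturbation radius epsilon for which some kernel
% P' in the ball B_epsilon(P) around the nominal kernel P pushes the
% satisfaction probability of property phi below the required threshold lambda.
\[
  \mathrm{res}(\sigma, \varphi) \;=\; \inf\bigl\{\, \varepsilon \ge 0 \;:\; \exists\, P' \in B_{\varepsilon}(P),\; \Pr^{\sigma}_{P'}(\varphi) < \lambda \,\bigr\}
\]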
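Similarly, the flavour of the PAC-style guarantees can be conveyed by the standard Hoeffding bound for estimating a satisfaction probability from i.i.d. sampled runs; this is the textbook bound, given only as an illustration of the kind of guarantee meant.

% With n i.i.d. sample runs, the empirical satisfaction probability \hat{p}_n
% is epsilon-accurate with confidence at least 1 - delta once n is large enough:
\[
  n \;\ge\; \frac{1}{2\varepsilon^{2}} \ln\frac{2}{\delta}
  \quad\Longrightarrow\quad
  \Pr\bigl(\,|\hat{p}_n - p| \le \varepsilon\,\bigr) \;\ge\; 1 - \delta
\]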