Provable Robustness of Intelligent Systems for Realistic Threats and Tasks
Tobias Lorenz (Ph.D. Student)
The trustworthiness of intelligent systems is paramount for their deployment in many real-world environments. A major aspect of trustworthy systems is ensuring their reliability under adverse conditions, even in the presence of malicious adversaries. Unfortunately, there is an abundance of techniques to evade and mislead modern machine learning approaches, while only very few defenses are known. Recently, certification and verification methods have emerged that provide deterministic bounds or statistical guarantees ensuring the stability of a model's output under small input perturbations. However, there is still a large disconnect between the theoretical results obtained to date and the requirements of real deployment scenarios. To bridge this gap, we focus on the following key challenges: (1) Current techniques predominantly target existing network architectures, which are optimized for accuracy rather than reliability. More reliable systems require network models designed and trained with robustness and performance given equal weight. (2) Typical results are given for simple, synthetic perturbations and lack real-world relevance. Hence, we will investigate more relevant, semantic variations that still allow for thorough analysis. (3) We complement our analysis with an experimental part that encompasses techniques from different angles and expands the evaluation across multiple domains, in particular those where certification and verification matter most.
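To make the notion of a deterministic robustness bound concrete, the following is a minimal sketch (not the project's actual method) of interval bound propagation for a single linear classifier: every input within an L-infinity ball of radius eps is mapped to elementwise output intervals, and a prediction is certified if the true class's lower bound exceeds every other class's upper bound. The function names and the toy weights are illustrative assumptions.

```python
import numpy as np

def interval_bounds_linear(W, b, x, eps):
    """Propagate the L-infinity ball {x + d : ||d||_inf <= eps}
    through the linear layer W @ x + b.
    Returns elementwise (lower, upper) bounds on the logits."""
    center = W @ x + b
    # Worst-case deviation of each output coordinate: sum_j |W_ij| * eps
    radius = np.abs(W).sum(axis=1) * eps
    return center - radius, center + radius

def certify(W, b, x, eps, label):
    """Sound (for deeper networks generally incomplete) check that no
    perturbation with ||d||_inf <= eps can change the predicted class:
    the labeled class's lower bound must beat all other upper bounds."""
    lo, hi = interval_bounds_linear(W, b, x, eps)
    return bool(lo[label] > np.delete(hi, label).max())

# Toy example: identity classifier on a 2D input.
W = np.eye(2)
b = np.zeros(2)
x = np.array([1.0, 0.0])
print(certify(W, b, x, eps=0.2, label=0))  # small radius: certified
print(certify(W, b, x, eps=0.6, label=0))  # large radius: not certified
```

Treating each output bound independently is what makes the check sound but conservative; tighter certification methods trade this looseness against higher analysis cost.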
Mario Fritz (CISPA Helmholtz Center for Information Security, Saarland University)
Marta Kwiatkowska (University of Oxford)
01 April 2021 - 31 July 2025
01 May 2023 - 31 October 2023 - Ongoing