Diego Miguel Lozano

PhD
ELLIS Alicante Unit Foundation
University of Alicante
Towards Trustworthy MLLMs: Safety and Security of Multimodal Large Language Models (MLLMs)

As Multimodal Large Language Models (MLLMs) become increasingly common in real-world applications, it is crucial to reveal, evaluate, and mitigate their possible safety and security risks, biases, and misalignments. These models, which integrate multiple data modalities, can inadvertently propagate existing biases or generate harmful outputs if not properly managed. Furthermore, their multimodal nature, together with their tight integration into broader systems, increases the attack surface, opening the door for malicious actors to carry out a wide range of attacks aimed at disrupting services, stealing information, or generating misinformation, spam, and phishing content, among others. This PhD thesis aims to contribute to the topic of Trustworthy MLLMs by identifying vulnerabilities specific to MLLMs, developing robust frameworks for assessing their safety, and proposing concrete, actionable ways of mitigating their risks. In sum, this thesis paves the way towards the development of trustworthy MLLMs.

Track:
Academic Track
PhD Duration:
November 1st, 2025 - September 30th, 2029