Giorgio Racca

PhD
University of Copenhagen
Trustworthy Language Generation

Large Language Models (LLMs) have emerged as a dominant paradigm in modern machine learning, yet our theoretical understanding of them still lags behind that of traditional predictive models. Inspired by learning theory, a growing body of research has recently begun to explore the theoretical foundations of language generation. This newly introduced framework has opened the door to the formal study of practically important phenomena, such as the tension between hallucination and mode collapse. Our goal is to advance this emerging line of research by narrowing the theoretical gap between traditional predictive machine learning and large-scale generative models. Special emphasis will be placed on the notion of trustworthiness, particularly in relation to privacy guarantees. Alongside this theoretical investigation, complementary applied topics, such as LLM fine-tuning and alignment, will be explored during the exchange.

Track:
Academic Track
PhD Duration:
September 1st, 2025 - August 31st, 2028