Four PhD Positions in Natural Language Processing, AI and LLM Security (CPH) at Aalborg University
The positions are open for appointment from May 1, 2026, or soon thereafter; starting dates after the summer of 2026 are also possible. Each position has a duration of three years. The positions are at AAU’s Copenhagen campus, located at the waterfront in Sydhavnen.
The positions are hosted within the Natural Language Processing (NLP) research group at AAU Copenhagen and are supervised by Professor Johannes Bjerva. The NLP group is a rapidly expanding research environment with strong activities in multilingual NLP, LLM security, linguistically informed approaches to NLP, factuality in LLMs, and socially sustainable AI. The group is supported by several major national and international research grants and is currently scaling up its research capacity: in 2026, we are expanding with four postdocs, who are expected to work closely with the PhD students hired in this call. As a PhD student, you will join a highly collaborative, international, and ambitious research community, with access to state-of-the-art computational resources. The positions include the possibility of one or more extended research stays abroad to facilitate knowledge exchange with top research environments worldwide. The positions are funded by a DFF Sapere Aude project (“Building TRUST in Text: Linguistically Motivated Language Model Detection”) and an NNF Ascending Data Science Investigator project (“LM2-SEC: Linguistically Motivated Language Model Security”).
Your work tasks
As a PhD student, you will conduct research at the intersection of Security, Safety, and Privacy in Natural Language Processing and AI, with a particular focus on Large Language Models (LLMs) and LLM-based systems.
Possible research directions include (but are not limited to):
Detection and mitigation of backdoors, data poisoning, and adversarial inputs in LLMs
Detection of LLM-generated text, in particular in situations where a model has been compromised but is generating benign outputs, e.g. via computational linguistic analysis of output-space distributions
Linguistically motivated methods for analysing and securing LLM behaviour, including, e.g., model misbehaviour and memorization
Analysis of vulnerabilities in low-resource language settings, and the effect of typological diversity on the security landscape in multilingual settings
Formal semantic or symbolic methods for monitoring, evaluating, and improving LLM robustness
The positions are embedded in an active and rapidly growing research environment, including ongoing projects on, e.g., AI security, linguistically motivated NLP, and knowledge-graph-grounded factuality in LLMs. The PhD students will work both independently and collaboratively within the group, and will have opportunities to engage with national and international partners. As part of the projects funding these positions, we plan to host internationally leading researchers at AAU to provide further opportunities for knowledge exchange and collaboration.