Kim Isabella Zierahn
This PhD thesis explores approaches for enhancing human-chatbot relationships. It investigates whether and how chatbots based on large language models (LLMs) should be personalized to their users' psychological profiles to enable more adaptive, trustworthy, and emotionally relevant interactions. The thesis introduces computational methods for user modeling based on linguistic features, interaction history, and self-reported traits, while incorporating privacy-preserving techniques, such as local inference, to protect sensitive user information. From a technical perspective, the thesis will explore fine-tuning strategies, prompt-based adaptations, and the potential development of personas for aligning LLM behavior with individual user profiles, as well as mechanisms for automatically adapting the communication style of an LLM-based conversational agent or chatbot to make interactions more relevant, trustworthy, and effective. As part of this thesis, we plan to carry out user studies assessing how personalization affects trust, engagement, perception, and task efficacy, with a potential application to mental health scenarios. While long-term dynamics, such as changes in user behavior, the potential for dependency, and the erosion of cognitive skills such as critical thinking, will be discussed conceptually, long-term empirical evaluations lie beyond the time frame of the thesis. The thesis addresses the duality of enhanced human-chatbot relationships: while personalization of LLM-based chatbot interactions can build trust and improve outcomes in sensitive applications such as mental health, it simultaneously creates risks of manipulation, dependency, and impaired critical thinking and metacognitive awareness. This work contributes new approaches for building human-centric AI systems and offers practical design guidance for deploying such systems in sensitive, high-impact domains such as mental health.