Learning from Context with Generative Models

Ivona Najdenkoska (Ph.D. Student)

Empowered by large-scale pre-training on huge web-scraped datasets, large language models (LLMs) have made major advances in recent years. These models demonstrate fascinating emergent abilities, particularly learning from a few examples presented in the prompt with no gradient updates, also known as in-context learning. This paradigm has expanded from the natural language processing domain to visual language (VL) models, with Flamingo and Frozen as notable examples. However, such models rely heavily on incorporating very large, proprietary LLM backbones, ranging from 7 up to 70 billion parameters, making them inefficient and inaccessible for many individuals and organizations. In this project we investigate whether the training process or the model scale is the main factor behind the emergence of in-context learning in VL models. Additionally, we explore whether this learning paradigm can be transferred to other VL tasks, such as image generation.
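As a minimal illustration of the in-context learning setup described above, the sketch below builds a few-shot prompt and lets a frozen language model complete a new query purely from the demonstrations, with no gradient updates. It uses the Hugging Face transformers library with "gpt2" as a small stand-in model (an assumption for illustration only, not one of the VL backbones studied in the project), and the sentiment-classification prompt is a hypothetical example.

```python
# Minimal sketch of in-context learning: the model sees a few
# input-output examples in the prompt and completes a new query
# without any fine-tuning. "gpt2" is a small stand-in model, not
# a VL backbone such as Flamingo or Frozen.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Few-shot prompt: demonstrations followed by an unanswered query.
prompt = (
    "Review: The film was a delight.\nSentiment: positive\n\n"
    "Review: I walked out halfway through.\nSentiment: negative\n\n"
    "Review: A stunning, heartfelt story.\nSentiment:"
)

inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding of one new token; the frozen model is expected to
# continue the pattern established by the in-context examples.
outputs = model.generate(
    **inputs,
    max_new_tokens=1,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

The same recipe carries over to VL models conceptually: the demonstrations become interleaved image-text pairs rather than text-only examples, while the model parameters still remain frozen.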

Primary Advisor: Marcel Worring (University of Amsterdam)
Industry Advisor: Xiantong Zhen (University of Amsterdam & Inception Institute of Artificial Intelligence)
PhD Duration: 01 April 2020 - 17 August 2024