Few-shot Learning with Pretrained Language Models

Abdullatif Köksal (Ph.D. Student)

Large pretrained language models (PLMs) perform well on many NLP tasks without supervised training. This is best demonstrated by GPT-3, which can translate from several languages into English given just 64 contextual examples. Moreover, contextual examples enable PLMs to perform multiple tasks without any parameter updates. Motivated by these benefits, we will analyze the few-shot learning capabilities of PLMs across a wide variety of NLP tasks. We will systematically explore which factors contribute to the strong performance of PLMs with contextual examples. We will adapt and propose active learning selection strategies to determine which samples help the model perform better. Furthermore, we will compare the in-context learning and prompt finetuning paradigms with few-shot examples.
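The sketch below illustrates the in-context learning setup described above: task demonstrations are concatenated into a prompt and the PLM predicts the answer for a new query without any parameter updates. It is a minimal example assuming a Hugging Face causal language model; the model name, translation pairs, and prompt format are illustrative placeholders, not part of the project itself.

```python
# Minimal sketch of few-shot in-context learning with a pretrained causal LM.
# Model name and demonstration pairs are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Few-shot prompt: task demonstrations followed by a new query.
# The model's parameters are never updated; the demonstrations alone
# condition its prediction.
demonstrations = [
    ("Das Haus ist groß.", "The house is big."),
    ("Ich lese ein Buch.", "I am reading a book."),
]
query = "Der Hund schläft."

prompt = "Translate German to English.\n"
for src, tgt in demonstrations:
    prompt += f"German: {src}\nEnglish: {tgt}\n"
prompt += f"German: {query}\nEnglish:"

output = generator(prompt, max_new_tokens=10, do_sample=False)
print(output[0]["generated_text"])
```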

Primary Host: Hinrich Schütze (LMU Munich)
Exchange Host: Anna Korhonen (University of Cambridge)
PhD Duration: 01 January 2022 - 01 January 2025
Exchange Duration: 01 January 2024 - 01 July 2024 (ongoing)