Guillem Ramirez Santos
PhD
University of Edinburgh
Studying generalisation using prompting for data augmentation

For the last five years, a new level of performance in NLP has been achieved by unsupervised pretraining of large language models (PLMs) followed by fine-tuning on a downstream task. Recently, however, this has been changing, as the community shifts its attention towards prompting. The success of prompting shows that some PLMs have a strong generalisation ability: they can understand and solve a task from a handful of examples or a brief description. I want to study a data augmentation method built on this generalisation ability; I believe this task will provide interesting insights into the degree to which PLMs are able to generalise and extract commonalities between sentences.
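As a rough illustration only (not the specific method under study), the sketch below shows what prompting a PLM for data augmentation can look like: a brief task description plus two demonstrations ask the model to paraphrase a labelled sentence, yielding extra training examples. The model choice, prompt, and helper name are placeholder assumptions.

# Illustrative sketch: few-shot prompting a PLM to paraphrase sentences
# as additional training data. Model and prompt are assumptions, not the
# method described in the abstract above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder PLM

def augment(sentence: str, n: int = 3) -> list[str]:
    # Brief task description plus two demonstrations ("few-shot" prompt).
    prompt = (
        "Rewrite each sentence so it keeps the same meaning.\n"
        "Sentence: The film was great.\nRewrite: I really enjoyed the movie.\n"
        "Sentence: The service was slow.\nRewrite: We waited a long time to be served.\n"
        f"Sentence: {sentence}\nRewrite:"
    )
    outputs = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=n,
        do_sample=True,
        temperature=0.9,
        return_full_text=False,  # keep only the newly generated continuation
    )
    # Cut each continuation at the first newline to get a single rewrite.
    return [o["generated_text"].split("\n")[0].strip() for o in outputs]

print(augment("The battery lasts all day."))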

Track: Academic Track
PhD Duration: September 12th, 2022 - August 31st, 2026
First Exchange: January 1st, 2024 - July 1st, 2024