P-Tuning: A parameter-efficient tuning method to improve LLM performance, by Zenodia Charpy

As more and more LLMs become available, industries need techniques for solving real-world natural language tasks. Model prompting methods have been shown to achieve good zero- and few-shot performance with LLMs and can help achieve high-quality results on various downstream natural language processing (NLP) tasks. However, prompting has its limitations. In this talk, we show how P-Tuning, a prompt learning method, can be adapted to low-resource language settings. We use an improved version of P-Tuning implemented in NVIDIA NeMo that enables continuous multitask learning of virtual prompts. In particular, we focus on adapting our English P-Tuning workflow to Swedish.
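
For readers unfamiliar with the technique, the core idea of P-Tuning is to learn a small set of continuous "virtual prompt" embeddings, produced by a lightweight prompt encoder, that are prepended to the frozen LLM's input embeddings; only the encoder is trained. The PyTorch sketch below is a minimal illustration of that idea, not NeMo's actual implementation; the names (PromptEncoder, prepend_virtual_prompts) are hypothetical, and the MLP reparameterization is a common simplification of the LSTM encoder used in the original P-Tuning paper.

```python
# Minimal conceptual sketch of P-Tuning: trainable virtual prompt
# embeddings are prepended to the frozen model's input embeddings.
import torch
import torch.nn as nn


class PromptEncoder(nn.Module):
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # One learnable seed embedding per virtual token.
        self.seed = nn.Embedding(num_virtual_tokens, hidden_size)
        # A small MLP reparameterizes the seeds into prompt embeddings,
        # which tends to stabilize training.
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )
        self.register_buffer("ids", torch.arange(num_virtual_tokens))

    def forward(self, batch_size: int) -> torch.Tensor:
        # (num_virtual_tokens, hidden) -> (batch, num_virtual_tokens, hidden)
        prompts = self.mlp(self.seed(self.ids))
        return prompts.unsqueeze(0).expand(batch_size, -1, -1)


def prepend_virtual_prompts(encoder: PromptEncoder,
                            input_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend virtual prompt embeddings to a batch of token embeddings."""
    prompts = encoder(input_embeds.size(0))
    return torch.cat([prompts, input_embeds], dim=1)
```

During tuning, the concatenated sequence is fed through the frozen transformer as usual, and gradients flow only into the prompt encoder, which is what makes the method parameter-efficient: the base LLM's weights are never updated.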

Zenodia Charpy is an experienced deep learning data scientist at NVIDIA. Her area of expertise is training and deploying very large language models, with a focus on tackling challenges for non-English and low-resource languages such as Swedish, Danish, Norwegian, and many others. In addition, she researches parameter-efficient tuning techniques that increase the performance of LLMs while ensuring the factual accuracy of their answers.

Recorded at the GAIA Conference 2023 on April 5 at Svenska Mässan in Gothenburg, Sweden.