Retrieval-based Language Models and Applications

2023/05/31 (Wed) 12:00 (JST)

浅井明里 / Akari Asai (University of Washington)

[Website]

Akari Asai is a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Prof. Hannaneh Hajishirzi. Her research lies at the intersection of natural language processing and machine learning. Her recent work focuses on question answering, multilingual NLP, and NLP efficiency. She received the IBM Fellowship in 2022 and the Nakajima Foundation Fellowship in 2019. Prior to UW, she obtained a B.E. degree in Electrical Engineering and Computer Science from the University of Tokyo.

Abstract

Large language models (LMs) have demonstrated remarkable capabilities on a wide range of natural language processing (NLP) tasks. However, relying entirely on their parameters to encode a vast amount of world knowledge requires an infeasibly large number of parameters and hence massive compute. Moreover, they often fail to learn long-tail knowledge, and their knowledge can become outdated, leading to prevalent issues such as hallucination. To overcome these limitations, there is growing interest in retrieval-based LMs, which combine a non-parametric datastore (such as text chunks from an external corpus) with their parametric counterparts. In this talk, I first introduce our ACL 2023 paper, which examines the effectiveness of retrieval-based LMs on the long tail, and then provide a comprehensive and coherent overview of recent developments in retrieval-based LMs, on which we will conduct a tutorial at ACL 2023. (I will briefly introduce our paper to appear at ACL 2023, and then present a condensed version of our upcoming ACL tutorial.)
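To make the idea concrete, the retrieve-then-read pattern behind many retrieval-based LMs can be sketched in a few lines. This is a minimal illustration, not the speaker's method: the toy datastore, the bag-of-words cosine scoring, and the prompt format below are all assumptions made for the example; real systems use dense retrievers and an actual LM to generate the answer.

```python
# Minimal sketch of a retrieval-based LM pipeline (illustrative only).
# Assumptions: a tiny in-memory datastore, bag-of-words cosine similarity,
# and a simple "context + question" prompt; none of these come from the talk.
from collections import Counter
import math

datastore = [
    "Retrieval-based LMs combine a parametric model with a non-parametric datastore.",
    "The datastore typically holds text chunks from an external corpus.",
    "Retrieved chunks are prepended to the prompt at inference time.",
]

def bow(text):
    # Crude bag-of-words representation; a real system would use a
    # learned dense encoder instead.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def retrieve(query, k=2):
    # Score every chunk in the datastore against the query and keep the top k.
    q = bow(query)
    return sorted(datastore, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    # The retrieved (non-parametric) context is prepended to the query,
    # which would then be fed to the parametric LM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What does the datastore hold?"))
```

Because the datastore is consulted at inference time, knowledge can be updated by editing the corpus rather than retraining the model, which is one of the main motivations discussed in the talk.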

Note: The talk will be given in Japanese.

[Slides] [Paper] (ACL 2023) [Tutorial] (ACL 2023)

Mailing list registration: To receive announcements about the NLP Colloquium, including the participation URL, please register for the mailing list.

Mailing list registration form

[Back to top page]