Abstract
With the emergence of powerful models such as ChatGPT, generative models for NLP tasks have attracted increasing attention. Despite their success, these models exhibit an undesired behavior called hallucination: generating text that is nonsensical or unfaithful to the provided source content. In this talk, based on the review paper our lab published, I will give a brief overview of hallucination in NLG tasks and then delve into related work on dialogue systems.
※ The talk will be given in Japanese.