When are Lemons Purple? The Concept Association Bias of Vision-Language Models

📅 2024/01/24 (Wed) 12:00–13:00 (JST)

@ online

🗣️ 山田祐太朗 / Yutaro Yamada (Yale University)

PhD student at Yale University
[Website]

📝 When are Lemons Purple? The Concept Association Bias of Vision-Language Models

Abstract: Large-scale vision-language models such as CLIP have shown impressive performance on zero-shot image classification and image-to-text retrieval. However, this performance does not carry over to tasks that require a finer-grained correspondence between vision and language, such as Visual Question Answering (VQA). We investigate why this is the case and report an interesting phenomenon in vision-language models, which we call the Concept Association Bias (CAB), as a potential cause of the difficulty of applying these models to VQA and similar tasks. We find that models with CAB tend to treat the input as a bag of concepts and attempt to fill in missing concepts cross-modally, leading to unexpected zero-shot predictions. We demonstrate CAB by showing that CLIP's zero-shot classification performance suffers greatly when there is a strong concept association between an object (e.g., eggplant) and an attribute (e.g., the color purple). We also show that the strength of CAB predicts performance on VQA. We observe that CAB is prevalent in vision-language models trained with contrastive losses, even when autoregressive losses are jointly employed, whereas a model that relies solely on an autoregressive loss exhibits minimal or no signs of CAB.
[Video] [Slides] [Paper] (EMNLP 2023)
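As a toy sketch of the zero-shot evaluation setup described in the abstract (all embedding vectors below are made up for illustration; a real experiment would use CLIP's actual image and text encoders):

```python
# Toy illustration of how a Concept Association Bias (CAB) probe can be set up.
# Hypothetical hand-crafted embeddings stand in for CLIP's encoders here;
# the classification rule itself (argmax of cosine similarity) matches
# CLIP-style zero-shot classification.
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_emb, prompt_embs):
    # Pick the prompt whose text embedding is most similar to the image embedding.
    sims = {label: cosine(image_emb, emb) for label, emb in prompt_embs.items()}
    return max(sims, key=sims.get)

# Made-up embeddings: axis 0 ~ "lemon" concept, axis 1 ~ "purple" concept.
lemon = [1.0, 0.0, 0.0]
eggplant = [0.2, 0.9, 0.0]            # eggplant strongly associated with "purple"
purple_lemon_image = [0.6, 0.8, 0.0]  # image mixing {lemon, purple} concepts

prompts = {
    "a photo of a lemon": lemon,
    "a photo of an eggplant": eggplant,
}

# Under CAB, the model treats the image as a bag of concepts {lemon, purple},
# and the "purple" attribute pulls the prediction toward its associated object.
print(zero_shot_classify(purple_lemon_image, prompts))  # → "a photo of an eggplant"
```

In this toy example the "purple" component of the image embedding dominates the similarity, so the classifier picks the eggplant prompt even though the object is a lemon, mirroring the failure mode the talk describes.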

※ The talk will be given in Japanese.

🏃‍♀️ How to participate

The Zoom URL for participation is distributed via the mailing list.
No other promotional posts are sent, and you can leave the list at any time.

Mailing list registration form