Fully Authentic Visual Question Answering Dataset from Online Communities

Chongyan Chen*, Mengchen Liu, Noel C. Codella, Yunsheng Li, Lu Yuan, Danna Gurari

Abstract

Visual Question Answering (VQA) entails answering questions about images. We introduce the first VQA dataset in which all contents originate from an authentic use case. Sourced from online question answering community forums, we call it VQAonline. We characterize this dataset and how it relates to eight mainstream VQA datasets. Observing that answers in our dataset tend to be much longer (i.e., a mean of 173 words) and so are incompatible with standard VQA evaluation metrics, we instead evaluate six state-of-the-art VQA models on VQAonline using popular metrics for longer-text evaluation and report where the models struggle most. Finally, we analyze which evaluation metrics align best with human judgments. We publicly share the dataset at: https://vqaonline.github.io/.
