Quality Assured: Rethinking Annotation Strategies in Imaging AI
Tim Rädsch*, Annika Reinke, Vivienn Weru, Minu D. Tizabi, Nicholas Heller, Fabian Isensee, Annette Kopp-Schneider, Lena Maier-Hein*
Abstract
"[width=1]figures/fig1l owr es.png Figure 1: Research Questions (RQs) tackled in this work. Based on 57,648 instance segmentation masks annotated by 924 annotators and 34 quality assurance (QA) workers from five different annotation providers, we (1) compared the effectiveness of generating high-quality annotations between annotation companies and Amazon Mechanical Turk and (2) investigated the effects of annotation companies’ internal QA and (3) real-world image characteristics on the annotation quality. This paper does not describe a novel method. Instead, it studies an essential foundation for reliable benchmarking and ultimately real-world application of AI-based image analysis: generating high-quality reference annotations. Previous research has focused on crowdsourcing as a means of outsourcing annotations. However, little attention has so far been given to annotation companies, specifically regarding their internal quality assurance (QA) processes. Therefore, our aim is to evaluate the influence of QA employed by annotation companies on annotation quality and devise methodologies for maximizing data annotation efficacy. Based on a total of 57,648 instance segmented images obtained from a total of 924 annotators and 34 QA workers from four annotation companies and Amazon Mechanical Turk (MTurk), we derived the following insights: (1) Annotation companies perform better both in terms of quantity and quality compared to the widely used platform MTurk. (2) Annotation companies’ internal QA only provides marginal improvements, if any. However, improving labeling instructions instead of investing in QA can substantially boost annotation performance. (3) The benefit of internal QA depends on specific image characteristics. Our work could enable researchers to derive substantially more value from a fixed annotation budget and change the way annotation companies conduct internal QA."
Related Material
[pdf]
[supplementary material]
[DOI]