Search results for author Karsten Ingmar Paul (1–2 of 2)
Journal Articles
Computational Linguistics (2011) 37 (4): 699–725.
Published: 01 December 2011
Abstract
Recent discussions of annotator agreement have centered mostly on its calculation and interpretation, and on the correct choice of indices. Although these discussions are important, they consider only the “back end” of the story, namely, what to do once the data are collected. Just as important, in our opinion, is knowing how agreement is reached in the first place and which factors influence coder agreement as part of the annotation process or setting, as this knowledge can provide concrete guidelines for the planning and setup of annotation projects. To investigate whether there are factors that consistently affect annotator agreement, we conducted a meta-analytic investigation of annotation studies reporting agreement percentages. Our meta-analysis synthesized factors reported in 96 annotation studies from three domains (word-sense disambiguation, prosodic transcription, and phonetic transcription) and was based on a total of 346 agreement indices. The analysis identified seven factors that influence reported agreement values: annotation domain, number of categories in a coding scheme, number of annotators in a project, whether annotators received training, the intensity of annotator training, the annotation purpose, and the method used for calculating percentage agreement. Based on these results, we develop practical recommendations for the assessment, interpretation, calculation, and reporting of coder agreement. We also briefly discuss theoretical implications for the concept of annotation quality.
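The abstract notes that even the method used for calculating percentage agreement affects reported values. As a minimal illustration of what such agreement indices measure (not the study's own procedure), the following Python sketch contrasts raw percentage agreement with a standard chance-corrected index, Cohen's kappa, for two coders; the function names and word-sense labels are hypothetical.

from collections import Counter

def percentage_agreement(labels_a, labels_b):
    # Raw agreement: fraction of items to which both coders assign the same label.
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    # Chance-corrected agreement for two coders (Cohen's kappa).
    n = len(labels_a)
    p_o = percentage_agreement(labels_a, labels_b)
    # Expected chance agreement from each coder's marginal label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical word-sense labels from two annotators for five tokens of "bank".
a = ["bank/1", "bank/2", "bank/1", "bank/1", "bank/2"]
b = ["bank/1", "bank/2", "bank/2", "bank/1", "bank/2"]
print(percentage_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))          # ~0.615

The gap between the two numbers shows why the choice of index matters: the same annotations look substantially better under raw percentage agreement than after correcting for agreement expected by chance.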
Journal Articles
Computational Linguistics (2007) 33 (1): 3–8.
Published: 01 March 2007
Abstract
Many annotation projects have shown that the quality of manual annotations is often not as good as would be desirable for reliable data analysis. Identifying the main sources of poor annotation quality must therefore be a major concern. Generalizability Theory is a valuable tool for this purpose, because it allows for the differentiation and detailed analysis of the factors that influence annotation quality. In this article we present the basic concepts of Generalizability Theory and give an example of its application based on published data.
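As a rough illustration of the core idea (not the authors' analysis), the sketch below estimates Generalizability Theory variance components for a fully crossed items × coders design from two-way ANOVA mean squares, the standard G-study computation for that design. The data and function names are hypothetical.

import numpy as np

def variance_components(ratings):
    # G-study variance components for a crossed items x coders design.
    # `ratings` is an (n_items, n_coders) array of numeric annotations.
    n_i, n_c = ratings.shape
    grand = ratings.mean()
    item_means = ratings.mean(axis=1)
    coder_means = ratings.mean(axis=0)
    ss_items = n_c * ((item_means - grand) ** 2).sum()
    ss_coders = n_i * ((coder_means - grand) ** 2).sum()
    resid = ratings - item_means[:, None] - coder_means[None, :] + grand
    ms_items = ss_items / (n_i - 1)
    ms_coders = ss_coders / (n_c - 1)
    ms_resid = (resid ** 2).sum() / ((n_i - 1) * (n_c - 1))
    # Expected-mean-square equations for the random-effects crossed design.
    var_resid = ms_resid                       # coder-item interaction + error
    var_items = (ms_items - ms_resid) / n_c    # "true" item variance
    var_coders = (ms_coders - ms_resid) / n_i  # systematic coder differences
    # Negative estimates are conventionally truncated at zero in practice.
    return var_items, var_coders, var_resid

# Hypothetical data: 4 items rated on a 1-5 scale by 3 coders.
r = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [1, 2, 1]])
print(variance_components(r))

Decomposing the variance this way is what lets the method differentiate sources of poor annotation quality: a comparatively large coder or interaction component points to systematic coder differences rather than genuine item variability.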