
dc.date.accessioned: 2022-05-03T06:34:49Z
dc.date.available: 2022-05-03T06:34:49Z
dc.date.issued: 2021-12-14
dc.identifier: doi:10.17170/kobra-202205036117
dc.identifier.uri: http://hdl.handle.net/123456789/13801
dc.description.sponsorship: Funded by the publication fund of the Universität Kassel
dc.language.iso: eng
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Active learning (eng)
dc.subject: classification (eng)
dc.subject: error-prone annotators (eng)
dc.subject: human-in-the-loop learning (eng)
dc.subject: interactive learning (eng)
dc.subject: machine learning (eng)
dc.subject.ddc: 004
dc.subject.ddc: 300
dc.subject.ddc: 370
dc.title: A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification (eng)
dc.type: Article
dcterms.abstract: Pool-based active learning (AL) aims to optimize the annotation process (i.e., labeling) as the acquisition of annotations is often time-consuming and therefore expensive. For this purpose, an AL strategy queries annotations intelligently from annotators to train a high-performance classification model at a low annotation cost. Traditional AL strategies operate in an idealized framework. They assume a single, omniscient annotator who never gets tired and charges uniformly regardless of query difficulty. However, in real-world applications, we often face human annotators, e.g., crowd or in-house workers, who make annotation mistakes and can be reluctant to respond if tired or faced with complex queries. Recently, many novel AL strategies have been proposed to address these issues. They differ in at least one of the following three central aspects from traditional AL: 1) modeling of (multiple) human annotators whose performances can be affected by various factors, such as missing expertise; 2) generalization of the interaction with human annotators through different query and annotation types, such as asking an annotator for feedback on an inferred classification rule; 3) consideration of complex cost schemes regarding annotations and misclassifications. This survey provides an overview of these AL strategies and refers to them as real-world AL. Therefore, we introduce a general real-world AL strategy as part of a learning cycle and use its elements, e.g., the query and annotator selection algorithm, to categorize about 60 real-world AL strategies. Finally, we outline possible directions for future research in the field of AL. (eng)
dcterms.accessRights: open access
dcterms.creator: Herde, Marek
dcterms.creator: Huseljic, Denis
dcterms.creator: Sick, Bernhard
dcterms.creator: Calma, Adrian
dc.relation.doi: doi:10.1109/ACCESS.2021.3135514
dc.subject.swd: Aktives maschinelles Lernen (ger)
dc.subject.swd: Human-in-the-loop (ger)
dc.subject.swd: Kosten (ger)
dc.subject.swd: Klassifikation (ger)
dc.type.version: publishedVersion
dcterms.source.identifier: eissn:2169-3536
dcterms.source.journal: IEEE Access (eng)
dcterms.source.pageinfo: 166970-166989
dcterms.source.volume: Volume 9
kup.iskup: false
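The abstract above describes the idealized pool-based AL cycle that the survey generalizes to real-world settings. As a point of reference only, the following Python sketch shows such an idealized loop using uncertainty sampling with an omniscient, uniformly priced oracle; the scikit-learn classifier, the toy dataset, and the uniform per-query cost are illustrative assumptions and are not taken from the surveyed strategies.

# Minimal sketch of an idealized pool-based AL loop with uncertainty sampling.
# Illustrative only: the classifier, dataset, error-free "annotator", and
# uniform per-query cost are assumptions, not elements of the surveyed strategies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Unlabeled pool; the hidden ground truth stands in for a (here error-free) annotator.
X_pool, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed the labeled set with one instance per class.
labeled = [int(np.flatnonzero(y_true == c)[0]) for c in np.unique(y_true)]
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
budget = 50  # annotation budget, with uniform cost per query in this idealized setting

for _ in range(budget):
    clf.fit(X_pool[labeled], y_true[labeled])
    # Query the pool instance the current model is least certain about.
    proba = clf.predict_proba(X_pool[unlabeled])
    query = unlabeled[int(np.argmax(1.0 - proba.max(axis=1)))]
    # "Annotate" the query; real-world AL would instead select among multiple,
    # possibly error-prone and differently priced annotators.
    labeled.append(query)
    unlabeled.remove(query)

clf.fit(X_pool[labeled], y_true[labeled])
print("Accuracy on the full pool:", clf.score(X_pool, y_true))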

