Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/19280
DC Field | Value | Language
dc.contributor.author | Theodosiou, Zenonas | -
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.date.accessioned | 2020-10-27T10:40:23Z | -
dc.date.available | 2020-10-27T10:40:23Z | -
dc.date.issued | 2020-09 | -
dc.identifier.citation | International Journal of Multimedia Information Retrieval, 2020, vol. 9, no. 3, pp. 191–203 | en_US
dc.identifier.issn | 2192-662X | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/19280 | -
dc.description.abstract | Image annotation is the process of assigning metadata to images, allowing effective retrieval by text-based search techniques. Despite extensive efforts in automatic multimedia analysis, automatic semantic annotation of multimedia remains inefficient due to the difficulty of modeling high-level semantic terms. In this paper, we examine the factors affecting the quality of annotations collected through crowdsourcing platforms. An image dataset was manually annotated using: (1) a vocabulary consisting of a preselected set of keywords, (2) a hierarchical vocabulary and (3) free keywords. The results show that annotation quality is affected by both the image content itself and the lexicon used. As expected, while annotation using the hierarchical vocabulary is more representative, the use of free keywords leads to an increased number of invalid annotations. Finally, it is shown that images requiring annotations that are not directly related to their content (i.e., annotation using abstract concepts) lead to greater annotator inconsistency, revealing that the difficulty in annotating such images is not limited to automatic annotation but is a generic problem of annotation. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | International Journal of Multimedia Information Retrieval | en_US
dc.rights | © Springer | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | Annotation quality | en_US
dc.subject | Crowdsourcing | en_US
dc.subject | Image annotation | en_US
dc.subject | Manual annotation | en_US
dc.title | Image annotation: the effects of content, lexicon and annotation method | en_US
dc.type | Article | en_US
dc.collaboration | Research Center on Interactive Media, Smart Systems and Emerging Technologies | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.journals | Subscription | en_US
dc.country | Cyprus | en_US
dc.subject.field | Natural Sciences | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1007/s13735-020-00193-z | en_US
dc.relation.issue | 3 | en_US
dc.relation.volume | 9 | en_US
cut.common.academicyear | 2020-2021 | en_US
dc.identifier.spage | 191 | en_US
dc.identifier.epage | 203 | en_US
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairetype | article | -
crisitem.journal.journalissn | 2192-662X | -
crisitem.journal.publisher | Springer Nature | -
crisitem.author.dept | Department of Communication and Internet Studies | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.orcid | 0000-0003-3168-2350 | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
Appears in Collections: Άρθρα/Articles

Scopus™ citations: 5 (checked on Nov 6, 2023)
Web of Science™ citations: 3 (last week: 0; last month: 1; checked on Oct 29, 2023)
Page views: 335 (last week: 2; last month: 9; checked on Dec 22, 2024)
