Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/3442
DC Field | Value | Language
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Ntalianis, Klimis S. | -
dc.contributor.author | Doulamis, Anastasios D. | -
dc.contributor.author | Matsatsinis, Nikolaos F. | -
dc.date.accessioned | 2015-02-04T15:38:53Z | -
dc.date.accessioned | 2015-12-08T09:13:57Z | -
dc.date.available | 2015-02-04T15:38:53Z | -
dc.date.available | 2015-12-08T09:13:57Z | -
dc.date.issued | 2014-03 | -
dc.identifier.citation | Multimedia Tools and Applications, 2014, vol. 69, no. 2, pp. 397-421 | en_US
dc.identifier.issn | 15737721 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/3442 | -
dc.description.abstract | In this paper a novel approach for automatically annotating image databases is proposed. Unlike most current schemes, which rely solely on spatial content analysis, the proposed method combines several innovative modules for semantically annotating images. In particular, it includes: (a) a GWAP-oriented interface for optimized collection of implicit crowdsourcing data, (b) a new unsupervised visual concept modeling algorithm for content description and (c) a hierarchical visual content display method, based on graph partitioning, for easy data navigation. The proposed scheme can be easily adopted by any multimedia search engine, providing an intelligent way to annotate completely non-annotated content or to correct incorrectly annotated images. The proposed approach currently provides very interesting results on limited-size standard and generic datasets, and it is expected to add significant value especially to the billions of non-annotated images on the Web. Furthermore, expert annotators can gain important knowledge about new user trends, language idioms and styles of searching. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Multimedia Tools and Applications | en_US
dc.rights | © Springer Nature | en_US
dc.subject | Implicit crowdsourcing | en_US
dc.subject | User feedback | en_US
dc.subject | Visual concept modeling | en_US
dc.subject | Clickthrough data | en_US
dc.subject | Automatic image annotation | en_US
dc.title | Automatic annotation of image databases based on implicit crowdsourcing, visual concept modeling and evolution | en_US
dc.type | Article | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.collaboration | University of West Attica | en_US
dc.collaboration | Technical University of Crete | en_US
dc.subject.category | Media and Communications | en_US
dc.journals | Subscription | en_US
dc.review | Peer Reviewed | en
dc.country | Cyprus | en_US
dc.country | Greece | en_US
dc.subject.field | Social Sciences | en_US
dc.identifier.doi | 10.1007/s11042-012-0995-2 | en_US
dc.dept.handle | 123456789/100 | en
dc.relation.issue | 2 | en_US
dc.relation.volume | 69 | en_US
cut.common.academicyear | 2013-2014 | en_US
dc.identifier.spage | 397 | en_US
dc.identifier.epage | 421 | en_US
item.fulltext | No Fulltext | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.openairetype | article | -
item.grantfulltext | none | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
crisitem.journal.journalissn | 1573-7721 | -
crisitem.journal.publisher | Springer Nature | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
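The abstract above describes using implicit crowdsourcing signals (clickthrough data from image search) as a source of annotations. The snippet below is a minimal illustrative sketch, assuming a simple query-click log, of how such clickthrough pairs could be aggregated into candidate image annotations; it is not the algorithm described in the article, and all names (candidate_annotations, click_log, min_clicks) are hypothetical.

```python
# Illustrative sketch only: aggregate hypothetical (query, clicked_image)
# pairs into candidate keyword annotations per image. Not the authors' method.
from collections import defaultdict, Counter

def candidate_annotations(click_log, min_clicks=3):
    """click_log: iterable of (query_text, image_id) pairs from search sessions."""
    counts = defaultdict(Counter)
    for query, image_id in click_log:
        for term in query.lower().split():
            counts[image_id][term] += 1
    # Keep only terms that co-occur with an image often enough to be
    # treated as implicit (crowd-sourced) annotations.
    return {img: [t for t, c in ctr.items() if c >= min_clicks]
            for img, ctr in counts.items()}

if __name__ == "__main__":
    log = [("red rose garden", "img42"), ("rose flower", "img42"),
           ("red rose", "img42"), ("city skyline", "img7")]
    # Prints {'img42': ['red', 'rose'], 'img7': []}
    print(candidate_annotations(log, min_clicks=2))
```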
Appears in Collections: Άρθρα/Articles

SCOPUS Citations: 15 (checked on Nov 9, 2023)
Web of Science Citations: 13 (checked on Oct 29, 2023)
Page view(s): 538 (checked on Feb 17, 2025)

Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.