Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/3442
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tsapatsoulis, Nicolas | - |
dc.contributor.author | Ntalianis, Klimis S. | - |
dc.contributor.author | Doulamis, Anastasios D. | - |
dc.contributor.author | Matsatsinis, Nikolaos F. | - |
dc.date.accessioned | 2015-02-04T15:38:53Z | - |
dc.date.accessioned | 2015-12-08T09:13:57Z | - |
dc.date.available | 2015-02-04T15:38:53Z | - |
dc.date.available | 2015-12-08T09:13:57Z | - |
dc.date.issued | 2014-03 | - |
dc.identifier.citation | Multimedia Tools and Applications, 2014, vol. 69, no. 2, pp. 397-421 | en_US |
dc.identifier.issn | 1573-7721 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14279/3442 | - |
dc.description.abstract | In this paper a novel approach for automatically annotating image databases is proposed. Unlike most current schemes, which rely solely on spatial content analysis, the proposed method combines several innovative modules for semantically annotating images. In particular, it includes: (a) a GWAP-oriented interface for the optimized collection of implicit crowdsourcing data, (b) a new unsupervised visual concept modeling algorithm for content description, and (c) a hierarchical visual content display method, based on graph partitioning, for easy data navigation. The proposed scheme can easily be adopted by any multimedia search engine, providing an intelligent way to annotate completely non-annotated content or to correct wrongly annotated images. The approach already yields promising results on limited-size standard and generic datasets, and it is expected to add significant value especially to the billions of non-annotated images on the Web. Furthermore, expert annotators can gain important insight into emerging user trends, language idioms, and styles of searching. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Multimedia Tools and Applications | en_US |
dc.rights | © Springer Nature | en_US |
dc.subject | Implicit crowdsourcing | en_US |
dc.subject | User feedback | en_US |
dc.subject | Visual concept modeling | en_US |
dc.subject | Clickthrough data | en_US |
dc.subject | Automatic image annotation | en_US |
dc.title | Automatic annotation of image databases based on implicit crowdsourcing, visual concept modeling and evolution | en_US |
dc.type | Article | en_US |
dc.collaboration | Cyprus University of Technology | en_US |
dc.collaboration | University of West Attica | en_US |
dc.collaboration | Technical University of Crete | en_US |
dc.subject.category | Media and Communications | en_US |
dc.journals | Subscription | en_US |
dc.review | Peer Reviewed | en |
dc.country | Cyprus | en_US |
dc.country | Greece | en_US |
dc.subject.field | Social Sciences | en_US |
dc.identifier.doi | 10.1007/s11042-012-0995-2 | en_US |
dc.dept.handle | 123456789/100 | en |
dc.relation.issue | 2 | en_US |
dc.relation.volume | 69 | en_US |
cut.common.academicyear | 2013-2014 | en_US |
dc.identifier.spage | 397 | en_US |
dc.identifier.epage | 421 | en_US |
item.fulltext | No Fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.openairetype | article | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
item.cerifentitytype | Publications | - |
crisitem.journal.journalissn | 1573-7721 | - |
crisitem.journal.publisher | Springer Nature | - |
crisitem.author.dept | Department of Communication and Marketing | - |
crisitem.author.faculty | Faculty of Communication and Media Studies | - |
crisitem.author.orcid | 0000-0002-6739-8602 | - |
crisitem.author.parentorg | Faculty of Communication and Media Studies | - |
Appears in Collections: | Άρθρα/Articles |
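The abstract above describes harvesting implicit crowdsourcing signals, i.e. clickthrough data, to annotate images. Purely as an illustrative sketch (it does not reproduce the paper's actual algorithm), the hypothetical snippet below shows one simple way `(query, clicked_image)` pairs could be aggregated into candidate labels; the function name, the `min_clicks` threshold, and the whitespace tokenization are all assumptions made here for the example.

```python
from collections import Counter, defaultdict

def annotations_from_clickthrough(click_log, min_clicks=3):
    """Aggregate (query, clicked_image_id) pairs into candidate annotations.

    Hypothetical sketch: a query term is proposed as an annotation for an
    image once it has attracted at least `min_clicks` clicks, assuming that
    users who click a result implicitly confirm its relevance to their query.
    """
    votes = defaultdict(Counter)  # image_id -> term -> click count
    for query, image_id in click_log:
        for term in query.lower().split():  # naive whitespace tokenization
            votes[image_id][term] += 1
    # Keep only terms whose click support reaches the threshold.
    return {
        image_id: [t for t, n in counts.items() if n >= min_clicks]
        for image_id, counts in votes.items()
    }

if __name__ == "__main__":
    log = [("red rose", "img1"), ("rose garden", "img1"),
           ("red rose flower", "img1"), ("rose", "img1"),
           ("blue car", "img2")]
    print(annotations_from_clickthrough(log, min_clicks=3))
    # {'img1': ['rose'], 'img2': []}
```

In a real system the threshold would trade precision against coverage: a higher `min_clicks` suppresses noisy one-off clicks at the cost of leaving rarely viewed images unannotated.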
Scopus™ citations: 15 (checked on Nov 9, 2023)
Web of Science™ citations: 13 (last week: 0, last month: 0; checked on Oct 29, 2023)
Page view(s): 538 (last week: 8, last month: 7; checked on Feb 17, 2025)
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.