Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/3560
DC Field: Value
dc.contributor.author: Tsapatsoulis, Nicolas
dc.contributor.author: Kounoudes, Anastasis
dc.contributor.author: Theodosiou, Zenonas
dc.contributor.other: Κουνούδης, Αναστάσιος
dc.contributor.other: Τσαπατσούλης, Νικόλας
dc.contributor.other: Θεοδοσίου, Ζήνωνας
dc.date.accessioned: 2013-02-07T13:49:08Z
dc.date.accessioned: 2013-05-17T10:11:46Z
dc.date.accessioned: 2015-12-08T10:53:35Z
dc.date.available: 2013-02-07T13:49:08Z
dc.date.available: 2013-05-17T10:11:46Z
dc.date.available: 2015-12-08T10:53:35Z
dc.date.issued: 2009
dc.identifier.citation: Artificial neural networks – ICANN 2009: 19th international conference, Limassol, Cyprus, September 14-17, 2009, Proceedings, Part II, Pages 913-922
dc.identifier.isbn: 978-3-642-04276-8 (print)
dc.identifier.issn: 978-3-642-04277-5 (online)
dc.identifier.uri: https://hdl.handle.net/20.500.14279/3560
dc.description.abstract: Recent advances in digital video technology have resulted in an explosion of digital video data available through the Web or in private repositories. Efficient searching in these repositories has created the need for semantic labeling of video data at various levels of granularity, i.e., movie, scene, shot, keyframe, video object, etc. Through multilevel labeling, video content is appropriately indexed, allowing access from various modalities and for a variety of applications. However, despite the huge efforts toward automatic video annotation, human intervention remains the only way to obtain reliable semantic video annotation. Manual video annotation is an extremely laborious process, and efficient tools developed for this purpose can, in many cases, make a real difference. In this paper we present a video annotation tool which uses structured knowledge, in the form of XML dictionaries, combined with a hierarchical classification scheme to attach semantic labels to video segments at various levels of granularity. Video segmentation is supported through the use of an efficient shot detection algorithm, while shots are combined into scenes through clustering with the aid of a Genetic Algorithm scheme. Finally, XML dictionary creation and editing tools are available during annotation, allowing the user to always use the semantic label she/he wishes instead of the automatically created ones.
dc.format: pdf
dc.language.iso: en
dc.rights: © 2009 Springer Berlin Heidelberg
dc.subject: Computer science
dc.subject: Back propagation (Artificial intelligence)
dc.subject: Computer graphics
dc.subject: Multimedia systems
dc.subject: Neural networks
dc.subject: Semantics
dc.subject: Video recording
dc.subject: XML (Document markup language)
dc.title: MuLVAT: a video annotation tool based on XML-dictionaries and shot clustering
dc.type: Book Chapter
dc.collaboration: Cyprus University of Technology
dc.subject.category: Media and Communications
dc.country: Cyprus
dc.subject.field: Social Sciences
dc.identifier.doi: 10.1007/978-3-642-04277-5_92
dc.dept.handle: 123456789/100
item.languageiso639-1: en
item.openairecristype: http://purl.org/coar/resource_type/c_3248
item.fulltext: No Fulltext
item.grantfulltext: none
item.openairetype: bookPart
item.cerifentitytype: Publications
crisitem.author.dept: Department of Communication and Marketing
crisitem.author.dept: Department of Communication and Internet Studies
crisitem.author.faculty: Faculty of Communication and Media Studies
crisitem.author.faculty: Faculty of Communication and Media Studies
crisitem.author.orcid: 0000-0002-6739-8602
crisitem.author.orcid: 0000-0003-3168-2350
crisitem.author.parentorg: Faculty of Communication and Media Studies
crisitem.author.parentorg: Faculty of Communication and Media Studies
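
The abstract above mentions an "efficient shot detection algorithm" for video segmentation but does not specify the method in this record. A common baseline for shot-boundary detection is colour-histogram differencing between consecutive frames; the following is a minimal sketch under that assumption (the function name, bin count, and threshold are illustrative, not taken from the paper):

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.5):
    """Return indices where the normalised colour-histogram
    difference between consecutive frames exceeds `threshold`,
    suggesting a cut between two shots."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # 64-bin intensity histogram, normalised to sum to 1
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between normalised histograms, in [0, 2]
            diff = np.abs(hist - prev_hist).sum()
            if diff > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries
```

Detected boundaries delimit shots, which a subsequent clustering step (a Genetic Algorithm scheme in the paper) can then group into scenes.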
Appears in Collections: Κεφάλαια βιβλίων / Book chapters

Scopus citations: 7 (checked on Nov 8, 2023)
Page views: 571 total, 2 last week, 8 last month (checked on Jul 25, 2024)


Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.