Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/1939
DC Field | Value | Language
dc.contributor.author | Rapantzikos, Konstantinos | -
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Avrithis, Yannis | -
dc.contributor.author | Kollias, Stefanos D. | -
dc.date.accessioned | 2009-05-26T06:38:20Z | en
dc.date.accessioned | 2013-05-16T13:11:07Z | -
dc.date.accessioned | 2015-12-02T09:40:40Z | -
dc.date.available | 2009-05-26T06:38:20Z | en
dc.date.available | 2013-05-16T13:11:07Z | -
dc.date.available | 2015-12-02T09:40:40Z | -
dc.date.issued | 2007 | -
dc.identifier.citation | Image Processing, IET, 2007, vol. 1, no. 2, pp. 237-248. | en_US
dc.identifier.issn | 1751-9667 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/1939 | -
dc.description | Research Paper | en_US
dc.description.abstract | The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences that takes into account both the spatial extent and the dynamic evolution of regions. To achieve this goal, a common, image-oriented computational model of saliency-based visual attention is extended to handle spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step for obtaining a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | IET Image Processing | en_US
dc.rights | © IET | en_US
dc.subject | Video signal processing | en_US
dc.subject | Image-oriented computational model | en_US
dc.subject | Image sequences | en_US
dc.title | Bottom-up spatiotemporal visual attention model for video analysis | en_US
dc.type | Article | en_US
dc.collaboration | National Technical University of Athens | en_US
dc.collaboration | University of Cyprus | en_US
dc.journals | Subscription | en_US
dc.country | Greece | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1049/iet-ipr:20060040 | en_US
dc.dept.handle | 123456789/54 | en
dc.relation.issue | 2 | en_US
dc.relation.volume | 1 | en_US
cut.common.academicyear | 2007-2008 | en_US
dc.identifier.spage | 237 | en_US
dc.identifier.epage | 248 | en_US
item.fulltext | No Fulltext | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.openairetype | article | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
Appears in Collections: Άρθρα/Articles
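The abstract above outlines the paper's core idea: extend an image-oriented, saliency-based visual attention model to operate on video treated as a spatiotemporal volume. As a loose illustration only (not the authors' implementation), the Python sketch below stacks grayscale frames into a T×H×W volume and computes a center-surround saliency volume with 3-D Gaussian filters; the function name, scale choices, and normalization are all illustrative assumptions.

```python
# Minimal sketch of volumetric (spatiotemporal) center-surround saliency.
# Illustrative only: the paper's model also uses further feature channels
# and a volumetric competition scheme that are not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_saliency(volume, center_sigmas=(1, 2), surround_sigmas=(4, 8)):
    """volume: grayscale frames stacked into a (T, H, W) float array in [0, 1]."""
    volume = volume.astype(np.float64)
    saliency = np.zeros_like(volume)
    for sc in center_sigmas:
        # Fine ("center") scale: small Gaussian blur over space and time.
        center = gaussian_filter(volume, sigma=sc)
        for ss in surround_sigmas:
            # Coarse ("surround") scale: larger spatiotemporal blur.
            surround = gaussian_filter(volume, sigma=ss)
            # Center-surround differences highlight regions that stand out
            # from their spatiotemporal neighborhood.
            saliency += np.abs(center - surround)
    # Rescale to [0, 1] so maps from different videos are comparable.
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency
```

Under these assumptions, a salient event corresponds to a bright, connected region of the returned volume; thresholding it gives a rough spatiotemporal segmentation of the kind the abstract calls a compact representation of the visual content.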

Scopus citations: 38 (checked on Nov 9, 2023)
Web of Science citations: 28 (0 last week, 0 last month; checked on Oct 17, 2023)
Page views: 608 (2 last week, 8 last month; checked on May 12, 2024)

Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.