Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/1939
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rapantzikos, Konstantinos | - |
dc.contributor.author | Tsapatsoulis, Nicolas | - |
dc.contributor.author | Avrithis, Yannis | - |
dc.contributor.author | Kollias, Stefanos D. | - |
dc.date.accessioned | 2009-05-26T06:38:20Z | en |
dc.date.accessioned | 2013-05-16T13:11:07Z | - |
dc.date.accessioned | 2015-12-02T09:40:40Z | - |
dc.date.available | 2009-05-26T06:38:20Z | en |
dc.date.available | 2013-05-16T13:11:07Z | - |
dc.date.available | 2015-12-02T09:40:40Z | - |
dc.date.issued | 2007 | - |
dc.identifier.citation | Image Processing, IET, 2007, vol. 1, no. 2, pp. 237-248. | en_US |
dc.identifier.issn | 1751-9667 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14279/1939 | - |
dc.description | Research Paper | en_US |
dc.description.abstract | The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences by taking into account both the spatial extent and the dynamic evolution of regions. To achieve this goal, a common, image-oriented computational model of saliency-based visual attention is extended to handle the spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step that yields a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown. | en_US |
dc.format | en_US | |
dc.language.iso | en | en_US |
dc.relation.ispartof | IET Image Processing | en_US |
dc.rights | © IET | en_US |
dc.subject | Video signal processing | en_US |
dc.subject | Image-oriented computational model | en_US |
dc.subject | Image sequences | en_US |
dc.title | Bottom-up spatiotemporal visual attention model for video analysis | en_US |
dc.type | Article | en_US |
dc.collaboration | National Technical University Of Athens | en_US |
dc.collaboration | University of Cyprus | en_US |
dc.journals | Subscription | en_US |
dc.country | Greece | en_US |
dc.subject.field | Engineering and Technology | en_US |
dc.publication | Peer Reviewed | en_US |
dc.identifier.doi | 10.1049/iet-ipr:20060040 | en_US |
dc.dept.handle | 123456789/54 | en |
dc.relation.issue | 2 | en_US |
dc.relation.volume | 1 | en_US |
cut.common.academicyear | 2007-2008 | en_US |
dc.identifier.spage | 237 | en_US |
dc.identifier.epage | 248 | en_US |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
item.cerifentitytype | Publications | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.openairetype | article | - |
item.fulltext | No Fulltext | - |
crisitem.author.dept | Department of Communication and Marketing | - |
crisitem.author.faculty | Faculty of Communication and Media Studies | - |
crisitem.author.orcid | 0000-0002-6739-8602 | - |
crisitem.author.parentorg | Faculty of Communication and Media Studies | - |
Appears in Collections: | Άρθρα/Articles |
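The abstract above describes extending an image-oriented, bottom-up saliency model to a spatiotemporal, volumetric treatment of video. The sketch below is purely illustrative and is not the authors' model: it assumes a video volume of shape (frames, height, width) with values in [0, 1], and fuses a hypothetical spatial centre-surround (difference-of-Gaussians) cue with a hypothetical temporal frame-difference cue using NumPy and SciPy.

```python
# Illustrative sketch only (not the published model): bottom-up spatiotemporal
# saliency over a video volume, fusing a spatial centre-surround cue with a
# temporal frame-difference cue. All parameter choices here are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_saliency(volume, sigma_center=1.0, sigma_surround=4.0):
    """Return a per-pixel saliency volume with the same shape as `volume`."""
    volume = volume.astype(np.float64)

    # Spatial cue: centre-surround contrast per frame (difference of Gaussians,
    # smoothing only along the spatial axes).
    center = gaussian_filter(volume, sigma=(0, sigma_center, sigma_center))
    surround = gaussian_filter(volume, sigma=(0, sigma_surround, sigma_surround))
    spatial = np.abs(center - surround)

    # Temporal cue: absolute difference between consecutive frames,
    # smoothed spatially to suppress pixel-level noise.
    temporal = np.zeros_like(volume)
    temporal[1:] = np.abs(volume[1:] - volume[:-1])
    temporal = gaussian_filter(temporal, sigma=(0, sigma_center, sigma_center))

    # Normalise each cue to [0, 1] before fusing, then average them.
    def normalise(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return 0.5 * normalise(spatial) + 0.5 * normalise(temporal)

if __name__ == "__main__":
    # Synthetic example: a bright square moving across an otherwise static scene.
    video = np.zeros((10, 64, 64))
    for t in range(10):
        video[t, 20:30, 5 + 5 * t:15 + 5 * t] = 1.0
    saliency = spatiotemporal_saliency(video)
    print(saliency.shape, float(saliency.max()))
```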
Scopus Citations: 38 (checked on Nov 9, 2023)
Web of Science Citations: 50 (checked on Oct 17, 2023)
Page view(s): 633 (checked on Nov 6, 2024)
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.