Please use this identifier to cite or link to this item: http://ktisis.cut.ac.cy/handle/10488/91
Title: Bottom-up spatiotemporal visual attention model for video analysis
Authors: Rapantzikos, Konstantinos 
Tsapatsoulis, Nicolas 
Avrithis, Yannis 
Kollias, Stefanos D. 
Keywords: Video signal processing
Image-oriented computational model
Image sequences
Issue Date: 2007
Publisher: IEEE Signal Processing Society
Source: IET Image Processing, Vol. 1, No. 2, 2007, pp. 237-248
Abstract: The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences that takes into account both the spatial extent and the dynamic evolution of regions. To achieve this, a common, image-oriented computational model of saliency-based visual attention is extended to handle spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step for obtaining a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown.
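
To illustrate the volumetric idea the abstract describes, the following is a minimal sketch of spatiotemporal centre-surround saliency in Python (NumPy/SciPy). It is not the authors' implementation: the function name, scale parameters, and the intensity-only feature are assumptions for illustration, whereas the paper's model handles richer features and the dynamic evolution of regions.

# Minimal sketch (not the paper's implementation): centre-surround
# saliency computed over a video treated as a single 3-D volume.
# Assumes a grayscale video as a NumPy array of shape (T, H, W) with
# values in [0, 1]; all names and sigma values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_saliency(video, center_sigma=1.0, surround_sigma=8.0):
    # Smooth the whole (time, height, width) volume at two scales, so
    # temporal change (motion) contributes to contrast alongside
    # spatial structure.
    video = video.astype(np.float64)
    center = gaussian_filter(video, sigma=center_sigma)
    surround = gaussian_filter(video, sigma=surround_sigma)
    saliency = np.abs(center - surround)        # centre-surround difference
    return saliency / (saliency.max() + 1e-12)  # normalise to [0, 1]

# A compact representation of salient events/objects can then be
# obtained by thresholding the saliency volume, e.g.:
#   mask = spatiotemporal_saliency(video) > 0.5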
Description: Research Paper
URI: http://ktisis.cut.ac.cy/handle/10488/91
ISSN: 1751-9667
DOI: 10.1049/iet-ipr:20060040
Rights: IEEE Signal Processing Society
Appears in Collections: Άρθρα/Articles
