Title: Bottom-up spatiotemporal visual attention model for video analysis
Authors: Rapantzikos, Konstantinos 
Tsapatsoulis, Nicolas 
Avrithis, Yannis 
Kollias, Stefanos D. 
Keywords: Video signal processing;Image-oriented computational model;Image sequences
Issue Date: 2007
Publisher: IEEE Signal Processing Society
Source: IET Image Processing, Vol. 1, No. 2, 2007, pp. 237-248
Abstract: The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences by taking into account both the spatial extent and the dynamic evolution of regions. To achieve this goal, a common, image-oriented computational model of saliency-based visual attention is extended to handle spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step for obtaining a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown.
Description: Research Paper
ISSN: 1751-9667
DOI: 10.1049/iet-ipr:20060040
Rights: IEEE Signal Processing Society
Type: Article
Appears in Collections: Άρθρα/Articles
