Title: Wavelet based estimation of saliency maps in visual attention algorithms
Authors: Tsapatsoulis, Nicolas 
Rapantzikos, Konstantinos 
Keywords: Computer science
Algorithms
Computational complexity
Conformal mapping
Mathematical models
Visualization
Neural networks
Issue Date: 2006
Publisher: Springer
Source: Artificial neural networks – ICANN 2006, 16th international conference, Athens, Greece, September 10-14, 2006. Proceedings, Part II, Pages 538-547
Abstract: This paper deals with the problem of saliency map estimation in computational models of visual attention. In particular, we propose a wavelet-based approach for efficient computation of the topographic feature maps. Given that wavelets and multiresolution theory are naturally connected, the use of wavelet decomposition to mimic the center-surround process in humans is an obvious choice. However, our proposal goes further. We utilize the wavelet decomposition for inline computation of the features (such as orientation) that are used to create the topographic feature maps. The topographic feature maps are then combined through a sigmoid function to produce the final saliency map. The computational model we use is based on the Feature Integration Theory of Treisman et al. and follows the computational philosophy of this theory proposed by Itti et al. A series of experiments, conducted in a video encoding setup, shows that the proposed method compares well against other implementations found in the literature, both in terms of visual trials and computational complexity.
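The pipeline the abstract describes (multiresolution wavelet decomposition → detail-band feature maps → sigmoid combination into a saliency map) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a plain 2-D Haar decomposition, uses absolute detail-coefficient energy as the per-scale feature maps, and assumes a square input whose side is a power of two. The function names `haar_level` and `saliency` are hypothetical.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar decomposition: approximation plus
    horizontal, vertical, and diagonal detail bands (half resolution)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def saliency(img, levels=2):
    """Sketch of a wavelet saliency map: accumulate detail-band energy
    (the per-scale feature maps) across resolutions, upsample each map
    back to full size, then squash the combination with a sigmoid."""
    acc = np.zeros(img.shape, dtype=float)
    a = img.astype(float)
    for _ in range(levels):
        a, details = haar_level(a)
        for band in details:
            fmap = np.abs(band)                      # feature map at this scale
            rep = img.shape[0] // fmap.shape[0]      # nearest-neighbour upsample factor
            acc += np.kron(fmap, np.ones((rep, rep)))
    # sigmoid combination of the accumulated topographic maps into [0, 1]
    return 1.0 / (1.0 + np.exp(-(acc - acc.mean())))
```

For example, a bright square on a dark background yields high saliency along its contrast edges, where the detail coefficients are large, and low saliency in the flat background.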
URI: http://ktisis.cut.ac.cy/handle/10488/6896
ISBN: 978-3-540-38871-5 (print)
ISBN: 978-3-540-38873-9 (online)
DOI: 10.1007/11840930_56
Rights: © Springer-Verlag Berlin Heidelberg 2006
Appears in Collections: Κεφάλαια βιβλίων / Book chapters

