Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/3957
DC Field | Value | Language
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Rapantzikos, Konstantinos | -
dc.contributor.author | Avrithis, Yannis | -
dc.contributor.author | Kollias, Stefanos D. | -
dc.date.accessioned | 2009-05-25T07:18:49Z | en
dc.date.accessioned | 2013-05-17T09:55:30Z | -
dc.date.accessioned | 2015-12-09T10:25:43Z | -
dc.date.available | 2009-05-25T07:18:49Z | en
dc.date.available | 2013-05-17T09:55:30Z | -
dc.date.available | 2015-12-09T10:25:43Z | -
dc.date.issued | 2009-08 | -
dc.identifier.citation | Signal Processing: Image Communication, 2009, vol. 24, no. 7, pp. 557–571 | en_US
dc.identifier.issn | 09235965 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/3957 | -
dc.description.abstract | Computer vision applications often need to process only a representative part of the visual input rather than the whole image/sequence. Considerable research has been carried out into salient region detection methods based either on models emulating human visual attention (VA) mechanisms or on computational approximations. Most of the proposed methods are bottom-up, and their major goal is to filter out redundant visual information. In this paper, we propose and elaborate on a saliency detection model that treats a video sequence as a spatiotemporal volume and generates a local saliency measure for each visual unit (voxel). This computation involves an optimization process incorporating inter- and intra-feature competition at the voxel level. Perceptual decomposition of the input, spatiotemporal center-surround interactions, and the integration of heterogeneous feature conspicuity values are described, and an experimental framework for video classification is set up. This framework consists of a series of experiments that show the effect of saliency on classification performance and let us draw conclusions on how well the detected salient regions represent the visual input. A comparison is attempted that shows the potential of the proposed method. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Signal Processing: Image Communication | en_US
dc.rights | © Elsevier | en_US
dc.subject | Spatiotemporal visual saliency | en_US
dc.subject | Video classification | en_US
dc.title | Spatiotemporal saliency for video classification | en_US
dc.type | Article | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.collaboration | National Technical University of Athens | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.journals | Subscription | en_US
dc.review | Peer Reviewed | -
dc.country | Greece | en_US
dc.country | Cyprus | en_US
dc.subject.field | Natural Sciences | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1016/j.image.2009.03.002 | en_US
dc.dept.handle | 123456789/100 | en
dc.relation.issue | 7 | en_US
dc.relation.volume | 24 | en_US
cut.common.academicyear | 2008-2009 | en_US
dc.identifier.spage | 557 | en_US
dc.identifier.epage | 571 | en_US
item.fulltext | No Fulltext | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.openairetype | article | -
item.languageiso639-1 | en | -
crisitem.journal.journalissn | 0923-5965 | -
crisitem.journal.publisher | Elsevier | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
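The abstract above describes voxel-level center-surround interactions over a spatiotemporal volume. As a rough illustration only (not the authors' optimization-based model with inter- and intra-feature competition), the sketch below computes a crude center-surround contrast on a (t, y, x) volume by differencing two box averages; the `box_mean` helper, the radii, and the wrap-around edge handling are all illustrative assumptions.

```python
import numpy as np

def box_mean(volume, radius):
    """Mean over a (2r+1)^3 cubic neighbourhood of each voxel.

    Illustrative helper: edges wrap around via np.roll, which is
    acceptable for a sketch but not for a real implementation.
    """
    acc = np.zeros_like(volume, dtype=float)
    count = 0
    for dt in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                acc += np.roll(volume, (dt, dy, dx), axis=(0, 1, 2))
                count += 1
    return acc / count

def center_surround_saliency(volume, center=1, surround=3):
    """Voxel-wise center-surround contrast on a spatiotemporal volume.

    A stand-in for the multiscale center-surround interactions the
    abstract mentions: |small-scale mean - large-scale mean| per voxel.
    """
    return np.abs(box_mean(volume, center) - box_mean(volume, surround))

# Toy example: a single bright voxel in an otherwise uniform volume
vol = np.zeros((9, 9, 9))
vol[4, 4, 4] = 1.0
sal = center_surround_saliency(vol)
```

In this toy case the saliency map peaks in the neighbourhood of the bright voxel and is zero far from it, which is the qualitative behaviour a center-surround operator should exhibit.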
Appears in Collections: Άρθρα/Articles

SCOPUS™ Citations: 26 (checked on Nov 9, 2023)
Web of Science™ Citations: 24 (Last Week: 0, Last Month: 0; checked on Oct 29, 2023)
Page view(s): 467 (Last Week: 6, Last Month: 10; checked on May 9, 2024)


Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.