Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/2656
DC Field | Value | Language
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Pattichis, Constantinos S. | -
dc.contributor.author | Kounoudes, Anastasis | -
dc.contributor.author | Loizou, Christos P. | -
dc.contributor.author | Constantinides, Anthony G. | -
dc.contributor.author | Taylor, John G. | -
dc.date.accessioned | 2015-02-05T07:17:05Z | -
dc.date.accessioned | 2015-12-02T12:00:20Z | -
dc.date.available | 2015-02-05T07:17:05Z | -
dc.date.available | 2015-12-02T12:00:20Z | -
dc.date.issued | 2006 | -
dc.identifier.citation | 5th International Symposium on Communication Systems, Networks and Digital Signal Processing, 2006, Patras, Greece | en_US
dc.identifier.uri | https://hdl.handle.net/20.500.14279/2656 | -
dc.description.abstract | Bottom-up approaches to Visual Attention (VA) have been applied successfully in a variety of applications where no domain information exists, e.g., general-purpose image and video segmentation. On the other hand, when humans look for faces in a scene they perform an implicit conscious search, so simple bottom-up approaches are not very efficient at identifying visually salient areas in scenes containing humans. In this paper we introduce a top-down channel into the VA architecture proposed in the past (i.e., by Itti et al.) to account for conscious search in video-telephony applications, where the existence of human faces is almost always guaranteed. The regions of the video-telephony stream identified by the proposed algorithm as visually salient are encoded with higher precision than the remaining ones. This procedure leads to a significant bit-rate reduction, while visual trial tests show that the visual quality of the VA-based encoded video stream deteriorates only slightly. Furthermore, extended experiments on both static images and low-quality video demonstrate the efficiency of the proposed method in terms of the compression ratios achieved; the comparisons are made against standard JPEG and MPEG-1 encoding, respectively. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.subject | Visual attention | en_US
dc.subject | Video telephony applications | en_US
dc.subject | Algorithm | en_US
dc.title | Visual attention based region of interest coding for video-telephony applications | en_US
dc.type | Conference Papers | en_US
dc.collaboration | University of Cyprus | en_US
dc.collaboration | Philips College | en_US
dc.collaboration | Intercollege | en_US
dc.collaboration | Imperial College London | en_US
dc.collaboration | King's College London | en_US
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US
dc.review | Peer Reviewed | en
dc.country | United Kingdom | en_US
dc.country | Cyprus | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.dept.handle | 123456789/54 | en
cut.common.academicyear | empty | en_US
item.grantfulltext | open | -
item.openairetype | conferenceObject | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.cerifentitytype | Publications | -
item.fulltext | With Fulltext | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.orcid | 0000-0003-1247-8573 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
Appears in Collections: Conference papers or poster or presentation
Files in This Item:
File | Description | Size | Format
Tsapatsoulis_2006_1.pdf | | 316.99 kB | Adobe PDF
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.