Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/2861
Title: Broadcast news parsing using visual cues: a robust face detection approach
Authors: Tsapatsoulis, Nicolas; Avrithis, Yannis; Kollias, Stefanos D.
Contributor (other): Τσαπατσούλης, Νικόλας
Keywords: Face--Identification; Multimedia systems; Color; Skin; Broadcasting
Issue Date: 2000
Source: IEEE International Conference on Multimedia and Expo (ICME), 30 July - 2 August 2000, New York, NY
Abstract: Automatic content-based analysis and indexing of broadcast news recordings or digitized news archives is becoming an important tool in the framework of many multimedia interactive services such as news summarization, browsing, retrieval and news-on-demand (NoD) applications. Existing approaches have achieved high performance in such applications but rely heavily on textual cues such as closed-caption tokens and teletext transcripts. We present an efficient technique for temporal segmentation and parsing of news recordings based on visual cues, which can either be employed as a stand-alone application for non-closed-captioned broadcasts or integrated with the audio and textual cues of existing systems. The technique involves robust face detection by means of color segmentation, skin color matching and shape processing, and is able to identify typical news instances such as anchor persons, reports and outdoor shots.
URI: https://hdl.handle.net/20.500.14279/2861
DOI: 10.1109/ICME.2000.871044
Type: Conference Papers
Affiliation: National Technical University of Athens
Appears in Collections: Δημοσιεύσεις σε συνέδρια / Conference papers or posters or presentations
Files in This Item:
File | Description | Size | Format
---|---|---|---
ICME-00.pdf | | 375.35 kB | Adobe PDF
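The abstract describes the face detector in terms of color segmentation, skin color matching and shape processing. As an illustration only, the sketch below shows one common way to realize the skin-color-matching idea: fixed chrominance thresholds in YCbCr space, followed by morphological clean-up as a rough stand-in for the shape-processing stage. The threshold values, the use of OpenCV, and the file names are assumptions made for this example; they are not the model or parameters reported in the paper.

```python
# Illustrative sketch only: chrominance-based skin masking for one video frame.
# The CrCb bounds below are common rule-of-thumb values, NOT the skin model
# used in Tsapatsoulis et al. (2000); they merely demonstrate the idea of
# "skin color matching" on chrominance components.
import cv2
import numpy as np


def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels for one broadcast frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Hypothetical fixed bounds on (Y, Cr, Cb) for skin-like chrominance.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Rough stand-in for shape processing: morphological open/close to
    # remove speckle and fill small holes before any blob analysis.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask


if __name__ == "__main__":
    frame = cv2.imread("frame.png")  # hypothetical extracted news frame
    if frame is not None:
        cv2.imwrite("skin_mask.png", skin_mask(frame))
```

In a full parsing pipeline along the lines sketched in the abstract, the resulting skin blobs would be tested against face-like shape constraints per frame, and the presence and placement of detected faces used to label shots as anchor-person, report or outdoor segments.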
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.