Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/2542
Title: Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
Authors: Tsapatsoulis, Nicolas 
Karpouzis, Kostas 
Kollias, Stefanos D. 
Contributor (other): Τσαπατσούλης, Νικόλας
Keywords: Facial expression;Fuzzy sets;Mathematical models
Issue Date: 2000
Source: Human Vision and Electronic Imaging, 24-27 January 2000, San Jose, CA, USA
Abstract: Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter (FDP) set. We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissell suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead, they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can serve as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
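The interpolation idea described in the abstract — blending the FDP displacements of two neighboring archetypal expressions according to an angular position on the emotion wheel — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the FDP point labels, the angles, and the displacement values below are all hypothetical, chosen only to show the linear blend between two basic expressions.

```python
def interpolate_fdp(angle, a_angle, a_disp, b_angle, b_disp):
    """Blend the FDP displacement vectors of two neighboring archetypal
    expressions A and B, parameterized by an angle between their
    positions on a Plutchik-style emotion wheel.

    a_disp/b_disp map FDP point names to scalar displacements
    (illustrative placeholders, not actual MPEG-4 FAP values).
    """
    # Normalized position between A (t = 0) and B (t = 1).
    t = (angle - a_angle) / (b_angle - a_angle)
    # Linear interpolation per FDP point.
    return {p: (1 - t) * a_disp[p] + t * b_disp[p] for p in a_disp}

# Hypothetical example: an intermediate expression halfway between
# "joy" (placed at 0 degrees) and "surprise" (placed at 90 degrees).
joy = {"lip_corner_pull": 1.0, "eyebrow_raise": 0.2}
surprise = {"lip_corner_pull": 0.1, "eyebrow_raise": 1.0}
mid = interpolate_fdp(45.0, 0.0, joy, 90.0, surprise)
```

At 45 degrees the blend weights are equal, so each FDP displacement is the mean of the two archetypal values; the abstract's activation parameter would then scale the intensity of the resulting expression.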
URI: https://hdl.handle.net/20.500.14279/2542
DOI: 10.1117/12.387182
Rights: © SPIE
Type: Conference Papers
Affiliation: National Technical University of Athens
Appears in Collections: Conference papers, posters, or presentations

