Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/2542
DC Field / Value / Language
dc.contributor.author: Tsapatsoulis, Nicolas (en)
dc.contributor.author: Karpouzis, Kostas (en)
dc.contributor.author: Kollias, Stefanos D. (en)
dc.contributor.other: Τσαπατσούλης, Νικόλας (en)
dc.date.accessioned: 2013-02-07T14:02:18Z (en)
dc.date.accessioned: 2013-05-16T13:33:05Z
dc.date.accessioned: 2015-12-02T11:35:23Z
dc.date.available: 2013-02-07T14:02:18Z (en)
dc.date.available: 2013-05-16T13:33:05Z
dc.date.available: 2015-12-02T11:35:23Z
dc.date.issued: 2000 (en)
dc.identifier.citation: Human Vision and Electronic Imaging, 24-27 January 2000, San Jose, CA, USA (en)
dc.identifier.uri: https://hdl.handle.net/20.500.14279/2542
dc.description.abstract: Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter (FDP) set. We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissel suggested that emotions are points in a space which seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead, they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can serve as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences. (en)
dc.language.iso: en (en)
dc.rights: © SPIE (en)
dc.subject: Facial expression (en)
dc.subject: Fuzzy sets (en)
dc.subject: Mathematical models (en)
dc.title: Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set (en)
dc.type: Conference Papers (en)
dc.collaboration: National Technical University of Athens
dc.country: Greece
dc.identifier.doi: 10.1117/12.387182 (en)
dc.dept.handle: 123456789/54 (en)
item.openairetype: conferenceObject
item.grantfulltext: none
item.cerifentitytype: Publications
item.openairecristype: http://purl.org/coar/resource_type/c_c94f
item.languageiso639-1: en
item.fulltext: No Fulltext
crisitem.author.dept: Department of Communication and Marketing
crisitem.author.faculty: Faculty of Communication and Media Studies
crisitem.author.orcid: 0000-0002-6739-8602
crisitem.author.parentorg: Faculty of Communication and Media Studies
Appears in Collections: Δημοσιεύσεις σε συνέδρια / Conference papers or poster or presentation
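The abstract describes creating intermediate expressions by interpolating FDP movements between neighboring archetypal expressions placed on Plutchik's 'emotion wheel'. The following is a minimal illustrative sketch of that interpolation idea only; the wheel angles and FDP displacement profiles below are invented for illustration and are not values from the paper.

```python
# Hypothetical sketch of emotion-wheel interpolation: archetypal emotions sit
# at angles on a circle, and an intermediate expression is obtained by
# linearly blending the FDP displacement profiles of the two neighboring
# archetypal expressions. All numbers below are illustrative placeholders.

# Illustrative wheel angles (degrees) for the six archetypal emotions.
WHEEL = {"joy": 20.0, "surprise": 80.0, "fear": 140.0,
         "anger": 200.0, "disgust": 260.0, "sadness": 320.0}

# Illustrative FDP displacement profiles (e.g. mouth-corner dy, brow dy).
FDP_PROFILE = {"joy": [0.8, 0.1], "surprise": [0.3, 0.9], "fear": [0.2, 0.7],
               "anger": [-0.4, -0.6], "disgust": [-0.5, 0.2],
               "sadness": [-0.7, -0.2]}

def interpolate_expression(angle_deg):
    """Blend the FDP profiles of the two archetypal emotions that
    bracket angle_deg on the wheel (wrapping around 360 degrees)."""
    angle = angle_deg % 360.0
    ordered = sorted(WHEEL.items(), key=lambda kv: kv[1])
    for i, (name, a) in enumerate(ordered):
        nxt_name, nxt_a = ordered[(i + 1) % len(ordered)]
        span = (nxt_a - a) % 360.0          # arc between the two neighbors
        offset = (angle - a) % 360.0        # how far past the first neighbor
        if offset < span:
            t = offset / span               # weight toward the next emotion
            p, q = FDP_PROFILE[name], FDP_PROFILE[nxt_name]
            return [(1 - t) * u + t * v for u, v in zip(p, q)]
    return FDP_PROFILE[ordered[0][0]]

# An angle halfway between joy (20 deg) and surprise (80 deg) blends
# their displacement profiles 50/50.
blend = interpolate_expression(50.0)  # -> [0.55, 0.5]
```

In this sketch the blend weight comes purely from angular position, which mirrors the abstract's use of an angular measure on the wheel; the paper additionally ties the interpolation to the activation parameter estimated by the fuzzy rule system, which is not modeled here.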
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.