Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/1914
DC Field | Value | Language
dc.contributor.author | Karpouzis, Kostas | -
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Raouzaiou, Amaryllis | -
dc.contributor.author | Moshovitis, George | -
dc.contributor.author | Kollias, Stefanos D. | -
dc.contributor.other | Καρπούζης, Κώστας | -
dc.contributor.other | Ραουζαίου, Αμαρυλλίς | -
dc.contributor.other | Μοσχοβίτης, Γιώργος | -
dc.contributor.other | Κόλλιας, Στέφανος Δ. | -
dc.contributor.other | Τσαπατσούλης, Νικόλας | -
dc.date.accessioned | 2009-05-26T12:38:44Z | en
dc.date.accessioned | 2013-05-16T13:11:02Z | -
dc.date.accessioned | 2015-12-02T09:39:36Z | -
dc.date.available | 2009-05-26T12:38:44Z | en
dc.date.available | 2013-05-16T13:11:02Z | -
dc.date.available | 2015-12-02T09:39:36Z | -
dc.date.issued | 2000-06 | -
dc.identifier.citation | ACM SIGCAPH Computers and the Physically Handicapped, 2000, no. 67, pp. 1-9 | en_US
dc.identifier.issn | 0163-5727 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/1914 | -
dc.description.abstract | This paper describes an integrated system for human emotion recognition, which is used to provide feedback about the relevance or impact of the information presented to the user. Other techniques in this field extract explicit motion fields from the areas of interest and classify them with the help of templates or training sets; the proposed system, however, compares indications of muscle activation in the human face with data taken from similar actions of a 3-D head model. This comparison takes place at the curve level, with each curve drawn from detected feature points in an image sequence or from selected vertices of the polygonal model. The result of this process is the identification of the muscles that contribute to the detected motion; this conclusion can then be used in conjunction with the Mimic Language, a table structure that maps groups of muscles to emotions. The method can be applied to either frontal or rotated views, since the calculated curves are easier to rotate in 3-D space than motion vector fields. The notion of describing motion with specific points is also supported in MPEG-4, and the relevant encoded data can be used in the same context, eliminating the need for machine vision techniques. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.rights | © Association for Computing Machinery | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | *
dc.subject | Human-Computer interaction | en_US
dc.title | Enhancing Nonverbal Human Computer Interaction with Expression Recognition | en_US
dc.type | Conference Papers | en_US
dc.collaboration | National Technical University of Athens | en_US
dc.subject.category | ENGINEERING AND TECHNOLOGY | en_US
dc.journals | Subscription | en_US
dc.country | Greece | en_US
dc.subject.field | Engineering and Technology | en_US
dc.relation.conference | ACM SIGCAPH Computers and the Physically Handicapped | en_US
dc.identifier.doi | 10.1145/569244.569245 | en_US
dc.dept.handle | 123456789/54 | en
dc.relation.issue | 67 | en_US
cut.common.academicyear | 2000-2001 | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 9 | en_US
item.fulltext | No Fulltext | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.openairetype | conferenceObject | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Communication and Marketing | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.orcid | 0000-0002-6739-8602 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
Appears in Collections: Άρθρα/Articles
This item is licensed under a Creative Commons License.
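
As a reading aid for the approach the abstract outlines (matching motion curves of detected facial feature points against curves generated by muscle activations on a 3-D head model, then mapping the implicated muscle group to an emotion through a mimic table), here is a minimal Python sketch. It assumes NumPy, and every concrete name in it (the muscle labels, feature-point keys, distance measure, and threshold) is a hypothetical illustration, not the authors' actual implementation or data.

import numpy as np

# Hypothetical mimic table in the spirit of the Mimic Language the
# abstract mentions: groups of facial muscles mapped to emotions.
MIMIC_TABLE = {
    frozenset({"zygomatic_major"}): "joy",
    frozenset({"corrugator_supercilii"}): "anger",
    frozenset({"frontalis"}): "surprise",
}

def resample(curve, n=32):
    """Linearly resample a 1-D motion curve to a fixed length."""
    old = np.linspace(0.0, 1.0, len(curve))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, np.asarray(curve, dtype=float))

def curve_distance(a, b):
    """Compare two curves after resampling and z-normalisation, so that
    only the shape of the motion matters, not its scale or duration."""
    a, b = resample(a), resample(b)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.linalg.norm(a - b))

def active_muscles(observed, model, threshold=4.0):
    """Identify muscles whose model-generated curves match the observed ones.

    observed: {feature_point: curve} tracked in the image sequence
    model:    {muscle: {feature_point: curve}} from the 3-D head model
    The threshold is an illustrative assumption, not a value from the paper.
    """
    active = set()
    for muscle, curves in model.items():
        shared = set(curves) & set(observed)
        if not shared:
            continue
        score = sum(curve_distance(observed[p], curves[p]) for p in shared) / len(shared)
        if score < threshold:
            active.add(muscle)
    return active

def classify_emotion(active):
    """Look the detected muscle group up in the mimic table."""
    for group, emotion in MIMIC_TABLE.items():
        if group <= active:  # every muscle of the group was detected
            return emotion
    return "neutral"

# Toy usage: an accelerating upward motion of a mouth-corner point
# matches the model's zygomatic-major curve, hence "joy".
t = np.linspace(0.0, 1.0, 20)
observed = {"mouth_corner_y": 5.0 * t**2}
model = {
    "zygomatic_major": {"mouth_corner_y": t**2},
    "frontalis": {"brow_y": t},
}
print(classify_emotion(active_muscles(observed, model)))  # prints: joy

Because the curves are normalised before comparison, the same matching would apply to rotated views once the model curves are rotated accordingly, which is the advantage over dense motion-vector fields that the abstract points out.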