Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/2625
DC metadata record:
dc.contributor.author: Karpouzis, Kostas
dc.contributor.author: Raouzaiou, Amaryllis
dc.contributor.author: Tsapatsoulis, Nicolas
dc.contributor.author: Kollias, Stefanos D.
dc.contributor.other: Τσαπατσούλης, Νικόλας
dc.date.accessioned: 2015-02-04T15:40:36Z
dc.date.accessioned: 2015-12-02T11:51:43Z
dc.date.available: 2015-02-04T15:40:36Z
dc.date.available: 2015-12-02T11:51:43Z
dc.date.issued: 2003
dc.identifier.citation: 7th International Conference on Telecommunications, 2003, Zagreb, Croatia, 11-13 June
dc.identifier.isbn: 953-184-052-0
dc.identifier.uri: https://hdl.handle.net/20.500.14279/2625
dc.description: Book title: Proceedings of the 7th International Conference on Telecommunications
dc.description.abstract: Research on networked applications that utilize multimodal information about their users' current emotional state is presently at the forefront of interest of the computer vision and artificial intelligence communities. Human faces may act as visual interfaces that help users feel at home when interacting with a computer, because they are accepted as the most expressive means for communicating and recognizing emotions. Thus, a lifelike human face can enhance interactive applications by providing straightforward feedback to and from the users and stimulating emotional responses from them. Moreover, virtual environments can employ believable, expressive characters, since such features significantly enhance the atmosphere of a virtual world and communicate messages far more vividly than any textual or speech information. In this paper, we present an abstract means of description of facial expressions by utilizing concepts included in the MPEG-4 standard. Furthermore, we exploit these concepts to synthesize a wide variety of expressions using a reduced representation, suitable for networked and lightweight applications.
dc.format: pdf
dc.language.iso: en
dc.subject: Emotional representation
dc.subject: MPEG-4
dc.subject: Networked virtual environments
dc.subject: Avatars
dc.subject: Expression synthesis
dc.title: Emotion representation for virtual environments
dc.type: Conference Papers
dc.collaboration: National Technical University of Athens
dc.subject.category: Electrical Engineering - Electronic Engineering - Information Engineering
dc.review: Peer Reviewed
dc.country: Greece
dc.subject.field: Engineering and Technology
dc.identifier.doi: 10.1109/CONTEL.2003.176934
dc.dept.handle: 123456789/54
item.languageiso639-1: en
item.cerifentitytype: Publications
item.fulltext: With Fulltext
item.grantfulltext: open
item.openairetype: conferenceObject
item.openairecristype: http://purl.org/coar/resource_type/c_c94f
crisitem.author.dept: Department of Communication and Marketing
crisitem.author.faculty: Faculty of Communication and Media Studies
crisitem.author.orcid: 0000-0002-6739-8602
crisitem.author.parentorg: Faculty of Communication and Media Studies
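The abstract above describes encoding facial expressions through MPEG-4 concepts and synthesizing them from a reduced representation. As a purely illustrative sketch of what such a reduced representation could look like, the Python snippet below encodes an expression as a small "profile" of Facial Animation Parameter (FAP) ranges and instantiates a concrete expression from it; the FAP indices, value ranges, and function names are assumptions made for illustration, not material taken from the paper itself.

import random

# Assumed sketch: an archetypal expression is described by a small profile,
# i.e. a handful of MPEG-4 FAP indices with admissible value ranges.
# The indices and ranges below are placeholders for illustration only.
JOY_PROFILE = {
    3: (200, 400),   # hypothetical FAP index and range
    6: (100, 300),   # hypothetical FAP index and range
    7: (100, 300),   # hypothetical FAP index and range
}

def synthesize_expression(profile, intensity=1.0):
    """Draw one concrete FAP-value assignment from the profile,
    scaled by an intensity factor in [0, 1]."""
    return {fap: intensity * random.uniform(low, high)
            for fap, (low, high) in profile.items()}

if __name__ == "__main__":
    # Only the few index/range pairs need to travel over the network;
    # the receiving terminal instantiates the actual expression locally.
    print(synthesize_expression(JOY_PROFILE, intensity=0.7))

Because such a profile consists of only a few numeric pairs, it stays compact enough for the networked, lightweight applications the abstract targets.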
Appears in Collections: Δημοσιεύσεις σε συνέδρια / Conference papers or poster or presentation
Files in This Item:
File: Tsapatsoulis_2003_3.pdf (693.17 kB, Adobe PDF)
SCOPUS Citations: 1 (checked on Nov 6, 2023)
Page view(s): 559 total; 1 last week, 4 last month (checked on Oct 4, 2024)
Download(s): 452 (checked on Oct 4, 2024)

Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.