Please use this identifier to cite or link to this item: http://ktisis.cut.ac.cy/handle/10488/4005
DC Field: Value (Language)
dc.contributor.author: Karpouzis, Kostas
dc.contributor.author: Raouzaiou, Amaryllis
dc.contributor.author: Tsapatsoulis, Nicolas
dc.contributor.author: Kollias, Stefanos D.
dc.contributor.other: Τσαπατσούλης, Νικόλας
dc.date.accessioned: 2015-02-04T15:40:36Z
dc.date.accessioned: 2015-12-02T11:51:43Z
dc.date.available: 2015-02-04T15:40:36Z
dc.date.available: 2015-12-02T11:51:43Z
dc.date.issued: 2003
dc.identifier.citation: 7th International Conference on Telecommunications, 2003, Zagreb, Croatia, 11-13 June (en,el)
dc.identifier.isbn: 953-184-052-0
dc.identifier.uri: http://ktisis.cut.ac.cy/handle/10488/4005
dc.description: Book title: Proceedings of the 7th International Conference on Telecommunications (en,el)
dc.description.abstract: Research on networked applications that utilize multimodal information about their users' current emotional state is presently at the forefront of interest of the computer vision and artificial intelligence communities. Human faces may act as visual interfaces that help users feel at home when interacting with a computer, because they are accepted as the most expressive means of communicating and recognizing emotions. A lifelike human face can thus enhance interactive applications by providing straightforward feedback to and from the users and stimulating emotional responses from them. Likewise, virtual environments can employ believable, expressive characters, since such features significantly enhance the atmosphere of a virtual world and communicate messages far more vividly than any textual or speech information. In this paper, we present an abstract means of describing facial expressions by utilizing concepts included in the MPEG-4 standard. Furthermore, we exploit these concepts to synthesize a wide variety of expressions using a reduced representation, suitable for networked and lightweight applications. (en,el)
dc.format: pdf (en,el)
dc.language.iso: en (en,el)
dc.publisher: IEEE (en,el)
dc.subject: Emotional representation (en,el)
dc.subject: MPEG-4 (en,el)
dc.subject: Networked virtual environments (en,el)
dc.subject: Avatars (en,el)
dc.subject: Expression synthesis (en,el)
dc.title: Emotion representation for virtual environments (en,el)
dc.type: Conference Papers (en,el)
dc.collaboration: National Technical University of Athens
dc.subject.category: Electrical Engineering - Electronic Engineering - Information Engineering (en,el)
dc.review: Peer Reviewed (en,el)
dc.country: Greece
dc.subject.field: Engineering and Technology (en,el)
dc.identifier.doi: 10.1109/CONTEL.2003.176934 (en,el)
dc.dept.handle: 123456789/54 (en)
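The abstract's idea of a reduced representation can be sketched in code: MPEG-4 animates a face through Facial Animation Parameters (FAPs), so an expression can be transmitted as a small FAP-name-to-amplitude mapping instead of full facial geometry. The snippet below is a minimal illustrative sketch, not the paper's actual method: the FAP names follow MPEG-4 terminology, but the amplitude values, the expression profiles, and the `blend` helper are hypothetical.

```python
# Hypothetical FAP amplitude profiles for two archetypal expressions.
# The FAP names mirror MPEG-4 naming conventions; the numbers are
# illustrative placeholders, not values from the paper.
JOY = {"raise_l_cornerlip": 400, "raise_r_cornerlip": 400, "open_jaw": 150}
SADNESS = {"raise_l_i_eyebrow": -200, "raise_r_i_eyebrow": -200,
           "close_t_l_eyelid": 120, "close_t_r_eyelid": 120}

def blend(profile_a, profile_b, weight):
    """Linearly interpolate two FAP profiles to synthesize an
    intermediate expression (weight=0 -> profile_a, weight=1 -> profile_b)."""
    keys = set(profile_a) | set(profile_b)
    return {k: round((1 - weight) * profile_a.get(k, 0)
                     + weight * profile_b.get(k, 0))
            for k in keys}

# A half-joyful, half-sad expression derived from the two compact profiles.
mixed = blend(JOY, SADNESS, 0.5)
print(mixed["open_jaw"])  # 75
```

Because each profile is only a handful of named parameters, a networked application can send or store it cheaply and let the terminal's face model render the geometry, which is the kind of lightweight usage the abstract describes.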
Appears in Collections: Δημοσιεύσεις σε συνέδρια/Conference papers
Files in This Item:
File: Tsapatsoulis_2003_3.pdf (693.17 kB, Adobe PDF)

SCOPUS Citations: 1 (checked on Dec 9, 2017)
Page view(s): 15 (Last week: 0, Last month: 1; checked on Dec 14, 2017)
Download(s): 4 (checked on Dec 14, 2017)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.