DC Field | Value | Language
dc.contributor.author | Votsis, George | -
dc.contributor.author | Tsapatsoulis, Nicolas | -
dc.contributor.author | Karpouzis, Kostas | -
dc.contributor.author | Kollias, Stefanos D. | -
dc.contributor.other | Τσαπατσούλης, Νικόλας | -
dc.description.abstract | An approach to building simplified models of human faces from 2D images is proposed in this paper. The method uses the 2D views, on which certain protuberant points are automatically detected, and adapts a generic 3D head model (polygon mesh) according to the information gained from the available views. This mesh provides shape information which, combined with texture information, efficiently describes specific 3D human faces. Issues related to luminance differences and rotation variations between the available views are successfully handled within the texture-map creation process. A set of localized transformations is also applied in order to preserve the continuity of the human head surface. In addition, the problem of using a minimal organic model representation is addressed, treated as a trade-off between low computational complexity and high approximation quality. | en
dc.publisher | Springer London | en
dc.subject | 2D images | en
dc.subject | 3D head model | en
dc.title | A simplified representation of 3D human faces adapted from 2D images | en
dc.type | Conference Papers | en
dc.collaboration | National Technical University of Athens | -
dc.subject.category | Computer and Information Sciences | en
dc.review | Peer Reviewed | en
dc.subject.field | Natural Sciences | en
item.fulltext | No Fulltext | -
item.languageiso639-1 | other | -
(truncated affiliation fields:) of Communication and Internet Studies; of Communication and Media Studies; of Communication and Media Studies
Appears in Collections:Δημοσιεύσεις σε συνέδρια/Conference papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.