Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/19019
DC Field: Value

dc.contributor.author: Pantraki, Evangelia
dc.contributor.author: Kotropoulos, Constantine L.
dc.contributor.author: Lanitis, Andreas
dc.date.accessioned: 2020-09-18T09:00:51Z
dc.date.available: 2020-09-18T09:00:51Z
dc.date.issued: 2019-04-17
dc.identifier.citation: IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, 12-17 May, Brighton, United Kingdom
dc.identifier.isbn: 978-1-4799-8131-1
dc.identifier.uri: https://hdl.handle.net/20.500.14279/19019
dc.description.abstract: Here, face images of a specific age class are translated to images of different age classes in an unsupervised manner that enables training on independent sets of images for each age class. In order to learn pairwise translations between age classes, we adopt the UNsupervised Image-to-image Translation (UNIT) framework, which employs Variational AutoEncoders and Generative Adversarial Networks. By mapping face images of different age classes to shared latent representations, the most personalized and abstract facial characteristics are preserved. To effectively diffuse age class information, a pyramid of local, neighbour, and global encoders is employed so that the latent representations progressively cover an increased age range. The proposed framework is applied to the FGNET aging database and compared to state-of-the-art techniques and the ground truth. Appealing experimental results demonstrate the ability of the proposed method to efficiently capture both intense and subtle aging effects.
dc.format: pdf
dc.language.iso: en
dc.rights: © IEEE
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Adversarial training
dc.subject: Face aging
dc.subject: Image-to-image translation
dc.subject: Latent space
dc.subject: Pyramid
dc.title: Leveraging Image-to-image Translation Generative Adversarial Networks for Face Aging
dc.type: Conference Papers
dc.collaboration: Aristotle University of Thessaloniki
dc.collaboration: Cyprus University of Technology
dc.subject.category: Computer and Information Sciences
dc.country: Cyprus
dc.country: Greece
dc.subject.field: Natural Sciences
dc.publication: Peer Reviewed
dc.relation.conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
dc.identifier.doi: 10.1109/ICASSP.2019.8682965
cut.common.academicyear: 2018-2019
item.grantfulltext: none
item.languageiso639-1: en
item.cerifentitytype: Publications
item.openairecristype: http://purl.org/coar/resource_type/c_c94f
item.openairetype: conferenceObject
item.fulltext: No Fulltext
crisitem.author.dept: Department of Multimedia and Graphic Arts
crisitem.author.faculty: Faculty of Fine and Applied Arts
crisitem.author.orcid: 0000-0001-6841-8065
crisitem.author.parentorg: Faculty of Fine and Applied Arts
Appears in Collections: Conference papers or posters or presentations
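The shared-latent-space mechanism described in the abstract can be sketched in code. This is a purely illustrative toy, not the paper's implementation: simple linear maps stand in for the VAE-GAN encoders and decoders of the UNIT framework, and all names (`E1`, `G2`, `translate_1_to_2`, the dimensions) are hypothetical. The point it demonstrates is that both age classes encode into the same latent space, so translation is "encode with the source class, decode with the target class".

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_IMG, DIM_Z = 64, 8  # flattened toy "image" size, shared latent size

# Per-age-class encoders and decoders; in the actual framework these are
# VAE-GAN networks, here just random linear maps for illustration.
E1 = rng.normal(size=(DIM_Z, DIM_IMG)) / np.sqrt(DIM_IMG)  # encoder, age class 1
E2 = rng.normal(size=(DIM_Z, DIM_IMG)) / np.sqrt(DIM_IMG)  # encoder, age class 2
G1 = rng.normal(size=(DIM_IMG, DIM_Z)) / np.sqrt(DIM_Z)    # decoder, age class 1
G2 = rng.normal(size=(DIM_IMG, DIM_Z)) / np.sqrt(DIM_Z)    # decoder, age class 2

def translate_1_to_2(x1):
    """Map a class-1 face to class 2 via the shared latent code z."""
    z = E1 @ x1      # both classes map into the SAME latent space
    return G2 @ z    # decoding with the class-2 generator performs the translation

x1 = rng.normal(size=DIM_IMG)   # a toy "face" from age class 1
x2_fake = translate_1_to_2(x1)  # its translation to age class 2
print(x2_fake.shape)            # (64,)
```

Because the latent space is shared, no paired images across age classes are needed for training, which is what makes the approach unsupervised in the sense the abstract describes.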

SCOPUS Citations: 9 (checked on Nov 6, 2023)



This item is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.