Title: Leveraging Image-to-image Translation Generative Adversarial Networks for Face Aging
Authors: Pantraki, Evangelia 
Kotropoulos, Constantine L. 
Lanitis, Andreas 
Major Field of Science: Natural Sciences
Field Category: Computer and Information Sciences
Keywords: Adversarial training;Face aging;Image-to-image-translation;Latent space;Pyramid
Issue Date: 17-Apr-2019
Source: IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, 12-17 May, Brighton, United Kingdom
Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 
Abstract: Here, face images of a specific age class are translated to images of different age classes in an unsupervised manner that enables training on independent sets of images for each age class. In order to learn pairwise translations between age classes, we adopt the UNsupervised Image-to-image Translation (UNIT) framework that employs Variational AutoEncoders and Generative Adversarial Networks. By mapping face images of different age classes to shared latent representations, the most personalized and abstract facial characteristics are preserved. To effectively diffuse age class information, a pyramid of local, neighbour, and global encoders is employed so that the latent representations progressively cover an increased age range. The proposed framework is applied to the FGNET aging database and compared to state-of-the-art techniques and the ground truth. Appealing experimental results demonstrate the ability of the proposed method to efficiently capture both intense and subtle aging effects.
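The core idea in the abstract, one encoder/decoder pair per age class with all encoders mapping into a single shared latent space so that encoding with one class and decoding with another performs the translation, can be sketched structurally as follows. This is a hypothetical toy illustration, not the authors' implementation; the function names and dimensions are assumptions for clarity only.

```python
# Toy structural sketch (hypothetical, not the paper's code) of the
# shared-latent-space principle behind UNIT-style face aging:
# each age class owns an encoder and a decoder, all encoders target
# the SAME latent space, so encode(class A) -> decode(class B)
# yields a cross-class translation.
import random

LATENT_DIM = 4  # assumed toy latent size

def make_encoder(seed):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(LATENT_DIM)]
    def encode(pixels):
        # toy encoder: pools the input and projects it into the shared latent space
        mean = sum(pixels) / len(pixels)
        return [wi * mean for wi in w]
    return encode

def make_decoder(seed, out_len):
    rng = random.Random(seed)
    template = [rng.uniform(-1, 1) for _ in range(out_len)]
    def decode(z):
        # toy decoder: the latent code modulates a class-specific template
        s = sum(z)
        return [t + s for t in template]
    return decode

# one encoder/decoder pair per age class; the latent space is shared
enc_young, dec_young = make_encoder(0), make_decoder(1, 8)
enc_old, dec_old = make_encoder(2), make_decoder(3, 8)

young_face = [0.2, 0.4, 0.1, 0.9, 0.5, 0.3, 0.7, 0.6]  # stand-in "image"
z = enc_young(young_face)   # encode with the young-class encoder
aged_face = dec_old(z)      # decode with the old-class decoder -> "aged" output
print(len(z), len(aged_face))
```

In the actual framework the encoders and decoders are convolutional VAEs trained adversarially, and a pyramid of local, neighbour, and global encoders widens the age range each latent code covers; the sketch only shows the routing through the shared latent space.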
ISBN: 978-1-4799-8131-1
DOI: 10.1109/ICASSP.2019.8682965
Rights: © IEEE
Type: Conference Papers
Affiliation: Aristotle University of Thessaloniki
Cyprus University of Technology
Appears in Collections: Conference papers, posters, or presentations

This item is licensed under a Creative Commons License.