Personalized Age Transformation Using Diffusion Model
Age transformation of facial images modifies a person's appearance to make them look older or younger while preserving their identity. Deep learning methods can produce natural-looking age transformations, but they often miss the individual-specific features shaped by a person's life history. In this paper, the authors propose a personalized age transformation method based on a diffusion model.
The diffusion model takes a facial image and a target age as input and outputs an age-edited face image. It captures not only average age-related changes but also the individual-specific appearance shaped by the person's life history. To achieve this, the authors add supervision from self-reference images: photographs of the same person taken at different ages.
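The summary does not spell out the authors' pipeline, but the input/output contract, a face image plus a target age in and an age-edited face out, can be illustrated with a generic image-to-image diffusion pipeline from Hugging Face diffusers. The model ID, prompt wording, and strength value below are illustrative assumptions, not the authors' configuration:

```python
# Illustrative sketch of the interface only: face image + target age in,
# age-edited face out. The model ID, prompt, and strength are assumed
# values, not the authors' actual pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

def edit_age(face: Image.Image, target_age: int) -> Image.Image:
    """Return a version of `face` re-rendered at `target_age` years old."""
    prompt = f"a photo of a {target_age} year old person, detailed face"
    result = pipe(
        prompt=prompt,
        image=face.resize((512, 512)),
        strength=0.5,          # keep enough of the input to preserve identity
        guidance_scale=7.5,
    )
    return result.images[0]

aged = edit_age(Image.open("input_face.jpg").convert("RGB"), target_age=60)
aged.save("output_face_60.jpg")
```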
The authors fine-tune a pretrained diffusion model on approximately three to five self-reference images to adapt it to a specific person. This lets the model learn the unique characteristics of that individual's aging process, so it better preserves identity while performing the age edit.
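One plausible reading of this step is a DreamBooth-style personalization loop over the handful of self-reference photos. The sketch below uses PyTorch and Hugging Face diffusers; the base model, identity token, file names, ages, and hyperparameters are all assumptions for illustration and not the authors' settings:

```python
# Hypothetical sketch: personalizing a pretrained latent diffusion model on
# 3-5 self-reference photos of one person. Model ID, prompt token, paths,
# and hyperparameters are assumed, not the authors' configuration.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"          # assumed base model
device = "cuda" if torch.cuda.is_available() else "cpu"

vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

vae.requires_grad_(False)           # VAE and text encoder stay frozen
text_encoder.requires_grad_(False)
unet.train()
optimizer = torch.optim.AdamW(unet.parameters(), lr=2e-6)

# 3-5 self-reference images of the same person at known ages (placeholder paths).
refs = [("ref_age25.jpg", 25), ("ref_age35.jpg", 35), ("ref_age50.jpg", 50)]
to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

for step in range(400):             # small budget, since there are only a few images
    path, age = refs[step % len(refs)]
    pixels = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    # Encode the reference photo into the latent space of the frozen VAE.
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

    # Text condition: a rare identity token plus the known age of this photo.
    prompt = f"a photo of sks person, {age} years old"
    ids = tokenizer(prompt, padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True, return_tensors="pt").input_ids.to(device)
    text_embeds = text_encoder(ids)[0]

    # Standard denoising objective: predict the noise added at a random timestep.
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_embeds).sample

    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Pairing each reference photo with its known age in the conditioning text is what lets the model associate the person's identity token with how that specific face changes over time.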
In addition to the self-reference images, the authors design an effective prompt that further improves age editing and identity preservation. The prompt acts as a guiding signal for the diffusion model, steering it toward more accurate and visually convincing age-edited faces.
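The summary does not quote the actual prompt, but a plausible construction, assuming the personalized identity token from the fine-tuning sketch above, pairs that token with an explicit age phrase and a negative prompt that discourages identity drift. The "sks" token and the wording are assumptions, not the prompt reported by the authors:

```python
# Hypothetical prompt construction; the identity token and wording are
# assumptions, not the authors' reported prompt.
def build_prompts(identity_token: str, target_age: int) -> tuple[str, str]:
    prompt = (
        f"a photo of {identity_token} person, {target_age} years old, "
        "sharp facial details, natural skin texture"
    )
    negative_prompt = "different person, distorted face, blurry, low quality"
    return prompt, negative_prompt

prompt, negative = build_prompts("sks", target_age=70)
# Passed to the diffusion pipeline, e.g.
# pipe(prompt=prompt, negative_prompt=negative, image=face, strength=0.5)
```

The intuition is that the identity token carries the person-specific appearance learned from the self-reference images, while the age phrase supplies the editing target, so the two objectives of age editing and identity preservation are expressed in a single conditioning signal.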
The authors' experiments show that the proposed method outperforms existing approaches both quantitatively and qualitatively, particularly in preserving individual-specific appearance and maintaining identity.
This research has practical implications for domains such as entertainment, forensics, and the cosmetics industry. Accurate, realistic age transformation of facial images can support applications such as generating age-progressed images of missing persons or simulating the effects of aging for entertainment purposes.
The released code and pretrained models make the work easier to apply in practice: researchers and developers can implement and build on the proposed method directly.
In conclusion, the personalized age transformation method based on a diffusion model and self-reference images is a notable advance in the field. It achieves strong age editing and identity preservation while opening up new possibilities for personalized image transformation.