Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN
Badour AlBahar
Abstract
We present an algorithm for re-rendering a person from a single image under arbitrary poses. Existing methods often have difficulties in hallucinating occluded contents photo-realistically while preserving the identity and fine details in the source image. We first learn to inpaint the correspondence field between the body surface texture and the source image with a human body symmetry prior. The inpainted correspondence field allows us to transfer/warp local features extracted from the source to the target view even under large pose changes. Directly mapping the warped local features to an RGB image using a simple CNN decoder often leads to visible artifacts. Thus, we extend the StyleGAN generator so that it takes pose as input (for controlling poses) and introduces a spatially varying modulation for the latent space using the warped local features (for controlling appearances). We show that our method compares favorably against the state-of-the-art algorithms in both quantitative evaluation and visual comparison.
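The two core ideas in the abstract, warping source features to the target view through a dense correspondence field and conditioning a StyleGAN-like generator with a spatially varying modulation, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; all module names, tensor shapes, and the specific modulation form below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): feature warping via an
# inpainted correspondence field, and a simplified spatially varying
# modulation of generator activations by the warped appearance features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_features(src_feat, corr_field):
    """Warp source-view features into the target view.

    src_feat:   (B, C, H, W) local features extracted from the source image.
    corr_field: (B, H, W, 2) correspondence field holding sampling
                coordinates normalized to [-1, 1] (grid_sample convention).
    """
    return F.grid_sample(src_feat, corr_field, mode='bilinear',
                         padding_mode='border', align_corners=True)


class SpatialModulation(nn.Module):
    """Spatially varying modulation: the warped features predict per-pixel
    scale and bias maps that modulate generator activations, instead of the
    single per-channel style vector used in vanilla StyleGAN (a simplified,
    assumed formulation)."""

    def __init__(self, feat_channels, out_channels):
        super().__init__()
        self.to_scale = nn.Conv2d(feat_channels, out_channels, 1)
        self.to_bias = nn.Conv2d(feat_channels, out_channels, 1)

    def forward(self, x, warped_feat):
        # Resize warped appearance features to the current generator resolution.
        warped_feat = F.interpolate(warped_feat, size=x.shape[-2:],
                                    mode='bilinear', align_corners=False)
        scale = self.to_scale(warped_feat)
        bias = self.to_bias(warped_feat)
        return x * (1.0 + scale) + bias


if __name__ == "__main__":
    B, C, H, W = 1, 64, 32, 32
    src_feat = torch.randn(B, C, H, W)

    # Identity correspondence field as a stand-in for the inpainted field.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing='ij')
    corr_field = torch.stack([xs, ys], dim=-1).unsqueeze(0)

    warped = warp_features(src_feat, corr_field)
    x = torch.randn(B, 128, H, W)        # generator activations (pose branch)
    mod = SpatialModulation(C, 128)
    y = mod(x, warped)                   # appearance-modulated activations
    print(y.shape)                       # torch.Size([1, 128, 32, 32])
```

In this sketch the correspondence field is taken as given; in the paper it is first inpainted with a human body symmetry prior so that occluded regions can still be warped plausibly.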
People
Badour AlBahar, Jingwan Lu, Jimei Yang, Zhixin Shu, Eli Shechtman, Jia-Bin Huang
Publication Details
- Date of publication: September 13, 2021
- Journal: CoRR (Cornell University)
- Publication note: Badour AlBahar, Jingwan Lu, Jimei Yang, Zhixin Shu, Eli Shechtman, Jia-Bin Huang: Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN. CoRR abs/2109.06166 (2021)