Video body-swapping aims to replace the body in an existing video with a new body from an arbitrary source, a task that has garnered increasing attention in recent years. Existing methods treat video body-swapping as a composite of multiple sub-tasks rather than an independent task and typically chain several models together to perform it sequentially. However, these methods cannot be optimized end-to-end for video body-swapping, which causes issues such as luminance variations across frames, disorganized occlusion relationships, and noticeable separation between the body and the background. In this work, we define video body-swapping as an independent task and propose three critical consistencies: identity consistency, motion consistency, and environment consistency. We introduce an end-to-end model named SwapAnyone, which treats video body-swapping as a video inpainting task with reference fidelity and motion control. To improve environmental harmony, particularly luminance harmony, in the resulting video, we introduce a novel EnvHarmony strategy that trains our model progressively. Additionally, we provide HumanAction-32K, a dataset covering a wide variety of human-action videos. Extensive experiments demonstrate that our method achieves State-Of-The-Art (SOTA) performance among open-source methods while approaching or surpassing closed-source models across multiple dimensions. All code, model weights, and the HumanAction-32K dataset will be open-sourced at https://github.com/PKU-YuanGroup/SwapAnyone .
First, the user-provided reference body image and its corresponding DWpose image are processed by the ID Extraction Module. Simultaneously, the DWpose sequence of the body in the target video is fed into the Motion Control Module to extract motion features, which are added to the latents. The latents are then passed into the Inpainting UNet, which integrates features from the ID Extraction Module through self-attention. Meanwhile, the reference body image is processed by the CLIP image encoder to extract features, enabling semantic integration via cross-attention in both the ID Extraction Module and the Inpainting UNet. After denoising, the model outputs a resulting video in which the body in the target video is replaced with the reference body.
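To make the dataflow above concrete, the following is a minimal PyTorch sketch of the described wiring: pose features added to the latents, reference tokens injected through self-attention, and CLIP image features fused through cross-attention. All module names (PoseEncoder, InpaintBlock), dimensions, and shapes here are illustrative assumptions, not the official SwapAnyone implementation.

import torch
import torch.nn as nn

class PoseEncoder(nn.Module):
    """Hypothetical Motion Control Module: maps a DWpose frame to latent-sized features."""
    def __init__(self, latent_dim=320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, latent_dim, kernel_size=3, stride=8, padding=1),
            nn.SiLU(),
        )

    def forward(self, pose):          # pose: (B, 3, H, W)
        return self.net(pose)         # (B, C, h, w), added to the latents

class InpaintBlock(nn.Module):
    """Hypothetical UNet block: self-attention over [video, reference] tokens,
    then cross-attention to CLIP image-encoder features."""
    def __init__(self, dim=320, clip_dim=768, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, kdim=clip_dim,
                                                vdim=clip_dim, batch_first=True)

    def forward(self, video_tokens, ref_tokens, clip_tokens):
        # Reference-fidelity injection: concatenate reference tokens so that
        # video tokens can attend to the reference body via self-attention.
        joint = torch.cat([video_tokens, ref_tokens], dim=1)
        attended, _ = self.self_attn(joint, joint, joint)
        video_tokens = attended[:, : video_tokens.shape[1]]
        # Semantic injection: cross-attention to CLIP image features.
        fused, _ = self.cross_attn(video_tokens, clip_tokens, clip_tokens)
        return video_tokens + fused

# Toy usage with random tensors, purely to illustrate the tensor shapes.
pose_enc, block = PoseEncoder(), InpaintBlock()
pose_feat = pose_enc(torch.randn(1, 3, 512, 512))        # (1, 320, 64, 64)
latents = torch.randn(1, 320, 64, 64) + pose_feat        # motion features added to latents
video_tokens = latents.flatten(2).transpose(1, 2)        # (1, 4096, 320)
ref_tokens = torch.randn(1, 4096, 320)                   # from the ID Extraction Module
clip_tokens = torch.randn(1, 257, 768)                   # CLIP image-encoder tokens
out = block(video_tokens, ref_tokens, clip_tokens)       # (1, 4096, 320)

In this sketch the reference features enter through concatenated self-attention (ReferenceNet-style injection), while CLIP features enter through cross-attention, matching the two integration paths described above; the real model applies these operations inside a full denoising UNet across many blocks and frames.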
[Video gallery omitted: each example shows a Reference image, the Target Video, and the Resulting Video.]
[Comparison gallery omitted: each example shows a Reference image and the Target video alongside results from Viggle, INP+MimicMotion, INP+IP+ContlN, and SwapAnyone.]
@misc{SwapAnyone,
title={SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video},
author={Chengshu Zhao and Yunyang Ge and Xinhua Cheng and Bin Zhu and Yatian Pang and Bin Lin and Fan Yang and Feng Gao and Li Yuan},
year={2025},
eprint={2503.09154},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.09154},
}