SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video

ChengShu Zhao1*, Yunyang Ge1,2*, Xinhua Cheng1,2, Bin Zhu1,2, Yatian Pang3, Bin Lin1,2,
Fan Yang1, Feng Gao1†, Li Yuan1†
1Peking University     2Rabbitpre Intelligence     3National University of Singapore

SwapAnyone allows users to provide a reference body image and a target video from any source, then seamlessly replaces the original body in the target video with the provided body, producing a highly realistic video.

Abstract

Video body-swapping aims to replace the body in an existing video with a new body from an arbitrary source, and has garnered increasing attention in recent years. Existing methods treat video body-swapping as a composite of multiple tasks rather than an independent task, and typically rely on a sequence of separate models to accomplish it. However, these methods cannot be optimized end-to-end for video body-swapping, which causes issues such as luminance variations across frames, disorganized occlusion relationships, and a noticeable separation between the body and the background. In this work, we define video body-swapping as an independent task and propose three critical consistencies: identity consistency, motion consistency, and environment consistency. We introduce an end-to-end model named SwapAnyone, which treats video body-swapping as a video inpainting task with reference fidelity and motion control. To improve environmental harmony, particularly luminance harmony, in the resulting video, we introduce a novel EnvHarmony strategy for training our model progressively. Additionally, we provide a dataset named HumanAction-32K covering a wide variety of human-action videos. Extensive experiments demonstrate that our method achieves State-Of-The-Art (SOTA) performance among open-source methods while approaching or surpassing closed-source models across multiple dimensions. All code, weights, and the HumanAction-32K dataset will be open-sourced at https://github.com/PKU-YuanGroup/SwapAnyone .

Method Overview


First, the user-provided reference body image and its corresponding DWpose image are processed by the ID Extraction Module. Simultaneously, the DWpose sequence of the body in the target video is fed into the Motion Control Module to extract motion features, which are incorporated into the latents. The latents are then passed to the Inpainting UNet, which integrates features from the ID Extraction Module through self-attention. Meanwhile, the reference body image is encoded by the CLIP image encoder, and the resulting semantic features are injected via cross-attention into both the ID Extraction Module and the Inpainting UNet. After denoising, the model outputs a video in which the body in the target video is replaced with the reference body.
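The sketch below illustrates this data flow in PyTorch-style pseudocode. All module names, argument lists, and the additive injection of motion features are illustrative assumptions for readability; they do not mirror the released SwapAnyone implementation.

import torch.nn as nn

class SwapAnyoneSketch(nn.Module):
    # Hypothetical wrapper: the four submodules are supplied by the caller.
    def __init__(self, id_extractor, motion_controller, inpainting_unet, clip_image_encoder):
        super().__init__()
        self.id_extractor = id_extractor            # encodes reference body image + its DWpose image
        self.motion_controller = motion_controller  # encodes the target video's DWpose sequence
        self.inpainting_unet = inpainting_unet      # denoises the masked video latents
        self.clip_image_encoder = clip_image_encoder

    def forward(self, noisy_latents, ref_body_image, ref_dwpose_image, target_dwpose_seq, timestep):
        # 1. Identity features from the reference body image and its DWpose image.
        id_features = self.id_extractor(ref_body_image, ref_dwpose_image)

        # 2. Motion features from the target DWpose sequence, incorporated into the latents
        #    (shown here as a simple addition; the exact injection is an assumption).
        motion_features = self.motion_controller(target_dwpose_seq)
        latents = noisy_latents + motion_features

        # 3. Semantic features from the CLIP image encoder, used for cross-attention
        #    in both the ID Extraction Module and the Inpainting UNet.
        clip_features = self.clip_image_encoder(ref_body_image)

        # 4. The Inpainting UNet fuses ID features via self-attention and CLIP features
        #    via cross-attention while predicting the denoised latents.
        return self.inpainting_unet(
            latents, timestep,
            id_features=id_features,
            clip_features=clip_features,
        )

One design note implied by the description: identity and semantic information enter only through attention layers, while motion control is added directly to the latents, so the Inpainting UNet's denoising interface stays close to a standard video inpainting model.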



Our Results

Comparisons with others

BibTeX

@misc{SwapAnyone,
      title={SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video},
      author={Chengshu Zhao and Yunyang Ge and Xinhua Cheng and Bin Zhu and Yatian Pang and Bin Lin and Fan Yang and Feng Gao and Li Yuan},
      year={2025},
      eprint={2503.09154},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.09154},
}