Identity-Preserving Text-to-Video Generation by Frequency Decomposition

1Peking University, 2Peng Cheng Laboratory, 3University of Rochester, 4National University of Singapore

ConsisID can generate high-quality identity-preserving videos!

Abstract

Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. It is an important task in video generation but remains an open problem for generative models. This paper pushes the technical frontier of IPT2V in two directions that have not been resolved in the literature: (1) A tuning-free pipeline without tedious case-by-case finetuning, and (2) A frequency-aware heuristic identity-preserving Diffusion Transformer (DiT)-based control scheme. To achieve these goals, we propose ConsisID, a tuning-free DiT-based controllable IPT2V model to keep human-identity consistent in the generated video. Inspired by prior findings in frequency analysis of vision/diffusion transformers, it employs identity-control signals in the frequency domain, where facial features can be decomposed into low-frequency global features (e.g., profile, proportions) and high-frequency intrinsic features (e.g., identity markers that remain unaffected by pose changes). First, from a low-frequency perspective, we introduce a global facial extractor, which encodes the reference image and facial key points into a latent space, generating features enriched with low-frequency information. These features are then integrated into the shallow layers of the network to alleviate training challenges associated with DiT. Second, from a high-frequency perspective, we design a local facial extractor to capture high-frequency details and inject them into the transformer blocks, enhancing the model's ability to preserve fine-grained features. To leverage the frequency information for identity preservation, we propose a hierarchical training strategy, transforming a vanilla pre-trained video generation model into an IPT2V model. Extensive experiments demonstrate that our frequency-aware heuristic scheme provides an optimal control solution for DiT-based models. Thanks to this scheme, our ConsisID achieves excellent results in generating high-quality, identity-preserving videos, making strides towards more effective IPT2V.
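To make the frequency decomposition concrete, here is a minimal PyTorch sketch (not the authors' code; the function name, kernel size, and sigma are illustrative assumptions) that splits a reference face into a low-frequency global component and a high-frequency residual, mirroring the intuition behind the global and local facial extractors.

# Minimal sketch (not ConsisID's code): splitting a reference face into a
# low-frequency global component and a high-frequency residual.
import torch
import torchvision.transforms.functional as TF

def frequency_decompose(face: torch.Tensor, kernel_size: int = 21, sigma: float = 5.0):
    """Split a (C, H, W) face tensor into low- and high-frequency parts.

    low  ~ global structure such as profile and proportions
    high ~ fine-grained identity details left after removing the blur
    """
    low = TF.gaussian_blur(face, kernel_size=[kernel_size, kernel_size],
                           sigma=[sigma, sigma])
    high = face - low  # residual keeps the high-frequency detail
    return low, high

# Usage with a random stand-in for a 3x512x512 reference face.
face = torch.rand(3, 512, 512)
low_freq, high_freq = frequency_decompose(face)
print(low_freq.shape, high_freq.shape)  # torch.Size([3, 512, 512]) twice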

Video



Findings of DiT

Finding 1

Shallow (i.e., low-level, low-frequency) features are essential for pixel-level prediction tasks in diffusion models, as they ease model training. U-Net facilitates model convergence by aggregating shallow features into the decoder via long skip connections, a mechanism that DiT does not incorporate.
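As a toy illustration of this finding (a sketch under simplifying assumptions, not ConsisID code), the snippet below contrasts a U-Net-style module, whose long skip connection hands shallow features directly to the decoder, with a plain DiT-style stack that processes tokens strictly sequentially and has no such shortcut.

# Toy contrast between a U-Net long skip connection and a skip-free DiT stack.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.enc = nn.Conv2d(3, dim, 3, padding=1)
        self.mid = nn.Conv2d(dim, dim, 3, padding=1)
        self.dec = nn.Conv2d(dim * 2, 3, 3, padding=1)  # consumes the concatenated skip

    def forward(self, x):
        shallow = self.enc(x)                     # shallow, low-frequency-rich features
        deep = self.mid(shallow)
        skip = torch.cat([deep, shallow], dim=1)  # long skip connection
        return self.dec(skip)

class TinyDiT(nn.Module):
    def __init__(self, dim: int = 64, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, tokens):
        for blk in self.blocks:            # purely sequential: shallow features
            tokens = blk(tokens)           # are never re-injected downstream
        return tokens

print(TinyUNet()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
print(TinyDiT()(torch.rand(1, 16, 64)).shape)      # torch.Size([1, 16, 64])

ConsisID's global facial extractor can be read as restoring this kind of shallow-feature pathway by injecting low-frequency facial features into the early DiT layers.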

Finding 2

Transformers have limited perception of high-frequency information, which is important for preserving facial features. The encoder-decoder architecture of U-Net naturally provides multi-scale features (i.e., features rich in high-frequency information), while DiT lacks a comparable structure.
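Sketched below (an illustrative assumption, not the paper's local facial extractor) is one way to expose high-frequency facial cues explicitly: apply a Laplacian high-pass filter to the reference face and flatten the result into tokens that an attention block could attend to.

# Illustrative high-pass extraction of facial detail (not the paper's extractor).
import torch
import torch.nn.functional as F

def high_pass(face: torch.Tensor) -> torch.Tensor:
    """Apply a depthwise Laplacian kernel to a (B, C, H, W) face batch."""
    kernel = torch.tensor([[0., -1., 0.],
                           [-1., 4., -1.],
                           [0., -1., 0.]], dtype=face.dtype, device=face.device)
    c = face.shape[1]
    kernel = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    return F.conv2d(face, kernel, padding=1, groups=c)

face = torch.rand(1, 3, 512, 512)
edges = high_pass(face)                    # high-frequency identity cues
tokens = edges.flatten(2).transpose(1, 2)  # (B, H*W, C); a real model would patchify first
print(tokens.shape)                        # torch.Size([1, 262144, 3])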

Overview of ConsisID Model

Figure: Overview of the ConsisID framework.

Based on the Findings of DiT, low-frequency facial information is injected into the shallow layers, while high-frequency information is incorporated into the vision tokens within the attention blocks. The ID-preserving recipe is applied to ease training and improve generalization: cross-face references, DropToken, and Dropout are each applied with a given probability.
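The stochastic parts of that recipe could look roughly like the sketch below; the function name, probabilities, and the exact cross-face and DropToken behavior are assumptions for illustration rather than the released training code.

# Rough sketch of probabilistic regularization during training (illustrative only).
import random
import torch
import torch.nn.functional as F

def apply_recipe(id_tokens: torch.Tensor,
                 same_clip_face: torch.Tensor,
                 cross_clip_face: torch.Tensor,
                 p_cross_face: float = 0.5,
                 p_drop_token: float = 0.1,
                 p_dropout: float = 0.1):
    # Cross face: with some probability, use a reference of the same identity
    # taken from a different clip instead of the current one (assumed behavior).
    ref_face = cross_clip_face if random.random() < p_cross_face else same_clip_face

    # DropToken: occasionally remove the identity tokens entirely so the model
    # does not become fully dependent on them.
    if random.random() < p_drop_token:
        id_tokens = torch.zeros_like(id_tokens)

    # Dropout: standard feature dropout on the surviving identity tokens.
    id_tokens = F.dropout(id_tokens, p=p_dropout, training=True)
    return id_tokens, ref_face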

Comparison

Gallery

Samples of our In-House Dataset

BibTeX

@misc{yuan2024identitypreservingtexttovideogenerationfrequency,
      title={Identity-Preserving Text-to-Video Generation by Frequency Decomposition}, 
      author={Shenghai Yuan and Jinfa Huang and Xianyi He and Yunyuan Ge and Yujun Shi and Liuhan Chen and Jiebo Luo and Li Yuan},
      year={2024},
      eprint={2411.17440},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17440}, 
}