Recent advances in Text-to-Video generation (T2V) have achieved remarkable success in synthesizing high-quality general videos from textual descriptions. A largely overlooked problem in T2V is that existing models have not adequately encoded physical knowledge of the real world, so the generated videos tend to have limited motion and poor variation. In this paper, we propose MagicTime, a metamorphic time-lapse video generation model that learns real-world physics knowledge from time-lapse videos and implements metamorphic generation. First, we design a MagicAdapter scheme to decouple spatial and temporal training, encode more physical knowledge from metamorphic videos, and transform pre-trained T2V models to generate metamorphic videos. Second, we introduce a Dynamic Frames Extraction strategy to adapt to metamorphic time-lapse videos, which have a wider variation range and cover dramatic object metamorphic processes, thus embodying more physical knowledge than general videos. Finally, we introduce a Magic Text-Encoder to improve the understanding of metamorphic video prompts. Furthermore, we create a time-lapse video-text dataset called ChronoMagic, specifically curated to unlock the metamorphic video generation ability. Extensive experiments demonstrate the superiority and effectiveness of MagicTime for generating high-quality and dynamic metamorphic videos, suggesting time-lapse video generation is a promising path toward building metamorphic simulators of the physical world.
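The abstract does not spell out how Dynamic Frames Extraction works. As a rough illustration only, the sketch below assumes it amounts to sampling a fixed number of frames spread uniformly over the whole time-lapse clip, so that the complete metamorphic process (e.g., seed to mature plant) is covered rather than a short contiguous window; the function name and OpenCV-based implementation are our own illustration, not the released MagicTime code.

```python
# Illustrative sketch (not the official implementation): sample `num_frames`
# frames spread uniformly across the entire clip so the full metamorphic
# process is covered, instead of a short contiguous window.
import cv2
import numpy as np


def extract_dynamic_frames(video_path: str, num_frames: int = 16) -> np.ndarray:
    """Return `num_frames` RGB frames spanning the whole video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Frame indices spread evenly from the first to the last frame.
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)

    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return np.stack(frames)  # shape: (num_frames, H, W, 3)
```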
Compared to general videos, metamorphic videos contain more physical knowledge, longer persistence, and stronger variation, making them harder to generate. The GIFs shown on GitHub are compressed and lose some quality. The general videos are generated by AnimateDiff, and the metamorphic videos by MagicTime.
Type | "Bean sprouts grow and mature from seeds" | "[...] construction in a Minecraft virtual environment" | "Cupcakes baking in an oven [...]" | "[...] from a tightly closed bud to [...]" |
General Videos | ||||
Metamorphic Videos |
Overview of the proposed MagicTime approach. We first train MagicAdapter-S to remove the influence of watermarks. Next, MagicAdapter-T is trained to generate metamorphic videos with the help of Dynamic Frames Extraction. Finally, we train a Magic Text-Encoder to enhance text comprehension. During the inference stage, all components are loaded simultaneously. Slash padding indicates that a module is not used.
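The overview above describes three separately trained components that are all loaded at inference. The snippet below is a purely illustrative sketch of that wiring; the class and method names (`MagicTimePipeline`, `with_adapters`, `sample`) are hypothetical and do not correspond to the released code.

```python
# Illustrative sketch only: hypothetical names, not the MagicTime API.
# It mirrors the caption: a frozen pre-trained T2V backbone plus MagicAdapter-S,
# MagicAdapter-T, and the Magic Text-Encoder are all attached at inference time.
from dataclasses import dataclass
from typing import Any


@dataclass
class MagicTimePipeline:
    base_t2v: Any        # frozen pre-trained T2V backbone
    adapter_s: Any       # MagicAdapter-S: spatial stage (trained to remove watermark influence)
    adapter_t: Any       # MagicAdapter-T: temporal stage (trained on metamorphic clips)
    text_encoder: Any    # Magic Text-Encoder: improves metamorphic prompt understanding

    def generate(self, prompt: str, num_frames: int = 16) -> Any:
        cond = self.text_encoder.encode(prompt)                           # text conditioning
        model = self.base_t2v.with_adapters(self.adapter_s, self.adapter_t)
        return model.sample(cond, num_frames=num_frames)                  # denoising/sampling loop
```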
We showcase metamorphic videos generated by MagicTime and by MakeLongVideo, ModelScopeT2V, VideoCrafter, ZeroScope, LaVie, T2V-Zero, Latte, and AnimateDiff below. The GIFs below are compressed and lose some quality.
| Method | | | | |
| --- | --- | --- | --- | --- |
| MakeLongVideo | (GIF) | (GIF) | (GIF) | (GIF) |
| ModelScopeT2V | (GIF) | (GIF) | (GIF) | (GIF) |
| VideoCrafter | (GIF) | (GIF) | (GIF) | (GIF) |
| ZeroScope | (GIF) | (GIF) | (GIF) | (GIF) |
| LaVie | (GIF) | (GIF) | (GIF) | (GIF) |
| T2V-Zero | (GIF) | (GIF) | (GIF) | (GIF) |
| Latte | (GIF) | (GIF) | (GIF) | (GIF) |
| Animatediff | (GIF) | (GIF) | (GIF) | (GIF) |
| Ours | (GIF) | (GIF) | (GIF) | (GIF) |
We show more metamorphic videos generated by MagicTime with the help of the Realistic, ToonYou, and RcnzCartoon personalized checkpoints.
The mission of this project is to help reproduce Sora and to provide high-quality video-text data and data-annotation pipelines in support of Open-Sora-Plan and other DiT-based T2V models. As an initial step, we integrate our MagicTime scheme into a DiT-based framework; specifically, our method supports fine-tuning of Open-Sora-Plan v1.0.0. We first scale up the data with additional metamorphic landscape time-lapse videos, annotated with the same pipeline, to obtain the ChronoMagic-Landscape dataset. We then fine-tune Open-Sora-Plan v1.0.0 on ChronoMagic-Landscape to obtain the MagicTime-DiT model. The results are shown below (257×512×512, 10 s).
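The exact fine-tuning command belongs to the Open-Sora-Plan codebase and is not reproduced here. The loop below is only a generic sketch of fine-tuning a DiT-style T2V model on a video-text dataset such as ChronoMagic-Landscape, with placeholder model and dataset objects rather than the real Open-Sora-Plan v1.0.0 interfaces.

```python
# Generic sketch of fine-tuning a DiT-style T2V model on video-text pairs.
# `model(videos, captions)` returning a diffusion loss is an assumption for
# illustration; it is not the Open-Sora-Plan v1.0.0 API.
import torch
from torch.utils.data import DataLoader, Dataset


def finetune(model: torch.nn.Module, dataset: Dataset,
             epochs: int = 1, lr: float = 1e-5, device: str = "cuda") -> None:
    model.train().to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=1, shuffle=True)
    for _ in range(epochs):
        for videos, captions in loader:        # videos: (B, T, C, H, W); captions: list[str]
            videos = videos.to(device)
            loss = model(videos, captions)     # assumed to return the training (diffusion) loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```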
In this work, we compiled a collection of time-lapse videos from the Internet to create a metamorphic video-text dataset named ChronoMagic, containing 2,265 videos. We showcase samples from the dataset below and plan to scale it up to include additional categories and a larger number of videos in the future.
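For readers who want to plug ChronoMagic-style data into their own pipeline, the following is a hypothetical loader sketch: the CSV layout (`video_path` and `caption` columns) and the uniform frame sampling are assumptions for illustration, not the released dataset format.

```python
# Hypothetical ChronoMagic-style video-text loader. The CSV columns
# (`video_path`, `caption`) are an illustrative assumption, not the released
# file layout. Frames are sampled uniformly across each clip, in the spirit
# of Dynamic Frames Extraction.
import csv

import cv2
import numpy as np
from torch.utils.data import Dataset


class ChronoMagicDataset(Dataset):
    def __init__(self, csv_path: str, num_frames: int = 16):
        with open(csv_path, newline="", encoding="utf-8") as f:
            self.items = list(csv.DictReader(f))
        self.num_frames = num_frames

    def __len__(self) -> int:
        return len(self.items)

    def __getitem__(self, idx: int):
        item = self.items[idx]
        cap = cv2.VideoCapture(item["video_path"])
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        indices = np.linspace(0, max(total - 1, 0), self.num_frames).astype(int)
        frames = []
        for i in indices:
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
            ok, frame = cap.read()
            if ok:
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cap.release()
        return np.stack(frames), item["caption"]   # (num_frames, H, W, 3), str
```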
```bibtex
@article{yuan2024magictime,
  title={MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators},
  author={Yuan, Shenghai and Huang, Jinfa and Shi, Yujun and Xu, Yongqi and Zhu, Ruijie and Lin, Bin and Cheng, Xinhua and Yuan, Li and Luo, Jiebo},
  journal={arXiv preprint arXiv:2404.05014},
  year={2024}
}
```