Diffusion Forcing Transformer
Kiwhan Song*¹ · Boyuan Chen*¹ · Max Simchowitz² · Yilun Du³ · Russ Tedrake¹ · Vincent Sitzmann¹

*Equal contribution. ¹MIT ²CMU ³Harvard
Paper | Website | HuggingFace Demo | GitHub Code
This is the official model hub for the paper History-Guided Video Diffusion. We introduce the Diffusion Forcing Transformer (DFoT), a novel video diffusion model designed to generate videos conditioned on an arbitrary number of context frames. Additionally, we present History Guidance (HG), a family of guidance methods uniquely enabled by DFoT. These methods significantly enhance video generation quality, temporal consistency, and motion dynamics, while also unlocking new capabilities such as compositional video generation and the stable rollout of extremely long videos.
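To give a feel for how History Guidance relates to standard classifier-free guidance, below is a minimal sketch of its vanilla form, where the history (context) frames play the role of the conditioning signal. The `model` callable and its `history` argument are illustrative placeholders, not the actual DFoT API; see the GitHub code for the real interface and the more advanced guidance variants.

```python
import torch

def history_guided_noise(
    model,                          # placeholder: predicts noise from (x_t, t, history)
    x_t: torch.Tensor,              # noisy video frames at timestep t
    t: torch.Tensor,                # diffusion timestep(s)
    history: torch.Tensor,          # context frames to condition on
    guidance_weight: float,         # w > 1 strengthens the history conditioning
) -> torch.Tensor:
    """Vanilla history guidance (sketch): classifier-free-style guidance
    where the conditioning signal is the set of history frames.
    Passing history=None stands in for fully masking out the history."""
    eps_uncond = model(x_t, t, history=None)     # history masked out
    eps_cond = model(x_t, t, history=history)    # conditioned on context frames
    # Extrapolate away from the unconditional prediction, as in CFG.
    return eps_uncond + guidance_weight * (eps_cond - eps_uncond)
```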
🤗 Try generating videos with DFoT!
We provide an interactive demo on HuggingFace Spaces, where you can generate videos with DFoT and History Guidance. On the RealEstate10K dataset, you can generate:
- Any Number of Images → Short 2-second Video
- Single Image → Long 10-second Video
- Single Image → Endless Navigation Video (like the teaser above!)
Please check it out and have fun generating videos with DFoT!
🚀 Usage
All pretrained models are automatically downloaded and loaded by our GitHub codebase. Please visit the repository for detailed instructions!
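If you only want to fetch a checkpoint file yourself, something like the following works with `huggingface_hub`. Note that the `repo_id` and `filename` below are hypothetical placeholders; the supported loading path is the one documented in the GitHub repository.

```python
from huggingface_hub import hf_hub_download

# Download a single checkpoint file from the Hugging Face Hub.
# Both repo_id and filename are hypothetical placeholders; check the
# GitHub repository for the actual names and loading instructions.
ckpt_path = hf_hub_download(
    repo_id="kiwhansong/DFoT",      # placeholder repo id
    filename="DFoT_RE10K.ckpt",     # placeholder checkpoint filename
)
print(f"Checkpoint saved to: {ckpt_path}")
```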
📌 Citation
If our work is useful for your research, please consider citing our paper:
    @misc{song2025historyguidedvideodiffusion,
          title={History-Guided Video Diffusion},
          author={Kiwhan Song and Boyuan Chen and Max Simchowitz and Yilun Du and Russ Tedrake and Vincent Sitzmann},
          year={2025},
          eprint={2502.06764},
          archivePrefix={arXiv},
          primaryClass={cs.LG},
          url={https://arxiv.org/abs/2502.06764},
    }