Abstract
This paper introduces Goku, a state-of-the-art family of joint image-and-video generation models leveraging rectified flow Transformers to achieve industry-leading performance. We detail the foundational elements enabling high-quality visual generation, including the data curation pipeline, model architecture design, flow formulation, and advanced infrastructure for efficient and robust large-scale training. The Goku models demonstrate superior performance in both qualitative and quantitative evaluations, setting new benchmarks across major tasks. Specifically, Goku achieves 0.76 on GenEval and 83.65 on DPG-Bench for text-to-image generation, and 84.85 on VBench for text-to-video tasks. We believe that this work provides valuable insights and practical advancements for the research community in developing joint image-and-video generation models.
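The abstract names rectified flow as Goku's generative formulation. As background, the standard rectified-flow training objective interpolates linearly between noise and data and regresses the model onto the constant velocity of that path. The sketch below is a minimal, hypothetical illustration of that objective in PyTorch (the function name and `model(xt, t)` signature are assumptions, not Goku's actual code):

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(model, x1):
    """Minimal rectified-flow loss sketch (illustrative, not Goku's code).

    x1: a batch of clean data samples.
    model(xt, t): predicts the velocity field at interpolation time t.
    """
    x0 = torch.randn_like(x1)                           # noise endpoint
    # one interpolation time per sample, broadcastable over data dims
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))
    xt = (1 - t) * x0 + t * x1                          # straight-line path
    v_target = x1 - x0                                  # constant velocity
    v_pred = model(xt, t)
    return F.mse_loss(v_pred, v_target)
```

At sampling time, the learned velocity field is integrated from noise to data with an ODE solver (e.g. a few Euler steps), which is what makes straight-path flows attractive for fast generation.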
Community
We made a deep dive video for this paper: https://www.youtube.com/watch?v=mwXIWcOXu8g.
"Kamehameha! Transform text into video—just like that!"
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models (2025)
- Generative Video Propagation (2024)
- Open-Sora: Democratizing Efficient Video Production for All (2024)
- Pushing the Boundaries of State Space Models for Image and Video Generation (2025)
- Efficient Scaling of Diffusion Transformers for Text-to-Image Generation (2024)
- SUGAR: Subject-Driven Video Customization in a Zero-Shot Manner (2024)
- BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations (2025)
Weights wen? 👀