gguf/fp8 quantized versions of cosmos text2world and video2world (testing in progress)
setup (once)
- drag cosmos-7b-text2world-q4_k_m.gguf [4.07GB] to > ./ComfyUI/models/diffusion_models
- drag oldt5_xxl_fp8_e4m3fn.safetensors [4.9GB] to > ./ComfyUI/models/text_encoders
- drag cosmos_cv8x8x8_1.0_vae_bf16.safetensors [211MB] to > ./ComfyUI/models/vae
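the drag-and-drop steps above can equally be scripted; a minimal python sketch (the `place` helper and `COMFY_ROOT` are assumptions for illustration, the filenames and target folders are the ones listed above):

```python
# sketch: move the three downloaded files into ComfyUI's expected model
# folders; adjust COMFY_ROOT to your install location (assumption: ./ComfyUI)
import os
import shutil

COMFY_ROOT = "./ComfyUI"

# filename -> ComfyUI subfolder, as listed in the setup steps above
DESTINATIONS = {
    "cosmos-7b-text2world-q4_k_m.gguf": "models/diffusion_models",
    "oldt5_xxl_fp8_e4m3fn.safetensors": "models/text_encoders",
    "cosmos_cv8x8x8_1.0_vae_bf16.safetensors": "models/vae",
}

def place(src_dir: str) -> None:
    """Move each downloaded file from src_dir into its ComfyUI folder."""
    for name, sub in DESTINATIONS.items():
        dst_dir = os.path.join(COMFY_ROOT, sub)
        os.makedirs(dst_dir, exist_ok=True)
        src = os.path.join(src_dir, name)
        if os.path.exists(src):
            shutil.move(src, os.path.join(dst_dir, name))
```

run `place("~/Downloads")` (expanded) once after downloading; files already in place are simply skipped.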
run it straight (no installation needed)
- run the .bat file in the main directory (assuming you are using the gguf-node pack below)
- drag the workflow json file (below), or the sample webp file, to > your browser
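if the node refuses to load the quantized checkpoint, it is worth ruling out a corrupted download first; a minimal check (the `is_gguf` helper is a hypothetical name, but a valid gguf file does begin with the 4-byte `GGUF` magic, so an html error page or truncated download fails it):

```python
# sanity check: every gguf file starts with the ascii magic bytes b"GGUF"
def is_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```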
workflow
- example workflow for text2world
- example workflow for video2world
review
- roughly working, but not very stable/consistent for the time being
- gguf with the pig architecture works right away; testing is welcome
reference
- base model from nvidia (nvidia/Cosmos-1.0-Diffusion-7B-Text2World; text2world:7b|14b & video2world:7b|14b)
- pig architecture from connector
- comfyui from comfyanonymous
- gguf-node (pypi|repo|pack)