Did you not notice that each ZeroGPU space has a 12-core (or was it 24?) server-type CPU? That is more powerful than what you get with a CPU-Upgrade space. And you get 10 for $9! A bargain!
Pendrokar's activity
G2P is an underrated component of small TTS models: like offensive linemen, it does a bunch of work and gets no credit.
Instead of relying on explicit G2P, larger speech models implicitly learn this task by eating many thousands of hours of audio data. They often use a 500M+ parameter LLM at the front to predict latent audio tokens over a learned codebook, then decode these tokens into audio.
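The two-stage pipeline described above (an LLM predicting latent codebook tokens, then a decoder turning tokens into audio) can be sketched as a toy example. Everything here is a stand-in: the "codebook" is a tiny lookup table and the "LLM" is a seeded random generator, purely to illustrate the data flow, not any real model's internals.

```python
# Toy sketch of a two-stage TTS pipeline: a stand-in "LLM" predicts discrete
# tokens over a learned codebook, and a decoder maps each token back to a
# fixed-size chunk of waveform samples. All names and sizes are hypothetical.

import random

CODEBOOK_SIZE = 8          # real systems use far larger codebooks
SAMPLES_PER_TOKEN = 4      # each latent token decodes to a fixed audio chunk

# Stand-in "learned codebook": token id -> waveform chunk.
codebook = {i: [i / CODEBOOK_SIZE] * SAMPLES_PER_TOKEN
            for i in range(CODEBOOK_SIZE)}

def predict_tokens(text, n_tokens=3):
    """Stage 1: stand-in for the LLM that predicts latent audio tokens."""
    rng = random.Random(sum(map(ord, text)))  # deterministic per input text
    return [rng.randrange(CODEBOOK_SIZE) for _ in range(n_tokens)]

def decode(tokens):
    """Stage 2: look each token up in the codebook and concatenate chunks."""
    audio = []
    for t in tokens:
        audio.extend(codebook[t])
    return audio

tokens = predict_tokens("Hello world")
audio = decode(tokens)
print(len(audio))  # 3 tokens * 4 samples each = 12
```

The point of the sketch is the contrast with Kokoro: the heavy lifting of text-to-pronunciation happens implicitly inside stage 1, which is why these models need so much audio data.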
Kokoro instead relies on G2P preprocessing, is 82M parameters, and thus needs less audio to learn. Because of this, we can cherry-pick high-fidelity audio for training data and deliver solid speech for those voices. In turn, this excellent audio quality and lack of background noise help explain why Kokoro is very competitive in single-voice TTS Arenas.
Generate 10 seconds of speech in ~1 second for $0.
What will you build?
webml-community/kokoro-webgpu
The most difficult part was getting the model running in the first place, but the next steps are simple:
- Implement sentence splitting, allowing for streamed responses
- Multilingual support (only phonemization left)
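The sentence-splitting step can be sketched in a few lines: split the input on sentence-final punctuation so each sentence can be synthesized and streamed as soon as it is ready. This is a minimal illustration with hypothetical function names, not Kokoro's actual implementation; a real splitter also has to handle abbreviations, decimals, and so on.

```python
# Minimal sentence splitting for streamed TTS responses (illustrative only).
import re

def split_sentences(text):
    # Split after ., !, or ? followed by whitespace; keep the punctuation.
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

def stream_tts(text, synthesize):
    """Yield audio sentence by sentence instead of waiting for the full text."""
    for sentence in split_sentences(text):
        yield synthesize(sentence)

# Placeholder synthesizer so the streaming order is visible.
chunks = list(stream_tts("Hello there. How are you? I am fine!",
                         lambda s: f"<audio:{s}>"))
print(chunks)
```

With a generator like this, the first sentence's audio can start playing while the rest is still being synthesized.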
Who wants to help?
Which model out of the 8 models listed on my post?
Pendrokar/TTS-Spaces-Arena
I also added MaskGCT, GPT-SoVITS & OuteTTS a month ago. The OuteTTS devs did say that it is too early for it to be added to TTS Arenas.
Mars 5 does have a space with open-weights models, but inference is way too slow (2+ minutes).
Small but mighty: 82M parameters, runs locally, speaks multiple languages. The best part? It's Apache 2.0 licensed!
This could unlock so many possibilities!
Check it out: hexgrad/Kokoro-82M
`pip install kokoro`, and still 82M parameters.
GitHub: https://github.com/hexgrad/kokoro
PyPI: https://pypi.org/project/kokoro/
Space: hexgrad/Kokoro-TTS
After F5 TTS fell near the bottom of the leaderboard at 4000 votes, I extracted a sample from Emilia. Let us see if that changes anything.
stabilityai/stable-point-aware-3d
here's how it looks, with TRELLIS for comparison
If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.
What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc.
Then YOU can contribute to the training mix and get useful artifacts in return.
More details at hexgrad/Kokoro-82M#21
The original Arena's threshold is at 700 votes. But I am sure Kokoro will hold the position. The voice quality actually sounds close to ElevenLabs.
But StyleTTS usually is not very emotional, so it will fail where Edge TTS does: on phrases where the voice has to be sad or angry. Parler Expresso, for example, was overly jolly.
self.brag():
Kokoro finally got 300 votes in Pendrokar/TTS-Spaces-Arena after @Pendrokar was kind enough to add it 3 weeks ago.
Discounting the small sample size of votes, I think it is safe to say that hexgrad/Kokoro-TTS is currently a top 3 model among the contenders in that Arena. This is notable because:
- At 82M params, Kokoro is one of the smaller models in the Arena
- MeloTTS has 52M params
- F5 TTS has 330M params
- XTTSv2 has 467M params
It's expressive, punches way above its weight class, and supports voice cloning. Go check it out!
(Unmute the audio sample below after hitting play)
True, a sample from the original dataset would probably be best. My attempt to fetch one from the Emilia dataset was unsuccessful, as the HF dataset viewer can only show the German samples. Emilia's homepage gives an ASMR-y example prompt.
True about the narration-style sample, but that still did not stop XTTS from surpassing F5. Both use the same sample.
The voice sample used is the same as XTTS. F5 has so far been unstable, being unemotional/monotone/depressed and mispronouncing words (_awestruck_).
If you have suggestions please give feedback in the following thread:
mrfakename/E2-F5-TTS#32
Pendrokar/TTS-Spaces-Arena
Svngoku/maskgct-audio-lab
hexgrad/Kokoro-TTS
I chose @Svngoku 's forked HF space over amphion's due to the overly high ZeroGPU duration demand of the latter: 300s!
amphion/maskgct
Had to remove @mrfakename 's MetaVoice-1B Space from the available models, as that space has been down for quite some time.
mrfakename/MetaVoice-1B-v0.1
I'm close to syncing the code with the original Arena's code structure. Then I'd like to use ASR to validate the generated samples and create synthetic public datasets from them. And then make the Arena multilingual, which will surely attract quite a crowd!
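The ASR-validation idea can be sketched as: transcribe each generated sample with an ASR model, compute the word error rate (WER) against the input text, and keep only samples below a threshold. The ASR call itself is out of scope here (the transcript is passed in as a plain string), but the WER computation is standard word-level Levenshtein distance; the function names and the 0.1 threshold are illustrative choices, not part of the Arena's codebase.

```python
# WER-based filter for building a synthetic dataset from TTS samples.
# wer() is standard edit distance over words; validate() is a hypothetical
# acceptance check with an arbitrary example threshold.

def wer(reference, hypothesis):
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

def validate(text, transcript, threshold=0.1):
    """Accept a synthetic sample only if the ASR transcript closely matches."""
    return wer(text, transcript) <= threshold

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```

Samples that pass the check (and their texts) could then be bundled into a public dataset, while mispronounced or unstable generations are dropped.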