hexgrad (PRO)

AI & ML interests

Building in a cave with a box of scraps

Recent Activity

updated a Space 1 day ago
hexgrad/Misaki-G2P
updated a Space 1 day ago
hexgrad/Kokoro-TTS

Organizations

None yet

hexgrad's activity

reacted to Xenova's post with 🔥 4 days ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? 🔥
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
โœ‚๏ธ Implement sentence splitting, allowing for streamed responses
๐ŸŒ Multilingual support (only phonemization left)

Who wants to help?
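A minimal sketch of that sentence-splitting step, assuming a placeholder `synthesize` callback that turns one sentence into audio bytes (the callback name is illustrative, not part of any existing API); splitting on sentence boundaries lets the first chunk start playing while later chunks are still being generated:

```python
import re
from typing import Callable, Iterator

def split_sentences(text: str) -> list[str]:
    # Naive splitter: break on ., !, or ? followed by whitespace.
    # A production pipeline would also handle abbreviations, decimals, etc.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def stream_tts(text: str, synthesize: Callable[[str], bytes]) -> Iterator[bytes]:
    # Yield audio sentence-by-sentence so playback can begin before
    # the whole passage has been synthesized.
    for sentence in split_sentences(text):
        yield synthesize(sentence)
```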
posted an update 4 days ago
Wanted: Peak Data. I'm collecting audio data to train another TTS model:
+ AVM data: ChatGPT Advanced Voice Mode audio & text from source
+ Professional audio: Permissive (CC0, Apache, MIT, CC-BY)

This audio should *impress* most native speakers, not just barely pass their audio Turing tests. Professional-caliber means S- or A-tier, not your average bloke off the street. Traditional TTS may not make the cut. Absolutely no lo-fi microphone recordings like Common Voice.

The bar is much higher than last time, so there are no timelines yet and I expect it may take longer to collect such mythical data. Raising the bar means evicting quite a bit of old data, and voice/language availability may decrease. The theme is *quality* over quantity. I would rather have 1 hour of A/S-tier than 100 hours of mid data.

I have nothing to offer but the north star of a future Apache 2.0 TTS model, so prefer data that you *already have* and costs you *nothing extra* to send. Additionally, *all* the new data may be used to construct public, Apache 2.0 voicepacks, and if that arrangement doesn't work for you, no need to send any audio.

Last time I asked for horses; now I'm asking for unicorns. As of writing this post, I've currently got a few English & Chinese unicorns, but there is plenty of room in the stable. Find me over on Discord at rzvzn: https://discord.gg/QuGxSWBfQy
posted an update 6 days ago
I wrote an article about G2P: https://hf.co/blog/hexgrad/g2p

G2P is an underrated piece of small TTS models, like offensive linemen who do a bunch of work and get no credit.

Instead of relying on explicit G2P, larger speech models implicitly learn this task by eating many thousands of hours of audio data. They often use a 500M+ parameter LLM at the front to predict latent audio tokens over a learned codebook, then decode these tokens into audio.

Kokoro instead relies on G2P preprocessing, is 82M parameters, and thus needs less audio to learn. Because of this, we can cherry-pick high-fidelity audio for training data and deliver solid speech for those voices. In turn, this excellent audio quality and lack of background noise help explain why Kokoro is very competitive in single-voice TTS Arenas.
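To make the G2P step concrete, here is a rough sketch using the generic espeak-backed `phonemizer` package as a stand-in front end (Kokoro's own pipeline uses the misaki G2P engine; this is not its actual code). The point is simply that the acoustic model only ever sees phonemes, never raw text:

```python
# Illustrative stand-in for a G2P front end; not Kokoro's actual pipeline.
from phonemizer import phonemize

text = "Kokoro relies on G2P preprocessing."
phonemes = phonemize(text, language="en-us", backend="espeak", strip=True)
print(phonemes)  # a phoneme string the acoustic model consumes
```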
replied to Keltezaa's post 7 days ago

I am considering canceling my Pro subscription because I have been banned from posting an Article for many weeks now with no explanation or recourse.

Also, the ability to Post and the Posts feed are vandalized by those AI slop posts where the OP runs all 12 reactions on their own post and uses alt accounts to do the same. And I have no ability to block these circlejerking accounts.

reacted to Pendrokar's post with ❤️ 9 days ago
TTS: Added Kokoro v1, Parler Large, LlaSa 3B & MARS 6 TTS models to the Arena.
Pendrokar/TTS-Spaces-Arena

Also added MaskGCT, GPT-SoVITS & OuteTTS a month ago. The OuteTTS devs did say it is too early for it to be added to TTS Arenas.

Mars 5 does have a space with open weights models, but inference is way too slow (2 minutes+).
reacted to fdaudens's post with ❤️ 9 days ago
🎯 Kokoro TTS just hit v1.0! 🚀

Small but mighty: 82M parameters, runs locally, speaks multiple languages. The best part? It's Apache 2.0 licensed!
This could unlock so many possibilities ✨

Check it out: hexgrad/Kokoro-82M
replied to their post 10 days ago

I'm sure they try, but 14.8 trillion tokens is likely too many to prune everything considered "sensitive", and I am confident there is enough in there to theoretically put together a coherent answer to many topics without hallucinating. I could be wrong, but I think R1 refuses due to mitigations, not for lack of knowing, and abliteration claims to be able to bypass such mitigations.

The question is simple: Is abliteration an effective method to uncensor DeepSeek-R1? There is some info on abliteration as it relates to 70B models and smaller, but I have not heard of anyone abliterating a 670B MoE, and due to size/compute constraints I cannot do it myself. If you are aware of such experiments, feel free to drop links.

replied to their post 10 days ago

I do not think the usual concern (that an abliterated model will hallucinate) applies to DeepSeek. It was trained on 14.8T tokens, right? Unless they have unheard-of levels of data cleaning, it seems totally infeasible to sweep all mentions of Tiananmen Square, Winnie the Pooh, Taiwan, and so on from the dataset.

I suspect that the refusal is baked into the weights, but the knowledge has also gotta be in there somewhere. It is a matter of science to tinker with the weights to remove the refusal and unlock that knowledge. Perplexity may have done something like this already, but I am not sure if they used an enormous system prompt or they're RAG-ing it in, or both, or something else.

replied to their post 11 days ago

Can't I just hook the Inference widget up to the Kokoro-TTS Space that's already running?

posted an update 11 days ago
Technical question: Is Abliteration still an effective method for uncensoring LLMs? Generally, what are the most effective methods to uncensor LLMs?

An effective uncensoring method would ideally be low-cost, data-efficient, and above all, successfully uncensor an LLM with minimal benchmark regressions.

"Tiananmen Square", "Winnie-the-Pooh", etc and more broadly "China influence/censorship" are some common criticisms leveled at DeepSeek.

I am vaguely aware of "Abliteration", a technique coined by @failspy (apologies if that attribution is incorrect) and originally described in a mid-2024 paper titled "Refusal in Language Models Is Mediated by a Single Direction" https://arxiv.org/abs/2406.11717

Abliteration is proposed as a relatively cheap and effective way to bypass censorship in models. However, it is not without criticism: https://www.reddit.com/r/LocalLLaMA/comments/1f07b4b/abliteration_fails_to_uncensor_models_while_it/

Curious to hear people's takes on Abliteration or other uncensoring methods, especially as it relates to DeepSeek.
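For anyone unfamiliar with the mechanics, the paper's core operation is roughly: estimate a "refusal direction" as the difference of mean activations on harmful vs. harmless prompts, then project that direction out of the weights that write into the residual stream. A toy PyTorch sketch of the projection (illustrative only; it says nothing about whether this scales to a 670B MoE):

```python
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    # harmful / harmless: (n_prompts, d_model) hidden states collected at
    # a chosen layer and token position. Difference of means, unit-normalized.
    d = harmful.mean(dim=0) - harmless.mean(dim=0)
    return d / d.norm()

def ablate(weight: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # weight: (d_model, d_in) matrix writing into the residual stream.
    # W' = (I - d d^T) W removes the component of every output vector
    # that points along the refusal direction d.
    return weight - torch.outer(d, d) @ weight
```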
replied to their post 11 days ago
replied to their post 12 days ago

1.0 and 0.23 are different models; 1.0 continued training from 0.23.

posted an update 13 days ago
posted an update 18 days ago
IMHO, being able & willing to defeat CAPTCHA, hCaptcha, or any other reasoning puzzle is a must-have for any Web-Browsing / Computer-Using Agent (WB/CUA).

I realize it subverts the purpose of CAPTCHA, but I do not think you can claim to be building AGI/agents without smoothly passing humanity checks. It would be like getting in a self-driving car that requires human intervention over speed bumps. Claiming AGI or even "somewhat powerful AI" seems hollow if you are halted by a mere CAPTCHA.

I imagine OpenAI's Operator is *able* but *not willing* to defeat CAPTCHA. Like their non-profit status, I expect that policy to evolve over time, and if not, rival agent-builders will attack that opening to offer a better product.
replied to their post about 1 month ago

@to-be There are more details at https://hf.co/hexgrad/Kokoro-82M/discussions/21 and my Discord DMs are open if you have more questions, but essentially I am looking for segmented text-audio pairs: likely .txt and .wav pairs, with each .txt being ~500 characters or less (it needs to fit inside the 512-token context hard limit) and the .wav matching the text.
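A small sketch of what that layout can look like on disk, with a character count as a rough proxy for the token limit (the file naming and the 500-character threshold here are illustrative, not requirements):

```python
from pathlib import Path

MAX_CHARS = 500  # rough proxy for the 512-token context hard limit

def check_pairs(data_dir: str) -> None:
    # Expects sibling pairs like clip_0001.txt / clip_0001.wav (naming is arbitrary).
    for txt in sorted(Path(data_dir).glob("*.txt")):
        wav = txt.with_suffix(".wav")
        text = txt.read_text(encoding="utf-8").strip()
        if not wav.exists():
            print(f"{txt.name}: no matching .wav")
        elif len(text) > MAX_CHARS:
            print(f"{txt.name}: {len(text)} chars, consider splitting the segment")
```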

replied to their post about 1 month ago

It's simple: what you put in is what you get out. 😄 German support in the future depends mostly on how much German data (synthetic audio + text labels) is contributed.

replied to their post about 1 month ago

TLDR: 🚨 Trade Offer 🚨
I receive: Synthetic Audio w/ Text Labels
You receive: Trained Voicepacks for an 82M Apache TTS model
Join https://discord.gg/QuGxSWBfQy to discuss

posted an update about 1 month ago
📣 Looking for labeled, high-quality synthetic audio/TTS data 📣 Have you been or are you currently calling API endpoints from OpenAI, ElevenLabs, etc.? Do you have labeled audio data sitting around gathering dust? Let's talk! Join https://discord.gg/QuGxSWBfQy or comment down below.

If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.

What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc
Then YOU can contribute to the training mix and get useful artifacts in return. ❤️

More details at hexgrad/Kokoro-82M#21
replied to their post about 1 month ago
posted an update about 1 month ago
Happy New Year! 🌃 af_sky landed in Kokoro, along with an article: hexgrad/Kokoro-82M