
Pedro Cuenca

pcuenq

AI & ML interests

None yet

Recent Activity

upvoted a collection about 15 hours ago
Hibiki fr-en
reacted to merve's post with šŸ”„ and šŸš€ about 15 hours ago
Interesting releases in open AI this week, let's recap šŸ¤  https://huggingface.co/collections/merve/feb-7-releases-67a5f7d7f172d8bfe0dd66f4

šŸ¤– Robotics
> Pi0, the first open-source foundation vision-language-action model, was released in LeRobot (Apache 2.0)

šŸ’¬ LLMs
> Groundbreaking: s1 is a simpler approach to test-time scaling. The release comes with the small s1K dataset of 1k question-reasoning-trace pairs (from Gemini-Thinking Exp); they fine-tune Qwen2.5-32B-Instruct to get s1-32B, outperforming o1-preview on math šŸ¤Æ s1-32B and s1K are out!
> Adyen released DABstep, a new benchmark, along with its leaderboard demo for agents doing data analysis
> Krutrim released Krutrim-2 instruct, a new 12B model based on NeMo12B trained and aligned on Indic languages, a new multilingual sentence embedding model (based on STSB-XLM-R), and a translation model for Indic languages

šŸ‘€ Multimodal
> PKU released Align-DS-V, a model aligned across all modalities (image-text-audio) using their new technique called LLF, along with the Align Anything dataset
> OLA-7B is a new any-to-any model by Tencent that can take text, image, video, and audio data with a context window of 32k tokens, and output text and speech in English and Chinese
> Krutrim released Chitrarth, a new vision language model for Indic languages and English

šŸ–¼ļø Vision
> BiRefNet_HR is a new higher-resolution BiRefNet for background removal

šŸ—£ļø Audio
> kyutai released Hibiki, a real-time speech-to-speech translation model šŸ¤Æ it's available for French-English translation
> Krutrim released Dhwani, a new STT model for Indic languages
> They also released a new dataset for STT-TTS

šŸ–¼ļø Image Generation
> Lumina released Lumina-Image-2.0, a 2B-parameter flow-based DiT for text-to-image generation
> Tencent released Hunyuan3D-2, a 3D asset generation model based on DiT and Hunyuan3D-Paint
> boreal-hl-v1 is a new boring photorealistic image generation LoRA based on Hunyuan

Organizations

Hugging Face, Google, Sentence Transformers, šŸ§ØDiffusers, PyTorch Image Models, Flax Community, Hugging Face Internal Testing Organization, DALLE mini, ControlNet 1.1 Preview, I Hackathon Somos NLP: PLN en EspaƱol, SomosNLP, Huggingface.js, HuggingFaceM4, Apple, (De)fusing, Open-Source AI Meetup, Huggingface Projects, CompVis, CompVis Community, Diffusers Pipelines Library for Stable Diffusion, Core ML Projects, LocalCodeLLMs, Code Llama, UniverseTBD, Hands-On Generative AI with Transformers and Diffusion Models, Diffusers Demo at ICCV 2023, Hugging Face TB Research, Core ML Files, huggingPartyParis, adept-hf-collab, Enterprise Explorers, Latent Consistency, TTS Eval (OLD), ggml.ai, kotol, LocalLLaMA, gg-hf, Mistral AI EAP, Llzama, MLX Community, Hugging Face Assignments, IBM Granite, On-device Squad, TTS AGI, Social Post Explorers, Apple CoreNet Models, hsramall, diffusers-internal-dev, gg-tt, Hugging Face Discord Community, LLHF, SLLHF, lbhf, Hugging Quants, Meta Llama, kmhf, nltpt, s0409, Mt Metrics, nltpt-q, dummyosan, Test Org, metavision, mv, Bert ... but new, qrias, open/ acc, wut?, DDUF, None yet, Hugging Face Agents Course, TFLite Community, s0225

Posts 1

OpenELM in Core ML

Apple recently released a set of efficient LLMs ranging from 270M to 3B parameters. According to benchmarks, their quality is similar to OLMo models of comparable size, but they required about half the pre-training tokens because they use layer-wise scaling, where the number of attention heads increases in deeper layers.
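To illustrate the idea (this is not the exact OpenELM formula; the real per-layer values come from the model config), layer-wise scaling can be sketched like this:

```python
# Illustrative sketch of layer-wise scaling, NOT the exact OpenELM recipe:
# the number of attention heads grows roughly linearly with depth, so early
# layers are cheaper and deeper layers get more capacity.
def heads_per_layer(num_layers: int, min_heads: int, max_heads: int) -> list[int]:
    heads = []
    for i in range(num_layers):
        frac = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads.append(round(min_heads + frac * (max_heads - min_heads)))
    return heads

print(heads_per_layer(num_layers=16, min_heads=12, max_heads=20))
```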

I converted these models to Core ML, for use on Apple Silicon, using this script: https://gist.github.com/pcuenca/23cd08443460bc90854e2a6f0f575084. The converted models were uploaded to this community on the Hub for anyone who wants to integrate them in their apps: corenet-community/openelm-core-ml-6630c6b19268a5d878cfd194

The conversion was done with the following parameters (a sketch of the corresponding conversion call follows the list):
- Precision: float32.
- Sequence length: fixed to 128.
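
As a rough reference, here is a minimal sketch of what such a conversion looks like with coremltools. The real logic lives in the gist linked above; the model name, wrapper, and input handling here are illustrative assumptions, not the exact script:

```python
# Minimal sketch of the conversion, using the parameters listed above.
# Model name and wrapper are illustrative; see the gist for the real script.
import numpy as np
import torch
import coremltools as ct
from transformers import AutoModelForCausalLM


class Wrapper(torch.nn.Module):
    """Return plain logits so torch.jit.trace sees a tensor output."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids):
        return self.model(input_ids).logits


model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True, torch_dtype=torch.float32
)
seq_len = 128  # fixed sequence length
example_input = torch.randint(0, 32000, (1, seq_len))  # Llama 2 vocab size
traced = torch.jit.trace(Wrapper(model).eval(), example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=(1, seq_len), dtype=np.int32)],
    compute_precision=ct.precision.FLOAT32,  # float32, per the list above
    minimum_deployment_target=ct.target.macOS14,
)
mlmodel.save("OpenELM-270M.mlpackage")
```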

With swift-transformers (https://github.com/huggingface/swift-transformers), I'm getting about 56 tok/s with the 270M model on my M1 Max, and about 6.5 tok/s with the largest 3B model. These speeds could be improved by converting to float16. However, there's some precision loss somewhere and generation doesn't work in float16 mode yet. I'm looking into this and will keep you posted! Or take a look at this issue if you'd like to help: https://github.com/huggingface/swift-transformers/issues/95
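If you want to experiment with float16 yourself (bearing the precision issue in mind), the change is a single flag, continuing from the conversion sketch above:

```python
# Same conversion as the sketch above, but requesting float16 compute
# precision. Faster on GPU/ANE, but as noted, generation currently breaks
# somewhere, so treat this as an experiment rather than a recommendation.
mlmodel_fp16 = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=(1, seq_len), dtype=np.int32)],
    compute_precision=ct.precision.FLOAT16,
    minimum_deployment_target=ct.target.macOS14,
)
```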

I'm also looking at optimizing inference using an experimental kv cache in swift-transformers. It's a bit tricky because the layers have varying numbers of attention heads, but I'm curious to see how much this feature can speed up generation for this model family :)
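The wrinkle is mostly bookkeeping: each layer's cache tensors have their own shape. A toy sketch of the data structure (in Python for brevity; the head counts are illustrative, and the actual swift-transformers implementation differs):

```python
# Toy sketch of a per-layer KV cache when head counts vary by layer.
# Shapes and heads_per_layer values are illustrative assumptions.
import torch

head_dim, max_seq_len = 64, 128
heads_per_layer = [12, 12, 14, 16, 16, 20]  # grows with depth, OpenELM-style

# One (key, value) buffer pair per layer, sized for that layer's head count.
kv_cache = [
    (
        torch.zeros(1, n_heads, max_seq_len, head_dim),  # keys
        torch.zeros(1, n_heads, max_seq_len, head_dim),  # values
    )
    for n_heads in heads_per_layer
]

def append_kv(layer: int, pos: int, k: torch.Tensor, v: torch.Tensor) -> None:
    """Write the key/value for one new token at position `pos` in `layer`."""
    keys, values = kv_cache[layer]
    keys[:, :, pos] = k
    values[:, :, pos] = v
```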

Regarding the instruct fine-tuned models, I don't know which chat template was used. The models use the Llama 2 tokenizer, but neither the Llama 2 chat template nor the default Alignment Handbook template used for training is recognized. Any ideas on this are welcome!
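In the meantime, one way to experiment is to apply candidate templates with transformers and compare generations. A sketch (the Llama 2 chat template here is just one guess, not a confirmed answer):

```python
# Experiment with candidate chat templates; which one the instruct models
# were trained with is unknown, so this is just one guess to try.
from transformers import AutoTokenizer

# The models use the Llama 2 tokenizer, so borrow it (gated repo) along
# with its chat template.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

messages = [{"role": "user", "content": "What is Core ML?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # inspect the formatted prompt, then compare generations
```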

Articles 41


Explore, Curate and Vector Search Any Hugging Face Dataset with Nomic Atlas