Simeon Emanuilov

s-emanuilov

AI & ML interests

Software Engineer & Ph.D. candidate | Specializing in ML/DL system development & applying AI to solve real-world business problems.


Organizations

AI Lab - Sofia University, Scaleflex, UnfoldAI

s-emanuilov's activity

replied to their post about 3 hours ago

Try reducing gpu_memory_utilization to a lower value.
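
gpu_memory_utilization is most commonly the vLLM engine argument that caps how much of the GPU's memory the engine pre-allocates. A minimal sketch of lowering it, assuming a plain vLLM setup; the model name and the 0.7 value are placeholders, not taken from the original thread:

```python
# Illustration only: lowering gpu_memory_utilization in vLLM to leave
# headroom on the GPU and avoid out-of-memory errors at load time.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    gpu_memory_utilization=0.7,  # default is 0.9; lower it if the GPU runs out of memory
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Hello, how are you?"], params)
print(outputs[0].outputs[0].text)
```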

replied to their post 2 days ago

Thank you.

I'm also a big fan of Qwen models. However, in this case I don't think they are appropriate, because I'm not entirely confident in their capabilities in multilingual contexts. That's why I chose Llama.

Overall, I agree that the Qwen series is excellent for most tasks.

posted an update 2 days ago
Tutorial 💥 Training a non-English reasoning model with GRPO and Unsloth

I wanted to share my experiment with training reasoning models in languages other than English/Chinese.

Using Llama 3.1 8B as the base model, the GRPO trainer from trl, and Unsloth optimizations, I got a working prototype in Bulgarian after ~5 hours on an L40S GPU. The approach should work for any language where the base model has some pre-training coverage.

Full code and tutorial here: https://unfoldai.com/reasoning-in-a-non-english-language/

The model itself: s-emanuilov/LLMBG-Llama-3.1-8B-BG-Reasoning-v0.1

I hope this helps anyone looking to build reasoning models in their language.
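
For a rough idea of how the pieces fit together, here is a minimal sketch of GRPO training with Unsloth and trl. The dataset, reward function, and hyperparameters are placeholders for illustration, not the values from the tutorial above:

```python
# Sketch only: GRPO fine-tuning with Unsloth + trl.
# Dataset, reward function, and hyperparameters are illustrative placeholders.
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import load_dataset

# Load the base model with Unsloth optimizations and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset, mapped to the conversational "prompt" format GRPO expects.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(
    lambda x: {"prompt": [{"role": "user", "content": x["question"]}]}
)

# Toy reward: favor completions that contain an explicit reasoning block.
def format_reward(completions, **kwargs):
    return [1.0 if "<reasoning>" in c[0]["content"] else 0.0 for c in completions]

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[format_reward],
    args=GRPOConfig(
        output_dir="grpo-reasoning",
        num_generations=8,              # completions sampled per prompt
        max_prompt_length=512,
        max_completion_length=512,
        per_device_train_batch_size=8,  # must be divisible by num_generations
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        max_steps=500,
    ),
    train_dataset=dataset,
)
trainer.train()
```

In practice the reward side usually combines several functions (format checks, language checks, answer correctness), and the reply above suggests the tutorial also uses vLLM-backed generation, which is where gpu_memory_utilization comes in.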
reacted to m-ric's post with 🔥 6 days ago
Introducing open Deep-Research by Hugging Face! 💥

OpenAI's latest agentic app Deep Research seems really good... But it's closed, as usual.

โฑ๏ธ So with a team of cracked colleagues, we set ourselves a 24hours deadline to replicate and open-source Deep Research! โฑ๏ธ

โžก๏ธ We built open-Deep-Research, an entirely open agent that can: navigate the web autonomously, scroll and search through pages, download and manipulate files, run calculation on data...

We aimed for the best performance: are the agent's answers really rigorous?

On the GAIA benchmark, OpenAI's Deep Research scored 67% accuracy on the validation set.
➡️ open Deep Research is at 55% (powered by o1), making it:
- the best pass@1 solution submitted
- the best open solution 💪💪

And it's only getting started! Please jump in, drop PRs, and let's bring it to the top!

Read the blog post 👉 https://huggingface.co/blog/open-deep-research
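
open-Deep-Research builds on Hugging Face's smolagents framework; below is a minimal sketch of a web-search agent in that general style, not the open-Deep-Research code itself (the default model and the query are placeholders):

```python
# Minimal smolagents sketch of a web-search agent, in the spirit of
# open-Deep-Research (not its actual code; the query is a placeholder).
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # lets the agent search the web
    model=HfApiModel(),              # defaults to a hosted model on the HF Inference API
    add_base_tools=True,             # adds built-in tools such as a Python interpreter
)

answer = agent.run(
    "Summarize the reported GAIA validation accuracy of open Deep Research."
)
print(answer)
```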
upvoted an article 6 days ago

Open-source DeepResearch – Freeing our search agents

upvoted an article 9 days ago

Finally, a Replacement for BERT: Introducing ModernBERT
