EdinburghNLP - Natural Language Processing Group at the University of Edinburgh


AI & ML interests

We do research in all core areas of natural language processing, including morphology, parsing, semantics, discourse, language generation, and machine translation. EdinburghNLP also has a strong track record of work at the interface of NLP with other areas, including speech technology, machine learning, computer vision, cognitive modeling, social media, information retrieval, robotics, bioinformatics, and educational technology.

Recent Activity


albertvillanova posted an update 7 days ago
🚀 Introducing @huggingface Open Deep-Research 💥

In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data

55% on the GAIA validation set! Help us improve it! 💡
https://huggingface.co/blog/open-deep-research
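
For readers who want to try something similar in code, here is a minimal sketch of a web-search agent built with the smolagents library, the toolkit the linked blog post's agent is based on. The tool, model, and prompt below are illustrative assumptions, not the actual open Deep-Research implementation:

```python
# Minimal sketch, not the open Deep-Research code itself.
# Assumes: pip install smolagents duckduckgo-search
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A small research-style agent: it can search the web and write/run Python
# code to process what it finds.
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # web search tool shipped with smolagents
    model=HfApiModel(),              # hosted model via the HF Inference API
)

# Example task (illustrative): ask the agent to research a question for you.
answer = agent.run("Summarize what the GAIA benchmark measures for LLM agents.")
print(answer)
```

The agent described in the blog post layers more tools (page navigation, file download and inspection) on top of this basic pattern.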
albertvillanova posted an update about 1 month ago
albertvillanova posted an update 3 months ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now you can compare models not only by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
albertvillanova posted an update 3 months ago
🚀 New feature in the Comparator of the 🤗 Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

πŸ› οΈ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
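
If you prefer to script steps 1-2 above, here is a minimal sketch (not the Comparator's own code) that looks up a fine-tune's declared base model from its Hub card metadata with huggingface_hub; the model ID is just an example:

```python
# Minimal sketch, assuming the model card declares `base_model` in its metadata.
from huggingface_hub import ModelCard

finetune_id = "HuggingFaceH4/zephyr-7b-beta"  # example fine-tuned model
card = ModelCard.load(finetune_id)

# `base_model` may be a string, a list, or missing, depending on the card.
base = card.data.get("base_model")
print(f"{finetune_id} declares base model(s): {base}")
```

This gives you the base-vs-derivative pair to load side by side in the Comparator.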
albertvillanova posted an update 4 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs, all in one place. Ready to level up your model comparison game?