Update README.md

This document summarizes the evaluation results for various language models, based on **5-shot** prompting and **First Token Accuracy**. The evaluation was conducted across four configurations:

| **Model**                         | **en\_withfigures** | **en\_withoutfigures** | **ms\_withfigures** | **ms\_withoutfigures** |
| --------------------------------- | ------------------- | ---------------------- | ------------------- | ---------------------- |
| **gemini-2.0-flash-exp**          | **63.70%**          | <ins>75.16%</ins>      | **63.36%**          | <ins>75.47%</ins>      |
| **gemini-1.5-flash**              | 49.66%              | 67.39%                 | 50.00%              | 64.28%                 |
| **Qwen/Qwen2-VL-72B-Instruct**    | <ins>58.22%</ins>   | 69.25%                 | <ins>57.53%</ins>   | 63.66%                 |
| **gpt-4o**                        | 47.95%              | 66.15%                 | 50.00%              | 68.01%                 |
| **gpt-4o-mini**                   | 41.10%              | 55.90%                 | 38.36%              | 52.80%                 |
| **pixtral-large-2411**            | 42.81%              | 64.29%                 | 35.27%              | 60.87%                 |
| **pixtral-12b-2409**              | 24.66%              | 48.45%                 | 24.66%              | 39.13%                 |
| **DeepSeek-V3**                   | None                | **79.19%**             | None                | **76.40%**             |
| **Qwen2.5-72B-Instruct**          | None                | 74.53%                 | None                | 72.98%                 |
| **Meta-Llama-3.3-70B-Instruct**   | None                | 67.08%                 | None                | 58.07%                 |
| **Llama-3.2-90B-Vision-Instruct** | None                | 65.22%                 | None                | 58.07%                 |
| **sail/Sailor2-20B-Chat**         | None                | 66.46%                 | None                | 61.68%                 |
| **mallam-small**                  | None                | 61.49%                 | None                | 55.28%                 |
| **mistral-large-latest**          | None                | 60.56%                 | None                | 53.42%                 |
| **google/gemma-2-27b-it**         | None                | 58.07%                 | None                | 57.76%                 |
| **SeaLLMs-v3-7B-Chat**            | None                | 50.93%                 | None                | 45.96%                 |

---

The repository includes an `eval.py` script that can be used to run the same evaluation on any other LLM.
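
For orientation, the snippet below is a minimal sketch of loading the dataset that `eval.py` evaluates against. The dataset id and the `data_ms` config appear earlier in this README; the split handling here is an assumption, and the actual prompting and scoring logic lives in `eval.py`.

```python
from datasets import load_dataset

# Dataset id and the "data_ms" config name come from this README;
# the available splits and columns may differ, so inspect them first.
dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")
print(dataset)  # shows the splits and column names

first_split = next(iter(dataset))  # avoid hard-coding a split name
for example in dataset[first_split].select(range(3)):
    print(example)  # look at a few questions before wiring up a model
```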

- The evaluation results are based on the specific dataset and methodology employed.
- The "First Token Accuracy" metric measures whether the first token of the model's response is correct (see the sketch below).
- Further analysis might be needed to determine the models' suitability for specific tasks.
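
As a rough illustration of a first-token metric (not necessarily how `eval.py` computes it), the sketch below counts a reply as correct when the first token of the model's output matches the expected option label. The helper names and the option-letter answer format are assumptions.

```python
def first_token(text: str) -> str:
    """Return the first whitespace-delimited token of a model reply."""
    stripped = text.strip()
    return stripped.split()[0] if stripped else ""


def first_token_accuracy(predictions: list[str], references: list[str]) -> float:
    """Share of replies whose first token equals the expected answer label.

    Assumes answers are option labels such as "A"/"B"/"C"/"D"; the real
    eval.py may tokenize and normalize responses differently.
    """
    if not predictions:
        return 0.0
    hits = sum(
        first_token(pred).strip(".):").upper() == ref.strip().upper()
        for pred, ref in zip(predictions, references)
    )
    return hits / len(predictions)


# Two of the three first tokens ("A." and "B)") match their references,
# so this prints 0.666...
print(first_token_accuracy(["A. Because ...", "C", "B) ..."], ["A", "D", "B"]))
```
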
### Attribution for Evaluation Code

The `eval.py` script is based on work from the MMLU-Pro repository:

- Repository: [TIGER-AI-Lab/MMLU-Pro](https://github.com/TIGER-AI-Lab/MMLU-Pro)
- License: Apache License 2.0 (included in the `NOTICE` file)

---

# **Contributors**

- [**Gele**](https://huggingface.co/Geleliong)
- [**Ken Boon**](https://huggingface.co/caibcai)
- [**Wei Wen**](https://huggingface.co/WeiWen21)