This document summarizes the evaluation results for various language models, based on **5-shot** prompting and **First Token Accuracy**. The evaluation was conducted across four configurations:

- **en_withfigures**: English prompts including figures.
- **en_withoutfigures**: English prompts excluding figures.
- **ms_withfigures**: Malay prompts including figures.
- **ms_withoutfigures**: Malay prompts excluding figures.
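
The dataset behind these configurations lives on the Hugging Face Hub and can be loaded with the `datasets` library, as elsewhere in this README. A minimal sketch: the `data_ms` config name is taken from this README, while `data_en`, the `train` split, and the `figures` column used for filtering are assumptions, not confirmed names:

```python
from datasets import load_dataset

# Malay split, using the config name shown earlier in this README.
ds_ms = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")
# English split -- "data_en" is an assumed config name.
ds_en = load_dataset("Supa-AI/STEM-en-ms", name="data_en")

# Hypothetical with/without-figure split; the actual column name may
# differ, so inspect ds_en["train"].features before relying on this.
train = ds_en["train"]
with_figures = train.filter(lambda ex: ex.get("figures"))
without_figures = train.filter(lambda ex: not ex.get("figures"))
```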

| **Model**                         | **en\_withfigures** | **en\_withoutfigures** | **ms\_withfigures** | **ms\_withoutfigures** |
| --------------------------------- | ------------------- | ---------------------- | ------------------- | ---------------------- |
| **gemini-2.0-flash-exp**          | 63.70%              | 75.16%                 | 63.36%              | 75.47%                 |
| **gemini-1.5-flash**              | 49.66%              | 67.39%                 | 50.00%              | 64.28%                 |
| **Qwen/Qwen2-VL-72B-Instruct**    | 58.22%              | 69.25%                 | 57.53%              | 63.66%                 |
| **gpt-4o**                        | 47.95%              | 66.15%                 | 50.00%              | 68.01%                 |
| **gpt-4o-mini**                   | 41.10%              | 55.90%                 | 38.36%              | 52.80%                 |
| **pixtral-large-2411**            | 42.81%              | 64.29%                 | 35.27%              | 60.87%                 |
| **pixtral-12b-2409**              | 24.66%              | 48.45%                 | 24.66%              | 39.13%                 |
| **DeepSeek-V3**                   | None                | 79.19%                 | None                | 76.40%                 |
| **Qwen2.5-72B-Instruct**          | None                | 74.53%                 | None                | 72.98%                 |
| **Meta-Llama-3.3-70B-Instruct**   | None                | 67.08%                 | None                | 58.07%                 |
| **Llama-3.2-90B-Vision-Instruct** | None                | 65.22%                 | None                | 58.07%                 |
| **sail/Sailor2-20B-Chat**         | None                | 66.46%                 | None                | 61.68%                 |
| **mallam-small**                  | None                | 61.49%                 | None                | 55.28%                 |
| **mistral-large-latest**          | None                | 60.56%                 | None                | 53.42%                 |
| **google/gemma-2-27b-it**         | None                | 58.07%                 | None                | 57.76%                 |
| **SeaLLMs-v3-7B-Chat**            | None                | 50.93%                 | None                | 45.96%                 |

*None* indicates the model was not evaluated on that configuration.

---

The repository includes an `eval.py` script that can be used to run the same evaluation against any other LLM.
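
The script's exact command-line interface isn't reproduced here, so the sketch below only illustrates the 5-shot setup conceptually: five solved examples are placed in front of each test question before the model is queried. The field names (`question`, `options`, `answer`) are assumptions, not necessarily the dataset's actual column names:

```python
def build_5_shot_prompt(few_shot_examples, test_item):
    """Assemble a 5-shot prompt: five solved examples, then the test question.

    Field names ("question", "options", "answer") are hypothetical and may
    differ from the dataset's actual columns.
    """
    blocks = []
    for ex in few_shot_examples[:5]:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Options: {ex['options']}\n"
            f"Answer: {ex['answer']}"
        )
    # The test question ends at "Answer:" so the model's first generated
    # token should be the option letter -- which is what First Token
    # Accuracy scores.
    blocks.append(
        f"Question: {test_item['question']}\n"
        f"Options: {test_item['options']}\n"
        "Answer:"
    )
    return "\n\n".join(blocks)
```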

- The evaluation results are based on the specific dataset and methodology employed.
- The "First Token Accuracy" metric emphasizes the accuracy of predicting the initial token correctly (see the sketch below).
- Further analysis might be needed to determine the models' suitability for specific tasks.
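
To make the "First Token Accuracy" bullet above concrete, here is one way the metric could be computed; this is an illustrative sketch, not the actual scoring code in `eval.py`:

```python
def first_token_accuracy(predictions, references):
    """Score a prediction as correct when its first token matches the reference.

    For multiple-choice questions this typically reduces to checking that the
    model's first emitted token is the correct option letter (e.g. "B").
    """
    correct = 0
    for pred, ref in zip(predictions, references):
        # Take the first whitespace-delimited token and drop trailing
        # punctuation such as "B." or "B)".
        pred_first = pred.strip().split()[0].strip(".:)") if pred.strip() else ""
        if pred_first.upper() == ref.strip().upper():
            correct += 1
    return correct / len(references)


# Example: 3 of 4 first tokens match the gold option letters -> 75%.
preds = ["B. Photosynthesis", "C", "A)", "D is correct"]
golds = ["B", "C", "B", "D"]
print(f"First Token Accuracy: {first_token_accuracy(preds, golds):.2%}")
```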