Supa-AI committed
Commit 6955679 · verified · 1 Parent(s): 705e32a

Update README.md

Files changed (1):
  1. README.md +18 -27
README.md CHANGED
@@ -192,33 +192,25 @@ dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")
 
 This document summarizes the evaluation results for various language models based on **5-shot** and **First Token Accuracy**. The evaluation was conducted across four configurations:
 
- - **en_withfigures**: English prompts including figures.
- - **en_withoutfigures**: English prompts excluding figures.
- - **ms_withfigures**: Malay prompts including figures.
- - **ms_withoutfigures**: Malay prompts excluding figures.
-
- ---
-
- ## Results Table
-
- | **Model** | **en_withfigures** | **en_withoutfigures** | **ms_withfigures** | **ms_withoutfigures** |
- |---------------------------------|--------------------|-----------------------|--------------------|-----------------------|
- | **gemini-2.0-flash-exp** | __63.70%__ | **75.16%** | __63.36%__ | **75.47%** |
- | **gemini-1.5-flash** | __49.66%__ | __67.39%__ | __50.00%__ | __64.28%__ |
- | **Qwen/Qwen2-VL-72B-Instruct** | __58.22%__ | __69.25%__ | __57.53%__ | __63.66%__ |
- | **gpt-4o** | __47.95%__ | __66.15%__ | __50.00%__ | __68.01%__ |
- | **gpt-4o-mini** | __41.10%__ | __55.90%__ | __38.36%__ | __52.80%__ |
- | **pixtral-large-2411** | __42.81%__ | __64.29%__ | __35.27%__ | __60.87%__ |
- | **pixtral-12b-2409** | __24.66%__ | __48.45%__ | __24.66%__ | __39.13%__ |
- | **DeepSeek-V3** | None | **79.19%** | None | **76.40%** |
- | **Qwen2.5-72B-Instruct** | None | __74.53%__ | None | __72.98%__ |
- | **Meta-Llama-3.3-70B-Instruct** | None | __67.08%__ | None | __58.07%__ |
- | **Llama-3.2-90B-Vision-Instruct** | None | __65.22%__ | None | __58.07%__ |
- | **sail/Sailor2-20B-Chat** | None | __66.46%__ | None | __61.68%__ |
- | **mallam-small** | None | __61.49%__ | None | __55.28%__ |
- | **mistral-large-latest** | None | __60.56%__ | None | __53.42%__ |
- | **google/gemma-2-27b-it** | None | __58.07%__ | None | __57.76%__ |
- | **SeaLLMs-v3-7B-Chat** | None | __50.93%__ | None | __45.96%__ |
+ | **Model** | **en\_withfigures** | **en\_withoutfigures** | **ms\_withfigures** | **ms\_withoutfigures** |
+ | --------------------------------- | ------------------- | ---------------------- | ------------------- | ---------------------- |
+ | **gemini-2.0-flash-exp** | **63.70%** | **75.16%** | **63.36%** | **75.47%** |
+ | **gemini-1.5-flash** | **49.66%** | **67.39%** | **50.00%** | **64.28%** |
+ | **Qwen/Qwen2-VL-72B-Instruct** | **58.22%** | **69.25%** | **57.53%** | **63.66%** |
+ | **gpt-4o** | **47.95%** | **66.15%** | **50.00%** | **68.01%** |
+ | **gpt-4o-mini** | **41.10%** | **55.90%** | **38.36%** | **52.80%** |
+ | **pixtral-large-2411** | **42.81%** | **64.29%** | **35.27%** | **60.87%** |
+ | **pixtral-12b-2409** | **24.66%** | **48.45%** | **24.66%** | **39.13%** |
+ | **DeepSeek-V3** | None | **79.19%** | None | **76.40%** |
+ | **Qwen2.5-72B-Instruct** | None | **74.53%** | None | **72.98%** |
+ | **Meta-Llama-3.3-70B-Instruct** | None | **67.08%** | None | **58.07%** |
+ | **Llama-3.2-90B-Vision-Instruct** | None | **65.22%** | None | **58.07%** |
+ | **sail/Sailor2-20B-Chat** | None | **66.46%** | None | **61.68%** |
+ | **mallam-small** | None | **61.49%** | None | **55.28%** |
+ | **mistral-large-latest** | None | **60.56%** | None | **53.42%** |
+ | **google/gemma-2-27b-it** | None | **58.07%** | None | **57.76%** |
+ | **SeaLLMs-v3-7B-Chat** | None | **50.93%** | None | **45.96%** |
 
 ---
 
@@ -226,7 +218,6 @@ This document summarizes the evaluation results for various language models base
 
 In the repository, there is an `eval.py` script that can be used to run the evaluation for any other LLM.
 
-
 - The evaluation results are based on the specific dataset and methodology employed.
 - The "First Token Accuracy" metric emphasizes the accuracy of predicting the initial token correctly.
 - Further analysis might be needed to determine the models' suitability for specific tasks.
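
For readers unfamiliar with the metric, below is a minimal sketch of how a first-token accuracy score can be computed for multiple-choice answers. It is an illustration only, not the repository's `eval.py`; the option-letter format and the normalization rule are assumptions.

```python
def first_token_accuracy(predictions, references):
    """Count a prediction as correct when its first token matches the
    first token of the reference answer (e.g. an option letter A-D).

    Illustrative only -- the repository's eval.py may tokenize and
    normalize answers differently.
    """
    if not references:
        return 0.0
    correct = 0
    for pred, ref in zip(predictions, references):
        pred_first = pred.strip().split()[0].strip(".):").upper() if pred.strip() else ""
        ref_first = ref.strip().split()[0].strip(".):").upper()
        if pred_first == ref_first:
            correct += 1
    return correct / len(references)


# Example: three of the four first tokens match the reference letters.
preds = ["A. Because the force is balanced", "B", "C) 42", "D"]
refs = ["A", "B", "D", "D"]
print(f"{first_token_accuracy(preds, refs):.2%}")  # 75.00%
```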
 
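
The dataset referenced in the diff context above can be loaded directly from the Hub. A minimal example follows; the Malay config name `data_ms` comes from the README itself, while the English config name `data_en` is assumed by analogy, so check the dataset card for the exact names.

```python
from datasets import load_dataset

# Malay config, exactly as shown in the README snippet above.
dataset_ms = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")

# English config: "data_en" is an assumed name -- verify it on the dataset card.
dataset_en = load_dataset("Supa-AI/STEM-en-ms", name="data_en")

print(dataset_ms)
print(dataset_en)
```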