Supa-AI committed on
Commit 705e32a · verified · 1 Parent(s): 405eff9

Updated LLM eval leaderboard in README.md

Files changed (1)
  1. README.md +56 -4
README.md CHANGED
@@ -52,6 +52,7 @@ dataset_info:
  num_bytes: 34663548
  num_examples: 614
  download_size: 69119656
+
  dataset_size: 69327096.0
  tags:
  - mathematics
 
@@ -74,9 +75,8 @@ language:
  - en
  - ms
  ---
- **STEM_Dataset_eng_ms**

- **A Bilingual Dataset for Evaluating Reasoning Skills in STEM Subjects**
+ # **A Bilingual Dataset for Evaluating Reasoning Skills in STEM Subjects**

  This dataset provides a comprehensive evaluation set for tasks assessing reasoning skills in Science, Technology, Engineering, and Mathematics (STEM) subjects. It features questions in both English and Malay, catering to a diverse audience.

 
@@ -103,6 +103,8 @@ The dataset is comprised of two configurations: `data_en` (English) and `data_ms
  * **Options:** Possible answer choices for the question, with keys (e.g., "A", "B", "C", "D") and corresponding text.
  * **Answers:** Correct answer to the question, represented by the key of the correct option (e.g., "C").

+ ---
+
  ## Data Instance Example

  ```json
 
@@ -161,11 +163,15 @@ The dataset is derived from a combination of resources, including:
  * **Release Date:** December 27, 2024
  * **Contact:** We welcome any feedback or corrections to improve the dataset quality.

- **License**
+ ---
+
+ # License

  This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

- **Getting Started**
+ ---
+
+ # Getting Started

  You can access the dataset on Hugging Face using the following commands:

 
@@ -180,6 +186,52 @@ dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_en")
  # For Malay data
  dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_ms")
  ```
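As a quick sanity check after loading, something along the lines of the sketch below can be used to confirm the splits and column names. This is only an illustrative snippet: the card does not spell out the exact split or column names, so the code prints whatever is actually present instead of assuming keys such as `Questions`, `Options`, or `Answers`.

```python
from datasets import load_dataset

# English configuration; use name="data_ms" for the Malay questions.
dataset = load_dataset("Supa-AI/STEM-en-ms", name="data_en")

# Show the available splits and their column names.
print(dataset)

# Print the first record of the first split to see how questions,
# options, and answer keys are actually stored.
first_split = list(dataset.keys())[0]
example = dataset[first_split][0]
for column, value in example.items():
    print(f"{column}: {str(value)[:120]}")
```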
+ ---
+
+ # Bilingual STEM Dataset LLM Leaderboard
+
+ This document summarizes the evaluation results for various language models based on **5-shot** and **First Token Accuracy**. The evaluation was conducted across four configurations:
+
+ - **en_withfigures**: English prompts including figures.
+ - **en_withoutfigures**: English prompts excluding figures.
+ - **ms_withfigures**: Malay prompts including figures.
+ - **ms_withoutfigures**: Malay prompts excluding figures.
+
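For readers who want to reproduce a comparable setup without going through the repository's `eval.py`, the sketch below shows one way a 5-shot, text-only prompt (the *withoutfigures* configurations) could be assembled. The `question`, `options`, and `answers` keys are illustrative placeholders based on the field descriptions above, not confirmed column names, and the exact prompt template used for the leaderboard is not documented here.

```python
def format_item(item: dict, with_answer: bool) -> str:
    """Render one multiple-choice item; the key names here are assumed, not confirmed."""
    lines = [item["question"]]
    for key, text in item["options"].items():  # e.g. {"A": "...", "B": "...", ...}
        lines.append(f"{key}. {text}")
    lines.append(f"Answer: {item['answers'] if with_answer else ''}")
    return "\n".join(lines)


def build_five_shot_prompt(exemplars: list[dict], test_item: dict) -> str:
    """Five solved exemplars followed by the unsolved test question."""
    shots = [format_item(x, with_answer=True) for x in exemplars[:5]]
    shots.append(format_item(test_item, with_answer=False))
    return "\n\n".join(shots)
```

Ending the test item with a bare `Answer:` line is what makes a first-token comparison against the gold option key meaningful.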
+ ---
+
+ ## Results Table
+
+ | **Model** | **en_withfigures** | **en_withoutfigures** | **ms_withfigures** | **ms_withoutfigures** |
+ |---------------------------------|--------------------|-----------------------|--------------------|-----------------------|
+ | **gemini-2.0-flash-exp** | __63.70%__ | **75.16%** | __63.36%__ | **75.47%** |
+ | **gemini-1.5-flash** | __49.66%__ | __67.39%__ | __50.00%__ | __64.28%__ |
+ | **Qwen/Qwen2-VL-72B-Instruct** | __58.22%__ | __69.25%__ | __57.53%__ | __63.66%__ |
+ | **gpt-4o** | __47.95%__ | __66.15%__ | __50.00%__ | __68.01%__ |
+ | **gpt-4o-mini** | __41.10%__ | __55.90%__ | __38.36%__ | __52.80%__ |
+ | **pixtral-large-2411** | __42.81%__ | __64.29%__ | __35.27%__ | __60.87%__ |
+ | **pixtral-12b-2409** | __24.66%__ | __48.45%__ | __24.66%__ | __39.13%__ |
+ | **DeepSeek-V3** | None | **79.19%** | None | **76.40%** |
+ | **Qwen2.5-72B-Instruct** | None | __74.53%__ | None | __72.98%__ |
+ | **Meta-Llama-3.3-70B-Instruct** | None | __67.08%__ | None | __58.07%__ |
+ | **Llama-3.2-90B-Vision-Instruct** | None | __65.22%__ | None | __58.07%__ |
+ | **sail/Sailor2-20B-Chat** | None | __66.46%__ | None | __61.68%__ |
+ | **mallam-small** | None | __61.49%__ | None | __55.28%__ |
+ | **mistral-large-latest** | None | __60.56%__ | None | __53.42%__ |
+ | **google/gemma-2-27b-it** | None | __58.07%__ | None | __57.76%__ |
+ | **SeaLLMs-v3-7B-Chat** | None | __50.93%__ | None | __45.96%__ |
+
+ ---
+
+ ## Notes
+
+ In the repository, there is an `eval.py` script that can be used to run the evaluation for any other LLM.
+
+ - The evaluation results are based on the specific dataset and methodology employed.
+ - The "First Token Accuracy" metric emphasizes the accuracy of predicting the initial token correctly.
+ - Further analysis might be needed to determine the models' suitability for specific tasks.
+
+ ---
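The first-token criterion can be approximated at the text level without any model-specific tooling: take the first token of the model's reply and compare it against the gold option key from the `Answers` field. The sketch below is only an approximation of that idea; `eval.py` in the repository remains the reference implementation.

```python
import string


def first_token_correct(model_reply: str, gold_key: str) -> bool:
    """Does the first token of the reply equal the gold option key (e.g. "C")?"""
    tokens = model_reply.strip().split()
    if not tokens:
        return False
    first = tokens[0].strip(string.punctuation).upper()
    return first == gold_key.upper()


def first_token_accuracy(replies: list[str], gold_keys: list[str]) -> float:
    """Fraction of items whose first reply token matches the correct option key."""
    pairs = list(zip(replies, gold_keys))
    if not pairs:
        return 0.0
    return sum(first_token_correct(r, g) for r, g in pairs) / len(pairs)


# "C." matches gold "C"; "The answer is B" does not start with the key, so it scores 0.
print(first_token_accuracy(["C. Momentum is conserved.", "The answer is B"], ["C", "B"]))  # 0.5
```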
 
  **Contributors**
  - [Gele](https://huggingface.co/Geleliong)