prithivMLmods committed on
Commit eb41a6c · verified · 1 Parent(s): 39a8852

Adding Evaluation Results

This is an automated PR created with [this space](https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard)!

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions
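
For reference, the `model-index` block this PR adds is machine-readable. Below is a minimal sketch of reading those results back after merge, assuming the `huggingface_hub` `ModelCard` API; it is illustrative only and not part of the PR:

```python
# Illustrative only: read the eval results that the added `model-index`
# section exposes, using huggingface_hub's ModelCard helper (assumed API).
from huggingface_hub import ModelCard

card = ModelCard.load("prithivMLmods/Viper-Coder-v0.1")

# card.data.eval_results is parsed from the YAML `model-index` block;
# each entry carries the dataset name, metric type, and reported value.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```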

Files changed (1)
  1. README.md +114 -1
README.md CHANGED

@@ -9,6 +9,105 @@ library_name: transformers
  tags:
  - qwen-optimized-coder
  - viper🐍
+ model-index:
+ - name: Viper-Coder-v0.1
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: wis-k/instruction-following-eval
+       split: train
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 55.21
+       name: averaged accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: SaylorTwift/bbh
+       split: test
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 44.63
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: lighteval/MATH-Hard
+       split: test
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 31.87
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       split: train
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 13.87
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 13.03
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 32.53
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FViper-Coder-v0.1
+       name: Open LLM Leaderboard
  ---

  ![coderx.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ncJZH_SSIpEr16oAq4qDF.png)

@@ -75,4 +174,18 @@ print(response)
  2. **Language-Specific Variability**: Performance may vary across supported languages, especially for low-resource languages.
  3. **Potential Error Accumulation**: Long-text generation can sometimes introduce inconsistencies over extended outputs.
  4. **Limited Real-World Awareness**: Knowledge is restricted to training data and may not reflect recent world events.
- 5. **Prompt Sensitivity**: Outputs can depend on the specificity and clarity of the input prompt.
+ 5. **Prompt Sensitivity**: Outputs can depend on the specificity and clarity of the input prompt.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Viper-Coder-v0.1-details)!
+ Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FViper-Coder-v0.1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+ | Metric              | Value (%) |
+ |---------------------|----------:|
+ | **Average**         |     31.86 |
+ | IFEval (0-Shot)     |     55.21 |
+ | BBH (3-Shot)        |     44.63 |
+ | MATH Lvl 5 (4-Shot) |     31.87 |
+ | GPQA (0-shot)       |     13.87 |
+ | MuSR (0-shot)       |     13.03 |
+ | MMLU-PRO (5-shot)   |     32.53 |
+
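
As a sanity check, the **Average** row in the added table works out to the unweighted mean of the six benchmark scores listed below it; a quick illustrative computation:

```python
# Illustrative check: the "Average" value equals the plain mean of the six
# benchmark scores reported in the added table.
scores = {
    "IFEval (0-Shot)": 55.21,
    "BBH (3-Shot)": 44.63,
    "MATH Lvl 5 (4-Shot)": 31.87,
    "GPQA (0-shot)": 13.87,
    "MuSR (0-shot)": 13.03,
    "MMLU-PRO (5-shot)": 32.53,
}
print(f"Average: {sum(scores.values()) / len(scores):.2f}")  # -> Average: 31.86
```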