Adding Evaluation Results

#2
Files changed (1): README.md (+119 −12)
```diff
@@ -1,21 +1,115 @@
 ---
+language:
+- fr
+- en
 license: apache-2.0
-base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
 library_name: peft
 tags:
 - llama-factory
 - lora
+base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
 datasets:
-- Snit/french-conversation
-- Nekochu/novel17_train_alpaca_format
-- bofenghuang/vigogne
-- MaziyarPanahi/french_instruct_human_sharegpt
-- jpacifico/French-Alpaca-dataset-Instruct-110K
-- jpacifico/french-orca-dpo-pairs-revised
-
-language:
-- fr
-- en
+- Snit/french-conversation
+- Nekochu/novel17_train_alpaca_format
+- bofenghuang/vigogne
+- MaziyarPanahi/french_instruct_human_sharegpt
+- jpacifico/French-Alpaca-dataset-Instruct-110K
+- jpacifico/french-orca-dpo-pairs-revised
+model-index:
+- name: Llama-3.1-8B-french-DPO
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 46.56
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 30.03
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 4.08
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 5.48
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.56
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 26.82
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nekochu/Llama-3.1-8B-french-DPO
+      name: Open LLM Leaderboard
 ---
 
 - Similar to the old [Nekochu/Llama-2-13B-fp16-french](https://huggingface.co/Nekochu/Llama-2-13B-fp16-french) with additional datasets.
@@ -148,4 +242,17 @@ En explorant cette opposition fascinante entre la glace et le fe
 
 Note: Output by exl2-DPO. `QLoRA_french_sft` is more stable and avoids gibberish like `harmonieuseassistant.scalablytyped`.
 
-</details>
+</details>
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Nekochu__Llama-3.1-8B-french-DPO)
+
+| Metric              | Value |
+|---------------------|------:|
+| Avg.                | 20.76 |
+| IFEval (0-Shot)     | 46.56 |
+| BBH (3-Shot)        | 30.03 |
+| MATH Lvl 5 (4-Shot) |  4.08 |
+| GPQA (0-shot)       |  5.48 |
+| MuSR (0-shot)       | 11.56 |
+| MMLU-PRO (5-shot)   | 26.82 |
+
```
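The `Avg.` row appears to be the plain arithmetic mean of the six per-benchmark scores. A minimal sanity check, using only the values from the table above:

```python
# Recompute the leaderboard average from the six per-benchmark scores
# listed in the results table.
scores = {
    "IFEval (0-Shot)": 46.56,
    "BBH (3-Shot)": 30.03,
    "MATH Lvl 5 (4-Shot)": 4.08,
    "GPQA (0-shot)": 5.48,
    "MuSR (0-shot)": 11.56,
    "MMLU-PRO (5-shot)": 26.82,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")

# Agrees with the reported 20.76 up to rounding of the per-task scores.
assert abs(avg - 20.76) < 0.01
```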