agentlans committed on
Commit 7bd18fd · verified · 1 Parent(s): 43e0c3d

Update README.md

Files changed (1):
  1. README.md +69 -58
README.md CHANGED
@@ -1,58 +1,69 @@
- ---
- license: llama3.1
- ---
- # Llama 3.1 Devilish
-
- This model is an experimental Llama 3.1-based merge, inspired by the approach used in [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B). It combines top-performing MMLU-Pro models using the 8.03 billion parameter Llama architecture from the Open LLM Leaderboard as of January 21, 2025.
-
- ## Model Details
-
- - **Architecture:** Llama 3.1 (8.03B parameters)
- - **Training:** Merged from top MMLU-Pro models, with additional SFT
- - **Release Date:** January 21, 2025
-
- ## Key Features
-
- 1. **Merged Architecture:** Combines high-performing MMLU-Pro models to enhance overall capabilities.
- 2. **Llama 3 Compatibility:** Additional Supervised Fine-Tuning (SFT) ensures adherence to Llama 3 prompt format.
- 3. **SFT Dataset:** [agentlans/crash-course](https://huggingface.co/datasets/agentlans/crash-course) dataset (1200 row configuration) for supervised fine-tuning in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
- 4. **Fine-Tuning Approach:**
-    - 1 epoch training
-    - Rank 4 LoRA
-    - Alpha = 4
-    - rslora
-
- ## Merge Configuration
-
- The model was created using [mergekit](https://github.com/arcee-ai/mergekit) with the following merge configuration:
-
- ```yaml
- models:
-   - model: DreadPoor/LemonP-8B-Model_Stock
-     parameters:
-       density: 0.6
-       weight: 0.16
-   - model: Youlln/1PARAMMYL-8B-ModelStock
-     parameters:
-       density: 0.6
-       weight: 0.13
-   - model: jaspionjader/f-2-8b
-     parameters:
-       density: 0.6
-       weight: 0.10
-   - model: Etherll/SuperHermes
-     parameters:
-       density: 0.6
-       weight: 0.08
- merge_method: dare_ties
- base_model: meta-llama/Llama-3.1-8B
- dtype: bfloat16
- ```
-
- ## Usage and Limitations
-
- This experimental model is designed for research and development purposes. Users should be aware of potential biases and limitations inherent in language models. Always validate outputs and use the model responsibly.
-
- ## Future Work
-
- Further evaluation and fine-tuning may be necessary to optimize performance across various tasks. Researchers are encouraged to build upon this experimental merge to advance the capabilities of Llama-based models.
+ ---
+ license: llama3.1
+ datasets:
+ - agentlans/crash-course
+ base_model:
+ - DreadPoor/LemonP-8B-Model_Stock
+ - Youlln/1PARAMMYL-8B-ModelStock
+ - jaspionjader/f-2-8b
+ - Etherll/SuperHermes
+ - meta-llama/Llama-3.1-8B
+ tags:
+ - merge
+ - mergekit
+ ---
+ # Llama 3.1 Daredevilish
+
+ This model is an experimental Llama 3.1-based merge, inspired by the approach used in [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B). It combines models with top MMLU-Pro scores on the Open LLM Leaderboard (as of January 21, 2025) that share the 8.03-billion-parameter Llama architecture.
+
+ ## Model Details
+
+ - **Architecture:** Llama 3.1 (8.03B parameters)
+ - **Training:** Merged from top MMLU-Pro models, with additional SFT
+ - **Release Date:** January 21, 2025
+
+ ## Key Features
+
+ 1. **Merged Architecture:** Combines high-performing MMLU-Pro models to enhance overall capabilities.
+ 2. **Llama 3 Compatibility:** Additional supervised fine-tuning (SFT) ensures adherence to the Llama 3 prompt format.
+ 3. **SFT Dataset:** The [agentlans/crash-course](https://huggingface.co/datasets/agentlans/crash-course) dataset (1,200-row configuration), used for supervised fine-tuning in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
+ 4. **Fine-Tuning Approach** (see the configuration sketch after this list):
+    - 1 training epoch
+    - LoRA rank 4
+    - Alpha = 4
+    - Rank-stabilized LoRA (rsLoRA) scaling
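+
+ As a rough illustration, the SFT step above corresponds to a LLaMA-Factory configuration along these lines. This is a minimal sketch, not the exact recipe: the file name, paths, and dataset registration are hypothetical, and option names can vary between LLaMA-Factory versions.
+
+ ```yaml
+ # sft_daredevilish.yaml -- hypothetical file name; adapt paths to your setup
+ model_name_or_path: ./llama-3.1-daredevilish-merge  # output of the merge below
+ stage: sft
+ do_train: true
+ finetuning_type: lora
+ lora_rank: 4
+ lora_alpha: 4
+ lora_target: all
+ use_rslora: true       # rank-stabilized LoRA scaling
+ dataset: crash_course  # agentlans/crash-course, registered in dataset_info.json
+ template: llama3       # enforce the Llama 3 prompt format
+ num_train_epochs: 1.0
+ output_dir: ./saves/daredevilish-sft
+ bf16: true
+ ```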
+
+ ## Merge Configuration
+
+ The model was created using [mergekit](https://github.com/arcee-ai/mergekit) with the following merge configuration:
+
+ ```yaml
+ models:
+   - model: DreadPoor/LemonP-8B-Model_Stock
+     parameters:
+       density: 0.6
+       weight: 0.16
+   - model: Youlln/1PARAMMYL-8B-ModelStock
+     parameters:
+       density: 0.6
+       weight: 0.13
+   - model: jaspionjader/f-2-8b
+     parameters:
+       density: 0.6
+       weight: 0.10
+   - model: Etherll/SuperHermes
+     parameters:
+       density: 0.6
+       weight: 0.08
+ merge_method: dare_ties
+ base_model: meta-llama/Llama-3.1-8B
+ dtype: bfloat16
+ ```
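+
+ With mergekit installed, a configuration like this is typically applied with the `mergekit-yaml` command-line tool (for example, `mergekit-yaml config.yaml ./output-model-directory`); consult the mergekit documentation for the exact flags supported by a given release.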
+
+ ## Usage and Limitations
+
+ This experimental model is intended for research and development. Users should be aware of the biases and limitations inherent in language models, always validate outputs, and use the model responsibly.
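+
+ Because the SFT step targets the Llama 3 prompt format, prompts should follow the standard Llama 3 chat template, shown below with placeholder text. In transformers, `tokenizer.apply_chat_template` produces this format automatically.
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ ```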
+
+ ## Future Work
+
+ Further evaluation and fine-tuning may be necessary to optimize performance across various tasks. Researchers are encouraged to build upon this experimental merge to advance the capabilities of Llama-based models.