---
license: llama3.1
datasets:
- agentlans/crash-course
base_model:
- DreadPoor/LemonP-8B-Model_Stock
- Youlln/1PARAMMYL-8B-ModelStock
- jaspionjader/f-2-8b
- Etherll/SuperHermes
- meta-llama/Llama-3.1-8B
tags:
- merge
- mergekit
---

# Llama 3.1 Daredevilish

This model is an experimental Llama 3.1-based merge, inspired by the approach used in [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B). It combines top-performing 8.03B-parameter Llama-architecture models from the Open LLM Leaderboard's MMLU-Pro rankings as of January 21, 2025.

## Model Details

- **Architecture:** Llama 3.1 (8.03B parameters)
- **Training:** Merged from top MMLU-Pro models, with additional SFT
- **Release Date:** January 21, 2025

## Key Features

1. **Merged Architecture:** Combines high-performing MMLU-Pro models to enhance overall capability.
2. **Llama 3 Compatibility:** Additional supervised fine-tuning (SFT) ensures adherence to the Llama 3 prompt format (shown for reference after this list).
3. **SFT Dataset:** The [agentlans/crash-course](https://huggingface.co/datasets/agentlans/crash-course) dataset (1200-row configuration), used for supervised fine-tuning in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
4. **Fine-Tuning Approach** (see the configuration sketch after this list):
   - 1 epoch of training
   - Rank 4 LoRA
   - Alpha = 4
   - rsLoRA
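
For reference, the prompt format the SFT pass reinforces is the standard Llama 3 / 3.1 Instruct chat template (placeholders in braces):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|>
```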
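
As a minimal sketch, the corresponding LLaMA-Factory run could look like the YAML below. The rank, alpha, rsLoRA, and epoch values come from the list above; the model path, dataset key, and all other hyperparameters are illustrative assumptions, not the card's actual settings:

```yaml
# Hypothetical LLaMA-Factory SFT config; values not stated in the card are assumptions.
model_name_or_path: ./llama-3.1-daredevilish-merge  # assumed path to the raw merge

stage: sft
do_train: true
finetuning_type: lora
lora_rank: 4        # "Rank 4 LoRA"
lora_alpha: 4       # "Alpha = 4"
use_rslora: true    # rank-stabilized LoRA scaling

dataset: crash_course   # assumes the dataset is registered in dataset_info.json
template: llama3
cutoff_len: 2048        # assumption

num_train_epochs: 1.0           # "1 epoch of training"
per_device_train_batch_size: 2  # assumption
gradient_accumulation_steps: 8  # assumption
learning_rate: 1.0e-4           # assumption
bf16: true

output_dir: ./llama-3.1-daredevilish-sft  # illustrative
```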

## Merge Configuration

The model was created using [mergekit](https://github.com/arcee-ai/mergekit) with the following merge configuration:

```yaml
models:
  - model: DreadPoor/LemonP-8B-Model_Stock
    parameters:
      density: 0.6
      weight: 0.16
  - model: Youlln/1PARAMMYL-8B-ModelStock
    parameters:
      density: 0.6
      weight: 0.13
  - model: jaspionjader/f-2-8b
    parameters:
      density: 0.6
      weight: 0.10
  - model: Etherll/SuperHermes
    parameters:
      density: 0.6
      weight: 0.08
merge_method: dare_ties
base_model: meta-llama/Llama-3.1-8B
dtype: bfloat16
```
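
To reproduce the merge, a configuration in this format can typically be run with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model` (paths illustrative). Under `dare_ties`, `density` is the fraction of each model's delta from the base model retained after DARE's random pruning, and `weight` scales that model's contribution in the TIES-style sign-consensus merge.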

## Usage and Limitations

This experimental model is designed for research and development purposes. Users should be aware of potential biases and limitations inherent in language models. Always validate outputs and use the model responsibly.

## Future Work

Further evaluation and fine-tuning may be necessary to optimize performance across various tasks. Researchers are encouraged to build upon this experimental merge to advance the capabilities of Llama-based models.