Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


bitnet_b1_58-xl - GGUF
- Model creator: https://huggingface.co/1bitLLM/
- Original model: https://huggingface.co/1bitLLM/bitnet_b1_58-xl/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bitnet_b1_58-xl.Q2_K.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q2_K.gguf) | Q2_K | 0.05GB |
| [bitnet_b1_58-xl.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.IQ3_XS.gguf) | IQ3_XS | 0.05GB |
| [bitnet_b1_58-xl.IQ3_S.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.IQ3_S.gguf) | IQ3_S | 0.05GB |
| [bitnet_b1_58-xl.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q3_K_S.gguf) | Q3_K_S | 0.05GB |
| [bitnet_b1_58-xl.IQ3_M.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.IQ3_M.gguf) | IQ3_M | 0.05GB |
| [bitnet_b1_58-xl.Q3_K.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q3_K.gguf) | Q3_K | 0.05GB |
| [bitnet_b1_58-xl.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q3_K_M.gguf) | Q3_K_M | 0.05GB |
| [bitnet_b1_58-xl.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q3_K_L.gguf) | Q3_K_L | 0.05GB |
| [bitnet_b1_58-xl.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.IQ4_XS.gguf) | IQ4_XS | 0.05GB |
| [bitnet_b1_58-xl.Q4_0.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q4_0.gguf) | Q4_0 | 0.05GB |
| [bitnet_b1_58-xl.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.IQ4_NL.gguf) | IQ4_NL | 0.05GB |
| [bitnet_b1_58-xl.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q4_K_S.gguf) | Q4_K_S | 0.05GB |
| [bitnet_b1_58-xl.Q4_K.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q4_K.gguf) | Q4_K | 0.05GB |
| [bitnet_b1_58-xl.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [bitnet_b1_58-xl.Q4_1.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q4_1.gguf) | Q4_1 | 0.05GB |
| [bitnet_b1_58-xl.Q5_0.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q5_0.gguf) | Q5_0 | 0.05GB |
| [bitnet_b1_58-xl.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [bitnet_b1_58-xl.Q5_K.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q5_K.gguf) | Q5_K | 0.05GB |
| [bitnet_b1_58-xl.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [bitnet_b1_58-xl.Q5_1.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q5_1.gguf) | Q5_1 | 0.05GB |
| [bitnet_b1_58-xl.Q6_K.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q6_K.gguf) | Q6_K | 0.05GB |
| [bitnet_b1_58-xl.Q8_0.gguf](https://huggingface.co/RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf/blob/main/bitnet_b1_58-xl.Q8_0.gguf) | Q8_0 | 0.07GB |

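These are standard llama.cpp GGUF files, so any reasonably recent llama.cpp build should be able to load them. Below is a minimal sketch of downloading one quant and running it locally; the choice of the Q4_K_M file, the prompt, and the generation flags are illustrative assumptions, not part of the original upload.

```
# Assumes huggingface_hub is installed (pip install -U huggingface_hub)
# and that llama.cpp has already been built, providing the llama-cli binary.
huggingface-cli download RichardErkhov/1bitLLM_-_bitnet_b1_58-xl-gguf \
  bitnet_b1_58-xl.Q4_K_M.gguf --local-dir .

# Run a short generation with the downloaded quant (prompt and flags are illustrative).
./llama-cli -m bitnet_b1_58-xl.Q4_K_M.gguf -p "The era of 1-bit LLMs" -n 128
```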

Original model description:
---
license: mit
---

This is a reproduction of the <a href="https://arxiv.org/abs/2402.17764">BitNet b1.58</a> paper. The models are trained on the <a href="https://github.com/togethercomputer/RedPajama-Data">RedPajama dataset</a> for 100B tokens. The hyperparameters, as well as the two-stage learning-rate and weight-decay schedules, follow the suggestions in the authors' follow-up <a href="https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf">paper</a>. All models are open-sourced in the <a href="https://huggingface.co/1bitLLM">repo</a>. We will train larger models and/or train on more tokens when resources are available.
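
For context, the "b1.58" in the name refers to the ternary weight quantization used in the BitNet b1.58 paper; as a brief summary (paraphrased from that paper, not stated in this card), each weight matrix is scaled by its mean absolute value and rounded to the values {-1, 0, +1}:

$$\widetilde{W} = \mathrm{RoundClip}\!\left(\frac{W}{\gamma + \epsilon},\, -1,\, 1\right), \qquad \gamma = \frac{1}{nm}\sum_{i,j} \lvert W_{ij} \rvert$$

where RoundClip(x, a, b) = max(a, min(b, round(x))) and ε is a small constant for numerical stability.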

## Results
PPL and zero-shot accuracy:
| Models | PPL | ARCe | ARCc | HS | BQ | OQ | PQ | WGe | Avg |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| FP16 700M (reported) | 12.33 | 54.7 | 23.0 | 37.0 | 60.0 | 20.2 | 68.9 | 54.8 | 45.5 |
| BitNet b1.58 700M (reported) | 12.87 | 51.8 | 21.4 | 35.1 | 58.2 | 20.0 | 68.1 | 55.2 | 44.3 |
| BitNet b1.58 700M (reproduced) | 12.78 | 51.4 | 21.8 | 35.0 | 59.6 | 20.6 | 67.5 | 55.4 | 44.5 |
| FP16 1.3B (reported) | 11.25 | 56.9 | 23.5 | 38.5 | 59.1 | 21.6 | 70.0 | 53.9 | 46.2 |
| BitNet b1.58 1.3B (reported) | 11.29 | 54.9 | 24.2 | 37.7 | 56.7 | 19.6 | 68.8 | 55.8 | 45.4 |
| BitNet b1.58 1.3B (reproduced) | 11.19 | 55.8 | 23.7 | 37.6 | 59.0 | 20.2 | 69.2 | 56.0 | 45.9 |
| FP16 3B (reported) | 10.04 | 62.1 | 25.6 | 43.3 | 61.8 | 24.6 | 72.1 | 58.2 | 49.7 |
| BitNet b1.58 3B (reported) | 9.91 | 61.4 | 28.3 | 42.9 | 61.5 | 26.6 | 71.5 | 59.3 | 50.2 |
| BitNet b1.58 3B (reproduced) | 9.88 | 60.9 | 28.0 | 42.3 | 58.3 | 26.0 | 71.4 | 60.3 | 49.6 |

The differences between the reported numbers and the reproduced results are likely due to variance in training data processing, random seeds, or other random factors.

## Evaluation
The evaluation pipelines are from the paper authors. Here are the commands to run the evaluation:
```
pip install lm-eval==0.3.0
```
```
python eval_ppl.py --hf_path 1bitLLM/bitnet_b1_58-3B --seqlen 2048
```
```
python eval_task.py --hf_path 1bitLLM/bitnet_b1_58-3B \
    --batch_size 1 \
    --tasks \
    --output_path result.json \
    --num_fewshot 0 \
    --ctx_size 2048
```
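
Since this card is for the xl checkpoint, the same commands can be pointed at it by swapping the `--hf_path` argument; for example (illustrative only, the description above uses the 3B path):
```
python eval_ppl.py --hf_path 1bitLLM/bitnet_b1_58-xl --seqlen 2048
```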