Delta-Vector committed: Update README.md
README.md CHANGED

@@ -45,7 +45,7 @@ Can I ask a question?<|im_end|>
 
 ## Support
 
-## No longer needed as LCPP has merged support - just update
+## No longer needed as LCPP has merged support - just update.
 
 To run inference on this model, you'll need to use Aphrodite, vLLM or EXL 2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the llama 3.1 rope_freqs issue with custom head dimensions.
 
@@ -170,6 +170,3 @@ special_tokens:
 The training was done for 2 epochs. We used 2 x [RTX 6000s](https://store.nvidia.com/en-us/nvidia-rtx/products/nvidia-rtx-6000-ada-generation/) GPUs graciously provided by [Kubernetes_Bad](https://huggingface.co/kubernetes-bad) for the full-parameter fine-tuning of the model.
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-
-## Safety
-...