---
license: apache-2.0
datasets:
- totally-not-an-llm/EverythingLM-data-V2-sharegpt
language:
- en
library_name: transformers
---

Trained for 3 epochs on the `totally-not-an-llm/EverythingLM-data-V2-sharegpt` dataset.

Prompt format:

```
### HUMAN:
{prompt}

### RESPONSE:
```

Note: I changed a few of the fine-tuning parameters this time around. I have no idea if it's any good, but feel free to give it a try!

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
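A minimal sketch of assembling a prompt in the format above before passing it to the model (the `format_prompt` helper is hypothetical, not part of this repo; the model id placeholder is an assumption):

```python
def format_prompt(prompt: str) -> str:
    # Wrap a user message in the template this model was trained on:
    # "### HUMAN:" followed by the prompt, then "### RESPONSE:".
    return f"### HUMAN:\n{prompt}\n\n### RESPONSE:\n"


text = format_prompt("Explain what a fine-tuned language model is.")
print(text)

# To generate with transformers (model id below is a placeholder):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="<this-model-id>")
# print(pipe(text, max_new_tokens=256)[0]["generated_text"])
```

The template must match exactly (including the `###` markers and newlines), since the model only saw this layout during fine-tuning.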