---
base_model:
- Undi95/Llama-3-LewdPlay-8B
- Blackroot/Llama-3-LongStory-LORA
- nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/MrRoboto-BASE-v3-8b-64k
- gradientai/Llama-3-8B-Instruct-Gradient-4194k
- Rupesh2/OrpoLlama-3-8B-uncensored
- Blackroot/Llama-3-LongStory-LORA
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
- MrRobotoAI/MrRoboto-BASE-v1-8b-64k
- WeMake/Llama-3-8B-Instruct-V41-1048k
- MrRobotoAI/Llama-3-8B-Uncensored-test1
- Blackroot/Llama-3-LongStory-LORA
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K](https://huggingface.co/nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/MrRoboto-BASE-v3-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v3-8b-64k)
* [gradientai/Llama-3-8B-Instruct-Gradient-4194k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-4194k)
* [Rupesh2/OrpoLlama-3-8B-uncensored](https://huggingface.co/Rupesh2/OrpoLlama-3-8B-uncensored) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k)
* [MrRobotoAI/MrRoboto-BASE-v1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v1-8b-64k)
* [WeMake/Llama-3-8B-Instruct-V41-1048k](https://huggingface.co/WeMake/Llama-3-8B-Instruct-V41-1048k)
* [MrRobotoAI/Llama-3-8B-Uncensored-test1](https://huggingface.co/MrRobotoAI/Llama-3-8B-Uncensored-test1) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: gradientai/Llama-3-8B-Instruct-Gradient-4194k
  - model: WeMake/Llama-3-8B-Instruct-V41-1048k
  - model: nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K+Blackroot/Llama-3-LongStory-LORA
  - model: Rupesh2/OrpoLlama-3-8B-uncensored+Blackroot/Llama-3-LongStory-LORA
  - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA+Blackroot/Llama-3-LongStory-LORA
  - model: Undi95/Llama-3-LewdPlay-8B+Blackroot/Llama-3-LongStory-LORA
  - model: MrRobotoAI/Llama-3-8B-Uncensored-test1+Blackroot/Llama-3-LongStory-LORA
  - model: MrRobotoAI/MrRoboto-BASE-v3-8b-64k
  - model: MrRobotoAI/MrRoboto-BASE-v2.1-8b-64k
  - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k
parameters:
  weight: 1.0
merge_method: linear
dtype: float16
```
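Conceptually, the linear merge method above is just a weighted average of the corresponding parameter tensors across models; with every model given `weight: 1.0`, this reduces to a uniform average. The toy sketch below (a simplified illustration using NumPy, not mergekit's actual implementation) shows the idea on a single weight matrix:

```python
import numpy as np

def linear_merge(tensors, weights):
    """Weighted average of same-shape parameter tensors, as in a linear merge.
    Weights are normalized to sum to 1 before averaging."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return sum(wi * t for wi, t in zip(w, tensors))

# Toy example: three models' versions of one 2x2 weight matrix.
a = np.full((2, 2), 1.0)
b = np.full((2, 2), 2.0)
c = np.full((2, 2), 3.0)

# Equal weights (weight: 1.0 for every model) -> plain element-wise mean.
merged = linear_merge([a, b, c], weights=[1.0, 1.0, 1.0])
print(merged)  # every entry is 2.0, the mean of 1.0, 2.0, and 3.0
```

To reproduce the actual merge, the config above would be run through mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./output-model-directory`).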