---
base_model:
- Kaoeiri/MS-Quadrosiac-2409-22B
- DigitalSouls/BlackSheep-DigitalSoul-22B
- Kaoeiri/MS-Inky-2409-22B
- TheDrummer/Cydonia-22B-v1.1
- anthracite-org/magnum-v4-22b
- Darkknight535/MS-Moonlight-22B-v3
- Kaoeiri/MS_Moingooistral-2409-22B
- hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
- Envoid/Mistral-Small-NovusKyver
- Jellywibble/MistralSmall1500CTXDummy
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- Kaoeiri/MS_a-coolyte-2409-22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5
- unsloth/Mistral-Small-Instruct-2409
- crestf411/MS-sunfall-v0.7.0
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- TroyDoesAI/BlackSheep-MermaidMistral-22B
- InferenceIllusionist/SorcererLM-22B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: cc-by-nc-nd-4.0
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) as the base model.
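For intuition, DARE-TIES operates on "task vectors" (each donor model's weights minus the base): DARE randomly drops a fraction of each task vector's entries and rescales the survivors to preserve their expected magnitude, then TIES elects a majority sign per parameter and discards contributions that disagree before summing. The NumPy sketch below is a toy illustration of that idea, not mergekit's actual implementation; the function names are invented here.

```python
import numpy as np

def dare(delta, density, rng):
    # DARE: randomly keep each entry with probability `density`,
    # rescaling survivors by 1/density so the expected value is unchanged.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def ties_merge(base, deltas, weights, density, rng):
    # Sparsify each weighted task vector (fine-tuned weights minus base).
    sparse = [w * dare(d, density, rng) for d, w in zip(deltas, weights)]
    stacked = np.stack(sparse)
    # TIES sign election: the summed sign per parameter wins; contributions
    # that disagree with the elected sign are zeroed before the final sum.
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta

rng = np.random.default_rng(0)
base = np.zeros(4)
delta = np.array([1.0, -2.0, 3.0, 4.0])
# With density=1.0 and a single donor, the task vector survives intact.
print(ties_merge(base, [delta], [1.0], density=1.0, rng=rng))
```

In a real merge the `density` and `weight` values vary per donor (as in the configuration below), and the election happens per tensor across all nineteen models.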
### Models Merged

The following models were included in the merge:

* [Kaoeiri/MS-Quadrosiac-2409-22B](https://huggingface.co/Kaoeiri/MS-Quadrosiac-2409-22B)
* [DigitalSouls/BlackSheep-DigitalSoul-22B](https://huggingface.co/DigitalSouls/BlackSheep-DigitalSoul-22B)
* [Kaoeiri/MS-Inky-2409-22B](https://huggingface.co/Kaoeiri/MS-Inky-2409-22B)
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
* [Darkknight535/MS-Moonlight-22B-v3](https://huggingface.co/Darkknight535/MS-Moonlight-22B-v3)
* [Kaoeiri/MS_Moingooistral-2409-22B](https://huggingface.co/Kaoeiri/MS_Moingooistral-2409-22B)
* [hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit](https://huggingface.co/hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit)
* [Envoid/Mistral-Small-NovusKyver](https://huggingface.co/Envoid/Mistral-Small-NovusKyver)
* [Jellywibble/MistralSmall1500CTXDummy](https://huggingface.co/Jellywibble/MistralSmall1500CTXDummy)
* [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1)
* [Kaoeiri/MS_a-coolyte-2409-22B](https://huggingface.co/Kaoeiri/MS_a-coolyte-2409-22B)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B)
* [Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5](https://huggingface.co/Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5)
* [crestf411/MS-sunfall-v0.7.0](https://huggingface.co/crestf411/MS-sunfall-v0.7.0)
* [Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small)
* [TroyDoesAI/BlackSheep-MermaidMistral-22B](https://huggingface.co/TroyDoesAI/BlackSheep-MermaidMistral-22B)
* [InferenceIllusionist/SorcererLM-22B](https://huggingface.co/InferenceIllusionist/SorcererLM-22B)

### Configuration

The following YAML
configuration was used to produce this model:

```yaml
models:
  # Core Fiction and Character Detail Models (Increased Precision)
  - model: Kaoeiri/MS_Moingooistral-2409-22B   # Monster fiction core
    parameters:
      weight: 0.40    # Increased for better monster/character detail
      density: 1.30   # Increased for richer character descriptions
  - model: Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2-Cydonia-vXXX-22B-5   # Main writing engine
    parameters:
      weight: 1.0     # Maximized for core writing
      density: 0.85   # Slightly increased for deeper character development
  - model: anthracite-org/magnum-v4-22b   # Added for writing recap and precision
    parameters:
      weight: 0.95
      density: 0.84

  # World Building & Character Interaction
  - model: Kaoeiri/MS-Inky-2409-22B   # Descriptive world dynamics
    parameters:
      weight: 0.45    # Increased for richer character environments
      density: 0.82   # Increased for better world-character integration
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small   # Rich interaction
    parameters:
      weight: 0.42    # Increased for deeper character interactions
      density: 0.78

  # Character Development Core
  - model: DigitalSouls/BlackSheep-DigitalSoul-22B   # Combat and conflict
    parameters:
      weight: 0.30    # Increased for better character conflict
      density: 0.75

  # Magical Elements (Rebalanced NovusKyver Sister)
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.15    # Increased for magical character traits
      density: 0.78   # Increased for better integration

  # Secondary Character Enhancement
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.15    # Slight increase for character depth
      density: 0.68
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.18    # Increased for writing precision
      density: 0.72
  - model: Kaoeiri/MS_a-coolyte-2409-22B
    parameters:
      weight: 0.22    # Increased for fictional character detail
      density: 0.73

  # Character Personality Development
  - model: Kaoeiri/MS-Quadrosiac-2409-22B
    parameters:
      weight: 0.18
      density: 0.73

  # Enhanced Story and Character Building
  - model: hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
    parameters:
      weight: 0.25    # Increased for better character narrative
      density: 0.72
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.15    # Increased for character roleplay
      density: 0.65
  - model: Darkknight535/MS-Moonlight-22B-v3
    parameters:
      weight: 0.25    # Increased for character detail
      density: 0.65

  # Reduced weight but maintained for diversity
  - model: Jellywibble/MistralSmall1500CTXDummy
    parameters:
      weight: 0.12
      density: 0.62

  # Cultural and Character Depth
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.22    # Increased for news and cultural character traits
      density: 0.62

  # Rebalanced NovusKyver (modified to maintain benefits while reducing instruction avoidance)
  - model: Envoid/Mistral-Small-NovusKyver
    parameters:
      weight: 0.20    # Reduced to minimize instruction avoidance
      density: 0.74   # Increased for better integration
  - model: TroyDoesAI/BlackSheep-MermaidMistral-22B
    parameters:
      weight: 0.25    # Increased for character personality
      density: 0.73

merge_method: dare_ties
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.95     # Increased for maximum character detail
  epsilon: 0.035    # Reduced for more consistent character behavior
  lambda: 1.50      # Increased to enhance character creativity
dtype: bfloat16
tokenizer_source: union
```
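To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's command-line tool. A minimal sketch, assuming the config is saved as `config.yml` and the output directory name is a placeholder; downloading and merging nineteen 22B checkpoints requires substantial disk space and memory:

```shell
pip install mergekit
mergekit-yaml config.yml ./merged-model --cuda
```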