Details
This is an experimental merge that I plan to use for future projects; it shows promising results in my limited testing. Further testing should be done, but I don't currently have the time or compute.
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: unsloth/Llama-3.2-3B
    parameters:
      weight: 0.5
      density: 0.7
  - model: unsloth/Llama-3.2-3B-Instruct
    parameters:
      weight: 0.5
      density: 0.6
merge_method: ties
base_model: unsloth/Llama-3.2-3B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: unsloth/Llama-3.2-3B
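Once the merge has been produced from this configuration (typically by passing the YAML to mergekit's mergekit-yaml command and pointing it at an output directory), the resulting model can be used like any other Llama checkpoint. Below is a minimal sketch, not part of the original card, of loading and sampling from the merged model with the transformers library; the repository id your-username/Llama-3.2-3B-ties-merge is a placeholder for wherever the merge output is saved or uploaded.

    # Minimal usage sketch (assumptions: placeholder repo id, transformers + torch installed).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "your-username/Llama-3.2-3B-ties-merge"  # hypothetical location of the merge output

    # Load the merged weights in bfloat16, matching the dtype used for the merge.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Generate a short completion to sanity-check the merge.
    prompt = "Explain what a TIES merge does in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))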