Barcenas Llama3 8b ORPO

Model trained with the ORPO (Odds Ratio Preference Optimization) method, based on Llama 3 8B, specifically: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct

The model was trained on the dataset reciperesearch/dolphin-sft-v0.1-preference, which combines Dolphin data with GPT-4 to improve its conversation sections.
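The card does not show the ORPO objective itself. As a hedged illustration (not code from this model's training run), ORPO augments the standard supervised fine-tuning loss with an odds-ratio preference term. A minimal sketch of that term, assuming `p_chosen` and `p_rejected` are the model's length-normalized probabilities for the preferred and rejected responses:

```python
import math

def odds(p: float) -> float:
    # odds(p) = p / (1 - p); p is an assumed length-normalized
    # sequence probability in (0, 1)
    return p / (1.0 - p)

def orpo_odds_ratio_loss(p_chosen: float, p_rejected: float) -> float:
    # ORPO's preference term: -log sigmoid(log(odds(chosen) / odds(rejected)))
    # It shrinks as the model favors the chosen response over the rejected one.
    log_odds_ratio = math.log(odds(p_chosen) / odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

# The full ORPO objective adds this term, scaled by a weight lambda,
# to the usual supervised fine-tuning (cross-entropy) loss.
```

When both responses are equally likely the term equals -log(0.5), and it decreases as the chosen response's odds grow relative to the rejected one's.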

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
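A minimal usage sketch (not from the original card), assuming the Hub repo id Danielbrdz/Barcenas-Llama3-8b-ORPO and the standard transformers API; the weight download is kept behind `main()` so the file can be read or imported without the library or the model weights:

```python
MODEL_ID = "Danielbrdz/Barcenas-Llama3-8b-ORPO"  # Hub repo id (assumed from this card)

def main() -> None:
    # transformers is imported lazily so the sketch can be imported
    # without the library installed
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    # Llama 3 instruct models ship a chat template; apply_chat_template
    # renders the messages into the prompt format the model expects
    messages = [{"role": "user", "content": "Hola, ¿quién eres?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```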

Safetensors · Model size: 8.03B params · Tensor type: FP16

Model tree for Danielbrdz/Barcenas-Llama3-8b-ORPO: 20 merges · 3 quantizations

Spaces using Danielbrdz/Barcenas-Llama3-8b-ORPO: 7