from datasets import load_dataset

# Training datasets pulled from the Hugging Face Hub
dataset = load_dataset("NovaSky-AI/Sky-T1_data_17k", split="train")
dataset2 = load_dataset("Nitral-AI/Discover-Intstruct-6k-Distilled-R1-70b-ShareGPT", split="train")
dataset3 = load_dataset("Nitral-Archive/RP_Alignment-ShareGPT", split="train")
dataset4 = load_dataset("alexandreteles/AlpacaToxicQA_ShareGPT", split="train")
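
These splits can be merged into a single training set before fine-tuning. A minimal sketch, assuming each split exposes a ShareGPT-style `conversations` column (splits with a different schema would need their columns renamed or mapped first):

```python
from datasets import concatenate_datasets

# Merge the four splits into one shuffled training set.
# Assumption: every split has a ShareGPT-style "conversations" column;
# splits using another schema are skipped here and would need mapping.
parts = [d for d in (dataset, dataset2, dataset3, dataset4)
         if "conversations" in d.column_names]
parts = [d.select_columns(["conversations"]) for d in parts]
combined = concatenate_datasets(parts).shuffle(seed=42)
print(combined)
```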

Uploaded model

  • Developed by: bunnycore
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
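
For reference, an Unsloth + TRL LoRA fine-tune of this kind is typically wired up roughly as below. This is only a sketch under assumed hyperparameters (LoRA rank, batch size, epochs, learning rate) and assumes the merged `conversations` have been rendered into a `text` column with the model's chat template; it is not the exact recipe used for this model.

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer  # TRL versions that accept these kwargs directly

# Load the 4-bit base model and attach LoRA adapters (hyperparameters assumed).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=combined,      # merged dataset from the sketch above
    dataset_text_field="text",   # assumes conversations were rendered to a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```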
