Model Card for alzheimer_llm_model

This model is a fine-tuned version of unsloth/mistral-7b-bnb-4bit. It has been trained using TRL.

Quick start

from transformers import pipeline

question = "What is Alzheimer's disease?"
generator = pipeline("text-generation", model="safoinetl/llm_model", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])

Training procedure

This model was trained with supervised fine-tuning (SFT); training metrics were logged to Weights & Biases.
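The exact training script, dataset, and hyperparameters for this model are not published. As a rough illustration only, an SFT run with TRL 0.12 on the base model named in this card might look like the following; the dataset contents and every hyperparameter below are assumptions, not the values actually used.

```python
# Illustrative only: dataset rows and hyperparameters are placeholders,
# not the configuration used to train alzheimer_llm_model.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical instruction/answer pairs in the chat "messages" format.
train_dataset = Dataset.from_dict({
    "messages": [
        [
            {"role": "user", "content": "What is Alzheimer's disease?"},
            {"role": "assistant",
             "content": "Alzheimer's disease is a progressive "
                        "neurodegenerative disorder..."},
        ],
    ]
})

training_args = SFTConfig(
    output_dir="alzheimer_llm_model",
    max_seq_length=2048,            # assumption
    per_device_train_batch_size=2,  # assumption
    learning_rate=2e-4,             # assumption
)

trainer = SFTTrainer(
    model="unsloth/mistral-7b-bnb-4bit",  # base model from this card
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()  # requires a CUDA GPU and downloads the base model
```

Running this requires a GPU with enough memory for the 4-bit base model; it is a configuration sketch, not a reproduction recipe.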

Framework versions

  • TRL: 0.12.1
  • Transformers: 4.46.2
  • Pytorch: 2.5.1+cu121
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
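To reproduce the environment, the versions above can be pinned at install time (package names are the standard PyPI ones; the PyTorch CUDA build may need the matching index URL for your system):

```shell
pip install trl==0.12.1 transformers==4.46.2 datasets==3.1.0 tokenizers==0.20.3 torch==2.5.1
```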

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}

Model details

  • Model size: 7.24B parameters
  • Tensor type: FP16
  • Format: Safetensors