---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
  - tr
pipeline_tag: text-generation
tags:
  - phi
  - nlp
  - instruction-tuning
  - turkish
  - chat
  - conversational
inference:
  parameters:
    temperature: 0.7
widget:
  - messages:
      - role: user
        content: Internet'i nasıl açıklayabilirim?
library_name: transformers
---

# Phi-4 Turkish Instruction-Tuned Model

This model is a fine-tuned version of Microsoft's Phi-4 model for Turkish instruction-following tasks. It was trained on a 55,000-sample Turkish instruction dataset, making it well-suited for generating helpful and coherent responses in Turkish.

## Model Summary

|                    |                                                                  |
|--------------------|------------------------------------------------------------------|
| **Developers**     | Baran Bingöl (Hugging Face: barandinho)                          |
| **Base Model**     | [microsoft/phi-4](https://huggingface.co/microsoft/phi-4)        |
| **Architecture**   | 14B parameters, dense decoder-only Transformer                   |
| **Training Data**  | 55K Turkish instruction samples                                  |
| **Context Length** | 16K tokens                                                       |
| **License**        | [MIT](https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE) |
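The advertised context length can be checked against the model configuration without downloading the weights. A minimal sketch, assuming the fine-tune keeps Phi-4's usual `max_position_embeddings` field:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("barandinho/phi4-turkish-instruct")
print(config.max_position_embeddings)  # expected: 16384 (16K tokens)
```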

## Intended Use

### Primary Use Cases

- Turkish conversational AI systems
- Chatbots and virtual assistants
- Educational tools for Turkish users
- General-purpose text generation in Turkish

### Out-of-Scope Use Cases

- High-risk domains (medical, legal, financial advice) without proper evaluation
- Use in sensitive or safety-critical systems without safeguards

## Usage

### Input Formats

Given the nature of the training data, this model is best suited to prompts that use the following chat format:

```
<|im_start|>system<|im_sep|>
Sen yardımsever bir yapay zekasın.<|im_end|>
<|im_start|>user<|im_sep|>
Kuantum hesaplama neden önemlidir?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
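Rather than assembling these special tokens by hand, you can let the tokenizer build the prompt. A minimal sketch, assuming the fine-tune inherits Phi-4's chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("barandinho/phi4-turkish-instruct")

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir?"},
]

# add_generation_prompt=True appends the opening assistant tag,
# leaving the model to generate the reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```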

### With `transformers`

The code below uses 4-bit (INT4) quantization to run the model with a much smaller memory footprint, which is especially useful in GPU-constrained environments such as Google Colab. Keep in mind that the initial model download takes some time.

Check this notebook for interactive usage of the model.

```python
import os

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

model_name = "barandinho/phi4-turkish-instruct"

# 4-bit quantization with double quantization to cut memory usage further.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_use_double_quant=True)

# Folder for any layers that do not fit in GPU memory and are offloaded to disk.
os.makedirs("offload", exist_ok=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",              # spread layers across available devices
    torch_dtype=torch.float16,      # dtype for non-quantized modules
    quantization_config=quant_config,
    offload_folder="offload",
)

messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zekasın."},
    {"role": "user", "content": "Kuantum hesaplama neden önemlidir, basit terimlerle açıklayabilir misin?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,   # return only the newly generated text
    "do_sample": False,          # greedy decoding for reproducible output
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
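For interactive use it can be nicer to stream tokens as they are produced instead of waiting for the full completion. A minimal sketch reusing the `model`, `tokenizer`, and `messages` from above; `TextStreamer` prints decoded tokens to stdout as they arrive:

```python
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)

# Build the prompt with the chat template, then generate while streaming.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
model.generate(**inputs, max_new_tokens=500, do_sample=False, streamer=streamer)
```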