---
language_model: true
license: apache-2.0
tags:
- text-generation
- language-modeling
- Multilingual
- pytorch
- transformers
datasets:
- wikimedia/wikipedia
metrics:
- cross_entropy_loss
language:
- ary
---
# Darija-GPT: Small Multilingual Language Model (Darija Arabic)
## Model Description
This is a small multilingual language model based on a Transformer architecture (GPT-like). It is trained from scratch on a subset of Wikipedia data in **ary** (Moroccan Darija Arabic) for demonstration and experimentation.
### Architecture
- Transformer-based language model (Decoder-only).
- Reduced model dimensions (`n_embd=768`, `n_head=12`, `n_layer=12`) for faster training and a smaller model size, making it suitable for resource-constrained environments (see the configuration sketch after this list).
- Uses Byte-Pair Encoding (BPE) tokenizer trained on the same Wikipedia data.
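For orientation, the stated dimensions map onto a standard `transformers` configuration roughly as follows. This is a minimal sketch assuming a GPT-2-style implementation, not the exact training setup: the `vocab_size` and `n_positions` values are placeholders, since the actual BPE vocabulary size and context length are not stated on this card.
```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=16000,   # assumption: size of the BPE vocabulary
    n_positions=512,    # assumption: maximum context length
    n_embd=768,         # embedding dimension stated above
    n_head=12,          # attention heads stated above
    n_layer=12,         # Transformer decoder blocks stated above
)

# Instantiate a randomly initialized model with this configuration.
model = GPT2LMHeadModel(config)
```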
### Training Data
- Trained on a Wikipedia subset in the following language:
- ary
- The dataset is prepared and encoded for efficient training of smaller models (see the loading sketch below).
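The raw data comes from the `wikimedia/wikipedia` dataset listed in the metadata. A minimal loading sketch follows; the `20231101.ary` snapshot name is an assumption (check the dataset card for the configurations actually available), and this is not the card's own preprocessing pipeline.
```python
from datasets import load_dataset

# Assumption: "20231101.ary" is an illustrative snapshot name for the
# Moroccan Darija (ary) Wikipedia subset.
wiki_ary = load_dataset("wikimedia/wikipedia", "20231101.ary", split="train")

print(wiki_ary)                   # number of articles and column names
print(wiki_ary[0]["text"][:200])  # first 200 characters of the first article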
### Limitations
- **Small Model:** Parameter count is limited to approximately 30 million, resulting in reduced capacity compared to larger models.
- **Limited Training Data:** Trained on a subset of Wikipedia, which is relatively small compared to massive datasets used for state-of-the-art models.
- **Not State-of-the-Art:** Performance is not expected to be cutting-edge due to size and data limitations.
- **Potential Biases:** May exhibit biases from the Wikipedia training data and may not generalize perfectly to all Darija dialects or real-world text.
## Intended Use
- Primarily for **research and educational purposes**.
- Demonstrating **language modeling in ary**.
- As a **starting point** for further experimentation in low-resource NLP, model compression, or fine-tuning on specific Darija tasks (a fine-tuning sketch follows this list).
- For **non-commercial use** only.
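As a rough illustration of the fine-tuning use case above, the sketch below continues training the published checkpoint on a plain-text Darija corpus with the `transformers` `Trainer`. The corpus file name, sequence length, and hyperparameters are placeholders, not recommended settings.
```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")

# GPT-style tokenizers often lack a pad token; reuse EOS for padding.
# Assumption: the tokenizer defines an EOS token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder corpus: any plain-text file with one passage per line works here.
raw = load_dataset("text", data_files={"train": "my_darija_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="darija-gpt-finetuned",
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized["train"],
    # mlm=False gives causal-LM labels (next-token prediction).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```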
## How to Use
You can use this model with the `transformers` library from Hugging Face. Make sure you have `transformers` installed (`pip install transformers`).
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")

prompt_text = "هذا نموذج لغوي صغير"  # Example prompt in Arabic/Darija: "This is a small language model"
input_ids = tokenizer.encode(prompt_text, return_tensors="pt").to(model.device)

# Generate text. Sampling (do_sample=True) is required for temperature and
# top_p to take effect; adjust max_new_tokens, temperature, and top_p as needed.
output = model.generate(input_ids, max_new_tokens=50, do_sample=True,
                        temperature=0.9, top_p=0.9)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)

print("Prompt:", prompt_text)
print("Generated text:", generated_text)
```
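Note that `temperature` and `top_p` only influence the output when sampling is enabled via `do_sample=True`; without it, `generate` falls back to greedy decoding and ignores those arguments.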
## Training Plot
![Training Plot](plots/training_plot.png)
This plot shows the training and validation loss curves over epochs.
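To reproduce a comparable number for the reported metric (cross-entropy loss) on your own held-out text, a minimal sketch is shown below. The evaluation text is a placeholder, and this is not the authors' actual evaluation script.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Duino/Darija-GPT")
model = AutoModelForCausalLM.from_pretrained("Duino/Darija-GPT")
model.eval()

# Placeholder held-out text; in practice use a proper validation split.
eval_text = "هذا نموذج لغوي صغير"
inputs = tokenizer(eval_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    outputs = model(**inputs, labels=inputs["input_ids"])

print("Cross-entropy loss:", outputs.loss.item())
print("Perplexity:", torch.exp(outputs.loss).item())
```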
## Intended Use
This model is primarily intended for research and educational purposes to demonstrate language modeling, especially in low-resource languages like Darija Arabic.
## Limitations
Please be aware of the limitations due to the small model size and limited training data, as detailed in the Model Description.