Create README.md
README.md ADDED
@@ -0,0 +1,73 @@
---
language_model: true
license: apache-2.0
tags:
- text-generation
- language-modeling
- multilingual
- pytorch
- transformers

datasets:
- wikimedia/wikipedia
metrics:
- cross_entropy_loss
language:
- ary
---

# Darija-GPT: Small Multilingual Language Model (Darija Arabic)

## Model Description

This is a small multilingual language model based on a GPT-like Transformer architecture. It is trained from scratch on a subset of Wikipedia data in **ary** (Moroccan Darija) for demonstration and experimentation.

### Architecture

- Transformer-based, decoder-only language model.
- Reduced model dimensions (`n_embd=768`, `n_head=12`, `n_layer=12`) for faster training and a smaller model, making it suitable for resource-constrained environments (see the configuration sketch below).
- Uses a Byte-Pair Encoding (BPE) tokenizer trained on the same Wikipedia data.
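
For illustration, the dimensions above map onto a standard decoder-only configuration. The sketch below uses `GPT2Config` from `transformers` purely as an example; the vocabulary size and context length are placeholder assumptions, not the values actually used for this model.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Sketch of a GPT-like decoder-only config with the dimensions listed above.
# vocab_size and n_positions are assumptions, not the values used for this model.
config = GPT2Config(
    vocab_size=16_000,   # placeholder; set to the trained BPE tokenizer's vocab size
    n_positions=512,     # placeholder context length
    n_embd=768,
    n_head=12,
    n_layer=12,
)
model = GPT2LMHeadModel(config)  # randomly initialized model with this shape
```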

### Training Data

- Trained on a Wikipedia subset in the following language:
  - ary
- The dataset is prepared and encoded to be efficient for training smaller models (see the data-loading sketch below).
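
As a rough sketch of how such a subset can be obtained and how a BPE tokenizer might be trained on it, the snippet below uses the `wikimedia/wikipedia` dataset named in the metadata. The dump name `20231101.ary`, vocabulary size, and other settings are assumptions for illustration, not a record of the actual preprocessing.

```python
import os

from datasets import load_dataset
from tokenizers import ByteLevelBPETokenizer

# Load the Moroccan Darija (ary) Wikipedia subset.
# "20231101.ary" is an assumed dump/config name; check the dataset card for available configs.
wiki = load_dataset("wikimedia/wikipedia", "20231101.ary", split="train")

# Train a small byte-level BPE tokenizer on the article text (settings are illustrative).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    (article["text"] for article in wiki),
    vocab_size=16_000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)

os.makedirs("tokenizer", exist_ok=True)
tokenizer.save_model("tokenizer")  # writes vocab.json and merges.txt
```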

### Limitations

- **Small Model:** The parameter count is limited to approximately 30 million, resulting in reduced capacity compared to larger models.
- **Limited Training Data:** Trained on a subset of Wikipedia, which is relatively small compared to the massive datasets used for state-of-the-art models.
- **Not State-of-the-Art:** Performance is not expected to be cutting-edge due to size and data limitations.
- **Potential Biases:** May exhibit biases from the Wikipedia training data and may not generalize perfectly to all Darija dialects or real-world text.

## Intended Use

- Primarily for **research and educational purposes**.
- Demonstrating **language modeling in ary**.
- As a **starting point** for further experimentation in low-resource NLP, model compression, or fine-tuning on specific Darija tasks.
- For **non-commercial use** only.

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("{HF_REPO_ID}")  # Replace with your actual HF Hub repo ID
model = AutoModelForCausalLM.from_pretrained("{HF_REPO_ID}")  # Replace with your actual HF Hub repo ID

# Example usage: generate a short continuation of a prompt.
prompt = "السلام عليكم"  # example prompt; replace with any Darija text
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Plot

![Training Plot](plots/training_plot.png)

This plot shows the training and validation loss curves over epochs.
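
The plotted loss is the cross-entropy listed under `metrics` in the metadata. Below is a minimal sketch of computing that loss (and the corresponding perplexity) on held-out text, assuming `model` and `tokenizer` are loaded as in the "How to Use" section; the evaluation text itself is a placeholder.

```python
import math

import torch

# Minimal sketch: cross-entropy loss (and perplexity) of the model on held-out text,
# assuming `model` and `tokenizer` are already loaded as shown above.
text = "..."  # replace with held-out Darija text
inputs = tokenizer(text, return_tensors="pt")

model.eval()
with torch.no_grad():
    # Passing labels=input_ids makes the model return the causal-LM cross-entropy loss.
    out = model(**inputs, labels=inputs["input_ids"])

loss = out.loss.item()
print(f"cross-entropy: {loss:.3f}  perplexity: {math.exp(loss):.1f}")
```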