---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nllb-200-distilled-600M-OpenHQ-GL-EN
results: []
datasets:
- juanjucm/OpenHQ-SpeechT-GL-EN
language:
- gl
- en
---
# nllb-200-distilled-600M-OpenHQ-GL-EN
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [juanjucm/OpenHQ-SpeechT-GL-EN](https://huggingface.co/datasets/juanjucm/OpenHQ-SpeechT-GL-EN) dataset for the **Galician-to-English Machine Translation** task. It takes Galician text as input and generates the corresponding English translation.
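
Below is a minimal inference sketch, assuming the model is published as `juanjucm/nllb-200-distilled-600M-OpenHQ-GL-EN` (the repo id and example sentence are illustrative). NLLB models select the target language by forcing its language token at the start of generation; Galician and English use the codes `glg_Latn` and `eng_Latn`.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "juanjucm/nllb-200-distilled-600M-OpenHQ-GL-EN"  # assumed repo id

# NLLB tokenizers take a source-language code; Galician is "glg_Latn".
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="glg_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "O tempo en Galicia é moi cambiante."  # "The weather in Galicia is very changeable."
inputs = tokenizer(text, return_tensors="pt")

# Force English ("eng_Latn") as the first generated token to set the target language.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=256,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```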
This Machine Translation model was developed as the second stage of a Speech Translation cascade system for transcribing and translating Galician audio into English text. [This STT model](https://huggingface.co/juanjucm/whisper-large-v3-turbo-OpenHQ-GL) can be used as a first step to transcribe Galician audio into text. This MT model can then be applied to the generated Galician transcriptions to obtain English translations, as sketched below.
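
A rough sketch of that two-stage cascade follows; the audio path and pipeline wiring are illustrative, not the exact setup used in the workshop.

```python
from transformers import pipeline

# Stage 1: transcribe Galician speech with the STT model linked above.
asr = pipeline("automatic-speech-recognition",
               model="juanjucm/whisper-large-v3-turbo-OpenHQ-GL")

# Stage 2: translate the Galician transcript into English with this MT model.
mt = pipeline("translation",
              model="juanjucm/nllb-200-distilled-600M-OpenHQ-GL-EN",  # assumed repo id
              src_lang="glg_Latn", tgt_lang="eng_Latn")

transcript = asr("galician_audio.wav")["text"]   # placeholder audio file
english = mt(transcript)[0]["translation_text"]
print(english)
```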
The motivation behind this work is to increase the visibility of the Galician language, making it more accessible for non-Galician speakers to understand and engage with Galician audio content.
This model was developed during a 3-week Speech Translation workshop organised by [Yasmin Moslem](https://huggingface.co/ymoslem).
### Performance and training details
The baseline model achieved a BLEU score of **51.32** on the evaluation dataset.
After fine-tuning, it achieves the following results on the evaluation set:
- Loss: 0.0122
- **BLEU: 73.6259**
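
For reference, a minimal sketch of how such a BLEU score can be computed, assuming the common sacrebleu implementation exposed through the `evaluate` library (the sentences are illustrative):

```python
import evaluate

# sacrebleu expects plain predictions and a list of reference lists.
bleu = evaluate.load("sacrebleu")
predictions = ["The weather in Galicia is very changeable."]
references = [["The weather in Galicia is very changeable."]]
print(bleu.compute(predictions=predictions, references=references)["score"])  # 100.0 for an exact match
```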
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 8
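
As a sketch, these settings correspond roughly to the following `Seq2SeqTrainingArguments`; the output directory and the generation flag are assumptions, not taken from the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-200-distilled-600M-OpenHQ-GL-EN",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    seed=42,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=8,
    predict_with_generate=True,      # assumed: needed to compute BLEU during evaluation
)
```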
### Training results
We used the [BLEU score](https://en.wikipedia.org/wiki/BLEU) as our reference translation metric for selecting the best checkpoint after training.
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 14.2627 | 1.0 | 600 | 3.7799 | 61.8432 |
| 6.0125 | 2.0 | 1200 | 0.5403 | 66.7094 |
| 1.1534 | 3.0 | 1800 | 0.0243 | 69.1604 |
| 0.0748 | 4.0 | 2400 | 0.0147 | 70.7523 |
| 0.0125 | 5.0 | 3000 | 0.0131 | 73.1040 |
| 0.0095 | 6.0 | 3600 | 0.0126 | 73.2385 |
| 0.0081 | 7.0 | 4200 | 0.0122 | 73.8670 |
| 0.0072 | 8.0 | 4800 | 0.0122 | 73.6259 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0