# wav2vec2-xls-r-akan-100-hours
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7988
- Model Preparation Time: 0.0143
- Wer: 0.2968
- Cer: 0.0937
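WER and CER are both edit-distance metrics, computed over words and characters respectively. Model cards like this one typically rely on a library such as `jiwer` or `evaluate` for the actual numbers; the following is only a minimal, self-contained sketch of the underlying computation:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (one-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edits divided by reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note that both rates can exceed 1.0 when the hypothesis contains many insertions, which is why the first evaluation row below reports WER and CER of exactly 1.0 for a model still emitting degenerate output.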
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
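The total train batch size of 64 follows from the per-device batch size of 32 times 2 gradient accumulation steps, and the linear scheduler with a 0.1 warmup ratio ramps the learning rate up over the first 10% of optimizer steps, then decays it linearly to zero. A sketch of that schedule, assuming the standard `transformers` linear-warmup formula and an approximate total of 14,400 optimizer steps (inferred from the results table below, not stated in the card):

```python
def linear_schedule_lr(step, peak_lr=3e-4, total_steps=14_400, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to 0 (transformers-style)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Effective batch size: per-device batch * gradient accumulation steps
effective_batch = 32 * 2  # = 64, matching total_train_batch_size above
```

With these assumed numbers, the peak learning rate of 3e-4 is reached around step 1,440 and decays to zero by the final step.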
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer | Cer |
|---|---|---|---|---|---|---|
| 11.1522 | 1.7331 | 500 | 2.7710 | 0.0143 | 1.0 | 1.0 |
| 2.0881 | 3.4662 | 1000 | 0.3882 | 0.0143 | 0.3401 | 0.1057 |
| 0.8886 | 5.1993 | 1500 | 0.3437 | 0.0143 | 0.2956 | 0.0916 |
| 0.7671 | 6.9324 | 2000 | 0.3246 | 0.0143 | 0.2898 | 0.0891 |
| 0.6983 | 8.6655 | 2500 | 0.3230 | 0.0143 | 0.2810 | 0.0872 |
| 0.6688 | 10.3986 | 3000 | 0.3235 | 0.0143 | 0.2800 | 0.0872 |
| 0.6241 | 12.1317 | 3500 | 0.3273 | 0.0143 | 0.2828 | 0.0879 |
| 0.5917 | 13.8648 | 4000 | 0.3328 | 0.0143 | 0.2836 | 0.0886 |
| 0.5503 | 15.5979 | 4500 | 0.3366 | 0.0143 | 0.2803 | 0.0882 |
| 0.5163 | 17.3310 | 5000 | 0.3568 | 0.0143 | 0.2825 | 0.0889 |
| 0.487 | 19.0641 | 5500 | 0.3597 | 0.0143 | 0.2876 | 0.0899 |
| 0.446 | 20.7972 | 6000 | 0.3719 | 0.0143 | 0.2831 | 0.0895 |
| 0.416 | 22.5303 | 6500 | 0.4071 | 0.0143 | 0.2964 | 0.0928 |
| 0.3844 | 24.2634 | 7000 | 0.4167 | 0.0143 | 0.2928 | 0.0924 |
| 0.3526 | 25.9965 | 7500 | 0.4353 | 0.0143 | 0.2999 | 0.0942 |
| 0.3173 | 27.7296 | 8000 | 0.4568 | 0.0143 | 0.3076 | 0.0968 |
| 0.2892 | 29.4627 | 8500 | 0.4936 | 0.0143 | 0.2990 | 0.0936 |
| 0.265 | 31.1958 | 9000 | 0.5298 | 0.0143 | 0.3044 | 0.0957 |
| 0.2452 | 32.9289 | 9500 | 0.5566 | 0.0143 | 0.2922 | 0.0930 |
| 0.2244 | 34.6620 | 10000 | 0.5921 | 0.0143 | 0.2973 | 0.0943 |
| 0.2064 | 36.3951 | 10500 | 0.6147 | 0.0143 | 0.3169 | 0.0980 |
| 0.1937 | 38.1282 | 11000 | 0.6672 | 0.0143 | 0.3118 | 0.0968 |
| 0.1733 | 39.8614 | 11500 | 0.6968 | 0.0143 | 0.2997 | 0.0938 |
| 0.1644 | 41.5945 | 12000 | 0.7098 | 0.0143 | 0.3010 | 0.0955 |
| 0.1527 | 43.3276 | 12500 | 0.7449 | 0.0143 | 0.2998 | 0.0947 |
| 0.1488 | 45.0607 | 13000 | 0.7555 | 0.0143 | 0.3054 | 0.0955 |
| 0.1341 | 46.7938 | 13500 | 0.7626 | 0.0143 | 0.3010 | 0.0951 |
| 0.1277 | 48.5269 | 14000 | 0.7988 | 0.0143 | 0.2968 | 0.0937 |
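Validation loss rises steadily after roughly step 3000 while WER bottoms out around the same point, so the final checkpoint reported at the top of this card (WER 0.2968) is not the best one logged; selecting by minimum WER would pick an earlier checkpoint. A small sketch of that selection over a subset of the rows above:

```python
# (step, validation_loss, wer) triples copied from the training log (subset)
results = [
    (2000, 0.3246, 0.2898),
    (2500, 0.3230, 0.2810),
    (3000, 0.3235, 0.2800),
    (4500, 0.3366, 0.2803),
    (14000, 0.7988, 0.2968),
]

best_by_wer = min(results, key=lambda r: r[2])
best_by_loss = min(results, key=lambda r: r[1])
print(best_by_wer)   # step 3000 has the lowest WER in this subset (0.2800)
print(best_by_loss)  # step 2500 has the lowest validation loss (0.3230)
```

If checkpoint selection matters for your use case, the `transformers` `Trainer` supports this directly via `load_best_model_at_end` together with `metric_for_best_model`.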
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3