wav2vec2-E50

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.4959
  • CER: 89.8179
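
No usage example ships with this card; the snippet below is a minimal inference sketch, assuming the repository bundles a CTC processor (feature extractor plus tokenizer) alongside the weights, and `sample.wav` stands in for any audio file you supply:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Gummybear05/wav2vec2-E50"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLS-R checkpoints expect 16 kHz mono audio; resample on load.
speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical input file

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```

Note that with an evaluation CER near 90, transcriptions from this checkpoint are unlikely to be usable as-is.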

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
  • mixed_precision_training: Native AMP
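
For reference, these settings map onto transformers `TrainingArguments` roughly as follows. This is a hedged reconstruction, not the author's script; the dataset, data collator, and `output_dir` are not published with this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-E50",      # hypothetical; not stated on the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,                      # "Native AMP" mixed precision
)
```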

Training results

Training Loss | Epoch  | Step | Validation Loss | CER
------------- | ------ | ---- | --------------- | --------
28.4775       | 0.2581 |  200 | 5.5054          | 100.0
4.6583        | 0.5161 |  400 | 5.0085          | 91.3690
4.4404        | 0.7742 |  600 | 5.2442          | 92.5264
4.3414        | 1.0323 |  800 | 4.9330          | 90.7109
4.3064        | 1.2903 | 1000 | 4.8601          | 90.7814
4.2331        | 1.5484 | 1200 | 4.9136          | 92.1034
4.2258        | 1.8065 | 1400 | 5.0350          | 90.3702
4.1731        | 2.0645 | 1600 | 5.0754          | 90.3995
4.1586        | 2.3226 | 1800 | 4.7405          | 89.2773
4.1386        | 2.5806 | 2000 | 4.6875          | 90.8578
4.1294        | 2.8387 | 2200 | 4.4959          | 89.8179
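
The CER values above are percentages, so 100.0 means every character was wrong, inserted, or deleted. Below is a minimal sketch of computing CER with the `evaluate` library, the usual companion to `Trainer`; the exact metric code used for this run is not shown on the card:

```python
import evaluate  # requires: pip install evaluate jiwer

cer_metric = evaluate.load("cer")

# Toy example with one prediction/reference pair.
predictions = ["hello wrold"]
references = ["hello world"]

# evaluate returns a fraction; multiply by 100 to match the table above.
cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.4f}")
```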

Framework versions

  • Transformers 4.47.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
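
If you are trying to reproduce these results, it may help to confirm your environment matches the versions above; a quick sanity check:

```python
import datasets
import tokenizers
import torch
import transformers

# Versions listed on the card; newer releases may also work but are untested here.
expected = {
    transformers: "4.47.0",
    torch: "2.5.1+cu121",
    datasets: "3.2.0",
    tokenizers: "0.21.0",
}
for module, version in expected.items():
    print(f"{module.__name__}: installed {module.__version__}, card used {version}")
```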