---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: afro-xlmr-base-tat-MICRO
  results: []
---

# afro-xlmr-base-tat-MICRO

This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3352
- F1: 0.7041
- ROC AUC: 0.8304
- Accuracy: 0.6909

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.2295        | 1.0   | 345  | 0.2586          | 0.5653 | 0.7200  | 0.5818   |
| 0.1740        | 2.0   | 690  | 0.2525          | 0.6211 | 0.7728  | 0.6295   |
| 0.1379        | 3.0   | 1035 | 0.2428          | 0.6566 | 0.7980  | 0.6477   |
| 0.0958        | 4.0   | 1380 | 0.2517          | 0.6689 | 0.7849  | 0.6636   |
| 0.0594        | 5.0   | 1725 | 0.2693          | 0.6667 | 0.8033  | 0.6500   |
| 0.0605        | 6.0   | 2070 | 0.3010          | 0.6637 | 0.8047  | 0.6545   |
| 0.0325        | 7.0   | 2415 | 0.3619          | 0.6569 | 0.8053  | 0.6545   |
| 0.0141        | 8.0   | 2760 | 0.3174          | 0.6944 | 0.8326  | 0.6727   |
| 0.0300        | 9.0   | 3105 | 0.3352          | 0.7041 | 0.8304  | 0.6909   |
| 0.0101        | 10.0  | 3450 | 0.3533          | 0.6766 | 0.8117  | 0.6682   |
| 0.0054        | 11.0  | 3795 | 0.3688          | 0.6950 | 0.8274  | 0.6795   |
| 0.0070        | 12.0  | 4140 | 0.3798          | 0.6983 | 0.8345  | 0.6750   |
| 0.0075        | 13.0  | 4485 | 0.4220          | 0.6791 | 0.8228  | 0.6614   |

The evaluation results reported at the top of this card match the epoch-9 checkpoint (validation loss 0.3352, F1 0.7041), and training stopped after epoch 13 of the scheduled 20 epochs, which is consistent with early stopping combined with selection of the best checkpoint on validation performance.

### Framework versions

- Transformers 4.45.1
- PyTorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
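
## Example usage

A minimal inference sketch. The Hub repo id below is hypothetical (this card does not state the owner namespace), and the task is assumed to be multi-label text classification, suggested by the micro-style F1/ROC AUC/accuracy metric set and the "MICRO" suffix in the model name; neither assumption is confirmed by the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id: replace "<owner>" with the actual namespace,
# which this card does not state.
model_id = "<owner>/afro-xlmr-base-tat-MICRO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a multi-label head: score each label independently with a sigmoid
# and keep the labels whose probability clears a 0.5 threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```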
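
## Reproducing the training setup (sketch)

A sketch of how the hyperparameters listed under "Training hyperparameters" map onto `TrainingArguments` in Transformers 4.45. The dataset, preprocessing, and early-stopping configuration are not documented in this card; the eval/save strategy and best-checkpoint settings below are assumptions inferred from the per-epoch results table.

```python
from transformers import TrainingArguments

# Direct mapping of the listed hyperparameters; values not listed in the card
# (output_dir, eval/save strategy, best-checkpoint selection) are assumptions.
training_args = TrainingArguments(
    output_dir="afro-xlmr-base-tat-MICRO",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",        # inferred: the results table logs one eval per epoch
    save_strategy="epoch",        # assumed
    load_best_model_at_end=True,  # assumed: would explain why epoch-9 metrics are reported
    metric_for_best_model="f1",   # assumed
)
```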
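
## Metric computation (sketch)

The card does not say how F1, ROC AUC, and accuracy were computed. For a multi-label setup with micro averaging (again suggested by "MICRO" in the model name), a typical `compute_metrics` function passed to the `Trainer` might look like the following; this is an assumption about the evaluation setup, not the authors' code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over per-label logits
    preds = (probs >= 0.5).astype(int)  # independent 0.5 threshold per label
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```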