# poisoned-baseline
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.1656
- Accuracy: 0.5940
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
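The hyperparameters above can be expressed as a `transformers.TrainingArguments` configuration. The sketch below is a reconstruction, not the actual training script: the output directory is a placeholder, and the model and dataset (not named in this card) are omitted.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above. Only the listed values are taken from
# this card; "poisoned-baseline" as output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="poisoned-baseline",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```

These arguments would then be passed to a `Trainer` together with the (unknown) model and datasets.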
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
1.1079 | 1.0 | 130 | 1.0555 | 0.5188 |
1.0487 | 2.0 | 260 | 2.1006 | 0.3910 |
1.0065 | 3.0 | 390 | 4.1404 | 0.3008 |
0.9758 | 4.0 | 520 | 2.0769 | 0.5489 |
0.9558 | 5.0 | 650 | 1.4474 | 0.5113 |
0.9116 | 6.0 | 780 | 1.6002 | 0.6466 |
0.8887 | 7.0 | 910 | 2.6059 | 0.5789 |
0.8736 | 8.0 | 1040 | 1.5122 | 0.4662 |
0.8478 | 9.0 | 1170 | 1.7094 | 0.3910 |
0.8845 | 10.0 | 1300 | 2.4116 | 0.5714 |
0.8223 | 11.0 | 1430 | 2.1748 | 0.5263 |
0.8169 | 12.0 | 1560 | 2.7392 | 0.5865 |
0.8053 | 13.0 | 1690 | 1.9351 | 0.4286 |
0.7562 | 14.0 | 1820 | 1.6459 | 0.5263 |
0.7715 | 15.0 | 1950 | 0.9730 | 0.5714 |
0.8031 | 16.0 | 2080 | 1.8118 | 0.5940 |
0.797 | 17.0 | 2210 | 2.0251 | 0.5639 |
0.7489 | 18.0 | 2340 | 1.6305 | 0.4662 |
0.7661 | 19.0 | 2470 | 0.9456 | 0.6165 |
0.6743 | 20.0 | 2600 | 1.1777 | 0.5789 |
0.7162 | 21.0 | 2730 | 1.9899 | 0.5489 |
0.6952 | 22.0 | 2860 | 2.1572 | 0.5188 |
0.6998 | 23.0 | 2990 | 3.6954 | 0.4962 |
0.7048 | 24.0 | 3120 | 1.4983 | 0.5489 |
0.668 | 25.0 | 3250 | 1.4684 | 0.6090 |
0.6539 | 26.0 | 3380 | 1.5490 | 0.6015 |
0.6404 | 27.0 | 3510 | 1.0373 | 0.6090 |
0.6337 | 28.0 | 3640 | 0.8090 | 0.6767 |
0.6422 | 29.0 | 3770 | 2.0051 | 0.5263 |
0.6487 | 30.0 | 3900 | 1.0576 | 0.5714 |
0.5979 | 31.0 | 4030 | 2.6454 | 0.5414 |
0.629 | 32.0 | 4160 | 1.6747 | 0.4962 |
0.6262 | 33.0 | 4290 | 2.3917 | 0.5188 |
0.6286 | 34.0 | 4420 | 1.1679 | 0.5113 |
0.6048 | 35.0 | 4550 | 1.8266 | 0.6391 |
0.603 | 36.0 | 4680 | 0.7241 | 0.6842 |
0.5939 | 37.0 | 4810 | 3.3023 | 0.5338 |
0.5756 | 38.0 | 4940 | 1.7101 | 0.6316 |
0.558 | 39.0 | 5070 | 2.0204 | 0.3835 |
0.5721 | 40.0 | 5200 | 1.5391 | 0.6316 |
0.5838 | 41.0 | 5330 | 2.9189 | 0.4887 |
0.563 | 42.0 | 5460 | 2.1778 | 0.6241 |
0.5788 | 43.0 | 5590 | 3.7351 | 0.4135 |
0.5361 | 44.0 | 5720 | 0.8738 | 0.6541 |
0.5897 | 45.0 | 5850 | 1.7730 | 0.5865 |
0.5299 | 46.0 | 5980 | 1.2070 | 0.6316 |
0.5215 | 47.0 | 6110 | 1.1173 | 0.6316 |
0.5385 | 48.0 | 6240 | 1.5332 | 0.6241 |
0.5397 | 49.0 | 6370 | 2.5272 | 0.5714 |
0.5233 | 50.0 | 6500 | 1.8423 | 0.6165 |
0.5571 | 51.0 | 6630 | 1.4039 | 0.6391 |
0.5377 | 52.0 | 6760 | 1.5045 | 0.5338 |
0.4985 | 53.0 | 6890 | 3.8733 | 0.4962 |
0.476 | 54.0 | 7020 | 1.3020 | 0.5113 |
0.5115 | 55.0 | 7150 | 2.1457 | 0.5865 |
0.5097 | 56.0 | 7280 | 3.9787 | 0.5414 |
0.5148 | 57.0 | 7410 | 0.9982 | 0.6466 |
0.4669 | 58.0 | 7540 | 8.1125 | 0.3308 |
0.5279 | 59.0 | 7670 | 5.7709 | 0.5263 |
0.4673 | 60.0 | 7800 | 4.8501 | 0.5414 |
0.4956 | 61.0 | 7930 | 1.4053 | 0.5940 |
0.4959 | 62.0 | 8060 | 0.9127 | 0.5865 |
0.4881 | 63.0 | 8190 | 5.8092 | 0.5038 |
0.4928 | 64.0 | 8320 | 0.8439 | 0.6090 |
0.4519 | 65.0 | 8450 | 1.4800 | 0.5489 |
0.4833 | 66.0 | 8580 | 2.2109 | 0.5639 |
0.4582 | 67.0 | 8710 | 1.2669 | 0.5940 |
0.4616 | 68.0 | 8840 | 1.0607 | 0.6316 |
0.4803 | 69.0 | 8970 | 2.4072 | 0.4436 |
0.521 | 70.0 | 9100 | 6.1593 | 0.4812 |
0.4558 | 71.0 | 9230 | 1.0987 | 0.6391 |
0.4408 | 72.0 | 9360 | 1.2993 | 0.6466 |
0.4813 | 73.0 | 9490 | 0.9748 | 0.5714 |
0.4842 | 74.0 | 9620 | 4.6767 | 0.4812 |
0.4388 | 75.0 | 9750 | 4.1866 | 0.4662 |
0.4701 | 76.0 | 9880 | 2.3781 | 0.5564 |
0.4382 | 77.0 | 10010 | 1.8863 | 0.6165 |
0.4433 | 78.0 | 10140 | 3.5844 | 0.5789 |
0.4586 | 79.0 | 10270 | 3.0186 | 0.5940 |
0.4295 | 80.0 | 10400 | 3.8892 | 0.4662 |
0.5058 | 81.0 | 10530 | 12.1759 | 0.4962 |
0.435 | 82.0 | 10660 | 5.5538 | 0.6090 |
0.4462 | 83.0 | 10790 | 2.1082 | 0.5865 |
0.4602 | 84.0 | 10920 | 3.4000 | 0.6241 |
0.4575 | 85.0 | 11050 | 9.2871 | 0.5038 |
0.4461 | 86.0 | 11180 | 4.2447 | 0.5113 |
0.5138 | 87.0 | 11310 | 4.6263 | 0.5789 |
0.4321 | 88.0 | 11440 | 3.6092 | 0.4135 |
0.4572 | 89.0 | 11570 | 1.6996 | 0.6391 |
0.4329 | 90.0 | 11700 | 4.1432 | 0.5639 |
0.4427 | 91.0 | 11830 | 2.6578 | 0.4286 |
0.4536 | 92.0 | 11960 | 3.0237 | 0.5489 |
0.4072 | 93.0 | 12090 | 1.6931 | 0.4586 |
0.4225 | 94.0 | 12220 | 2.9963 | 0.4812 |
0.4277 | 95.0 | 12350 | 1.2454 | 0.5865 |
0.4753 | 96.0 | 12480 | 5.3971 | 0.5940 |
0.4367 | 97.0 | 12610 | 3.2193 | 0.6015 |
0.4375 | 98.0 | 12740 | 1.1401 | 0.6541 |
0.4197 | 99.0 | 12870 | 1.6494 | 0.5714 |
0.4517 | 100.0 | 13000 | 3.1656 | 0.5940 |
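As a consistency check on the table above (an illustration, not information from the card): 13,000 total steps over 100 epochs gives 130 optimizer steps per epoch, which together with the train batch size of 8 bounds the training-set size, assuming no gradient accumulation and that the final partial batch is kept.

```python
# Derive the implied training-set size from the step counts above.
total_steps = 13000   # final step in the results table
num_epochs = 100
batch_size = 8        # train_batch_size from the hyperparameters

steps_per_epoch = total_steps // num_epochs  # 130 steps per epoch

# ceil(n_examples / batch_size) == steps_per_epoch implies
# (steps_per_epoch - 1) * batch_size < n_examples <= steps_per_epoch * batch_size
lower = (steps_per_epoch - 1) * batch_size + 1
upper = steps_per_epoch * batch_size
print(steps_per_epoch, lower, upper)  # 130 1033 1040
```

So the training split contained between 1,033 and 1,040 examples under these assumptions.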
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3