# flan-t5-rouge-squad-qg-60
This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1902
- Rouge1: 0.4027
- Rouge2: 0.0936
- Rougel: 0.3317
- Rougelsum: 0.3595
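
The card does not include a usage example. Below is a minimal inference sketch; note that the input format is an assumption, since the prompt template used during fine-tuning is not documented (the model name suggests SQuAD-style question generation):

```python
# Minimal inference sketch for devagonal/flan-t5-rouge-squad-qg-60.
# The plain-context input below is an assumption; the prompt template
# used during fine-tuning is not documented in this card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "devagonal/flan-t5-rouge-squad-qg-60"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```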
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 60
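
For reference, these settings map onto the following `Seq2SeqTrainingArguments`. This is a hedged reconstruction from the list above, not the original training script; the dataset and preprocessing code are unknown:

```python
# Hedged reconstruction of the listed hyperparameters as
# Seq2SeqTrainingArguments. Only values stated in the card are filled in;
# output_dir is a placeholder and everything else is left at defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-rouge-squad-qg-60",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=60,
)
```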
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 21.4934 | 1.0 | 3 | 20.7612 | 0.2041 | 0.1175 | 0.2042 | 0.2044 |
| 10.86 | 2.0 | 6 | 5.7433 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 4.4467 | 3.0 | 9 | 4.3216 | 0.2533 | 0.1158 | 0.2024 | 0.2057 |
| 4.0425 | 4.0 | 12 | 3.8223 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 3.3322 | 5.0 | 15 | 2.8993 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 2.358 | 6.0 | 18 | 1.2730 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 1.607 | 7.0 | 21 | 0.8427 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 1.1152 | 8.0 | 24 | 0.4137 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 0.6532 | 9.0 | 27 | 0.3008 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 0.4956 | 10.0 | 30 | 0.2200 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 0.3895 | 11.0 | 33 | 0.1864 | 0.2564 | 0.1177 | 0.2017 | 0.2048 |
| 0.1369 | 12.0 | 36 | 0.1427 | 0.2925 | 0.2197 | 0.2929 | 0.2931 |
| 0.1125 | 13.0 | 39 | 0.1208 | 0.4958 | 0.1961 | 0.4715 | 0.4904 |
| 0.2757 | 14.0 | 42 | 0.1177 | 0.4958 | 0.1961 | 0.4715 | 0.4904 |
| 0.0923 | 15.0 | 45 | 0.1219 | 0.4958 | 0.1961 | 0.4715 | 0.4904 |
| 0.2875 | 16.0 | 48 | 0.1261 | 0.4958 | 0.1961 | 0.4715 | 0.4904 |
| 0.142 | 17.0 | 51 | 0.1279 | 0.4319 | 0.0738 | 0.3172 | 0.3818 |
| 0.1024 | 18.0 | 54 | 0.1282 | 0.4524 | 0.0978 | 0.3324 | 0.4075 |
| 0.1068 | 19.0 | 57 | 0.1304 | 0.4524 | 0.0978 | 0.3324 | 0.4075 |
| 0.0759 | 20.0 | 60 | 0.1347 | 0.4413 | 0.1335 | 0.4209 | 0.4393 |
| 0.0891 | 21.0 | 63 | 0.1408 | 0.4413 | 0.1335 | 0.4209 | 0.4393 |
| 0.156 | 22.0 | 66 | 0.1454 | 0.4645 | 0.1496 | 0.4014 | 0.4387 |
| 0.185 | 23.0 | 69 | 0.1460 | 0.4645 | 0.1496 | 0.4014 | 0.4387 |
| 0.0652 | 24.0 | 72 | 0.1458 | 0.4641 | 0.1557 | 0.4068 | 0.4469 |
| 0.0362 | 25.0 | 75 | 0.1465 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0978 | 26.0 | 78 | 0.1498 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0334 | 27.0 | 81 | 0.1551 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0295 | 28.0 | 84 | 0.1628 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0225 | 29.0 | 87 | 0.1712 | 0.4319 | 0.0738 | 0.3172 | 0.3818 |
| 0.0139 | 30.0 | 90 | 0.1752 | 0.4319 | 0.0738 | 0.3172 | 0.3818 |
| 0.0318 | 31.0 | 93 | 0.1772 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0462 | 32.0 | 96 | 0.1774 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0451 | 33.0 | 99 | 0.1746 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.025 | 34.0 | 102 | 0.1704 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0358 | 35.0 | 105 | 0.1684 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0444 | 36.0 | 108 | 0.1692 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0449 | 37.0 | 111 | 0.1709 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0287 | 38.0 | 114 | 0.1731 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0291 | 39.0 | 117 | 0.1768 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0208 | 40.0 | 120 | 0.1802 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0678 | 41.0 | 123 | 0.1814 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0298 | 42.0 | 126 | 0.1814 | 0.3294 | 0.0915 | 0.2916 | 0.3113 |
| 0.0194 | 43.0 | 129 | 0.1827 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0239 | 44.0 | 132 | 0.1843 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0243 | 45.0 | 135 | 0.1857 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0385 | 46.0 | 138 | 0.1860 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0291 | 47.0 | 141 | 0.1872 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0216 | 48.0 | 144 | 0.1888 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0371 | 49.0 | 147 | 0.1899 | 0.2740 | 0.0915 | 0.2699 | 0.2729 |
| 0.0225 | 50.0 | 150 | 0.1904 | 0.3473 | 0.0918 | 0.3207 | 0.3470 |
| 0.0323 | 51.0 | 153 | 0.1907 | 0.3473 | 0.0918 | 0.3207 | 0.3470 |
| 0.0287 | 52.0 | 156 | 0.1910 | 0.3473 | 0.0918 | 0.3207 | 0.3470 |
| 0.0415 | 53.0 | 159 | 0.1915 | 0.3473 | 0.0918 | 0.3207 | 0.3470 |
| 0.028 | 54.0 | 162 | 0.1918 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0187 | 55.0 | 165 | 0.1917 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0386 | 56.0 | 168 | 0.1914 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0221 | 57.0 | 171 | 0.1907 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0189 | 58.0 | 174 | 0.1904 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0246 | 59.0 | 177 | 0.1902 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
| 0.0184 | 60.0 | 180 | 0.1902 | 0.4027 | 0.0936 | 0.3317 | 0.3595 |
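
The Rouge1/Rouge2/Rougel/Rougelsum columns follow the naming convention of the `evaluate` library's ROUGE metric. A minimal sketch of that computation, with placeholder predictions and references (the actual evaluation data is not documented):

```python
# Sketch of the ROUGE computation convention used in the table above.
# Predictions and references are placeholders, not the real eval set.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["when was the eiffel tower completed ?"],
    references=["when was the eiffel tower completed?"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```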
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0