End of training
README.md
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4838
- Accuracy: 0.224

## Model description
The following hyperparameters were used during training:
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
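With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from its initial value to zero over the full run. A minimal sketch of that schedule — the base learning rate of 5e-5 is an assumption (it is not listed in this excerpt); the 3750 total steps follow from the results table (15 epochs × 250 steps per epoch):

```python
# Sketch of a linear LR schedule with no warmup. base_lr=5e-5 is an
# assumed default, not taken from this card; total_steps=3750 matches
# the final step count in the training-results table.
def linear_lr(step, base_lr=5e-5, total_steps=3750):
    """Learning rate decaying linearly from base_lr at step 0 to 0.0 at total_steps."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

print(linear_lr(0))     # full base_lr at the start of training
print(linear_lr(3750))  # 0.0 at the end of training
```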
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.0046        | 1.0   | 250  | 1.3356          | 0.008    |
| 1.8352        | 2.0   | 500  | 0.9395          | 0.082    |
| 1.2215        | 3.0   | 750  | 0.7493          | 0.13     |
| 0.9711        | 4.0   | 1000 | 0.6537          | 0.162    |
| 0.8269        | 5.0   | 1250 | 0.5908          | 0.176    |
| 0.741         | 6.0   | 1500 | 0.5548          | 0.19     |
| 0.6896        | 7.0   | 1750 | 0.5377          | 0.194    |
| 0.651         | 8.0   | 2000 | 0.5198          | 0.21     |
| 0.627         | 9.0   | 2250 | 0.5086          | 0.224    |
| 0.606         | 10.0  | 2500 | 0.5006          | 0.228    |
| 0.5849        | 11.0  | 2750 | 0.4948          | 0.232    |
| 0.5733        | 12.0  | 3000 | 0.4928          | 0.23     |
| 0.5607        | 13.0  | 3250 | 0.4851          | 0.224    |
| 0.5599        | 14.0  | 3500 | 0.4842          | 0.222    |
| 0.5584        | 15.0  | 3750 | 0.4838          | 0.224    |
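Note that validation loss improves through the final epoch, while accuracy peaks slightly earlier (0.232 at epoch 11). A small sketch of selecting a "best" checkpoint from the rows above, depending on which metric you track:

```python
# (epoch, validation_loss, accuracy) triples copied from the table above.
results = [
    (1, 1.3356, 0.008), (2, 0.9395, 0.082), (3, 0.7493, 0.13),
    (4, 0.6537, 0.162), (5, 0.5908, 0.176), (6, 0.5548, 0.19),
    (7, 0.5377, 0.194), (8, 0.5198, 0.21), (9, 0.5086, 0.224),
    (10, 0.5006, 0.228), (11, 0.4948, 0.232), (12, 0.4928, 0.23),
    (13, 0.4851, 0.224), (14, 0.4842, 0.222), (15, 0.4838, 0.224),
]

best_by_loss = min(results, key=lambda r: r[1])  # lowest validation loss
best_by_acc = max(results, key=lambda r: r[2])   # highest accuracy
print(best_by_loss)  # epoch 15, loss 0.4838
print(best_by_acc)   # epoch 11, accuracy 0.232
```

Tracking loss favors the last epoch here, while tracking accuracy would have stopped four epochs earlier.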
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0