sql-training-1721369602

This model is a fine-tuned version of google-t5/t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0133

Model description

More information needed

Intended uses & limitations

More information needed
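
Because the training data and prompt format are not documented, any usage example is necessarily a guess. The following is a minimal sketch of loading this checkpoint with the transformers AutoModelForSeq2SeqLM API and generating from a hypothetical text-to-SQL prompt; the prompt style is an assumption based on the model name, not documented behavior:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "howkewlisthat/sql-training-1721369602"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical prompt: the input format used during fine-tuning is not
# documented on this card, so adjust it to match the actual training data.
prompt = "translate English to SQL: list the names of all customers"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```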

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction as training arguments follows the list):

  • learning_rate: 0.005
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
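
The Adam betas and epsilon above are the Trainer defaults. Below is a hedged reconstruction of these settings as Seq2SeqTrainingArguments; output_dir, eval_strategy/eval_steps (evaluation every 500 steps, inferred from the results table below), logging_steps, and bf16 (inferred from the checkpoint's BF16 tensor type) are assumptions, not values reported on the card:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the listed hyperparameters. Values not on the card
# (output_dir, eval_steps, logging_steps, bf16) are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="sql-training-1721369602",
    learning_rate=5e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="steps",  # evaluate every eval_steps
    eval_steps=500,         # inferred from the 500-step cadence in the table
    logging_steps=500,
    bf16=True,              # inferred from the BF16 tensor type
)
```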

Training results

Training Loss | Epoch  | Step  | Validation Loss
------------- | ------ | ----- | ---------------
0.0781        | 0.0674 |   500 | 0.0555
0.0478        | 0.1348 |  1000 | 0.0365
0.0405        | 0.2022 |  1500 | 0.0304
0.0357        | 0.2696 |  2000 | 0.0273
0.0288        | 0.3370 |  2500 | 0.0239
0.0298        | 0.4044 |  3000 | 0.0223
0.0325        | 0.4718 |  3500 | 0.0208
0.0258        | 0.5392 |  4000 | 0.0193
0.0270        | 0.6066 |  4500 | 0.0186
0.0232        | 0.6739 |  5000 | 0.0176
0.0229        | 0.7413 |  5500 | 0.0168
0.0220        | 0.8087 |  6000 | 0.0164
0.0260        | 0.8761 |  6500 | 0.0159
0.0249        | 0.9435 |  7000 | 0.0153
0.0158        | 1.0109 |  7500 | 0.0149
0.0196        | 1.0783 |  8000 | 0.0147
0.0196        | 1.1457 |  8500 | 0.0144
0.0173        | 1.2131 |  9000 | 0.0142
0.0129        | 1.2805 |  9500 | 0.0142
0.0219        | 1.3479 | 10000 | 0.0138
0.0211        | 1.4153 | 10500 | 0.0137
0.0167        | 1.4827 | 11000 | 0.0136
0.0154        | 1.5501 | 11500 | 0.0135
0.0159        | 1.6175 | 12000 | 0.0134
0.0166        | 1.6849 | 12500 | 0.0134
0.0172        | 1.7523 | 13000 | 0.0134
0.0187        | 1.8197 | 13500 | 0.0133
0.0156        | 1.8870 | 14000 | 0.0133
0.0143        | 1.9544 | 14500 | 0.0133
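
The table implies roughly 7,419 optimizer steps per epoch (500 steps ≈ 0.0674 epochs), so about 14,838 steps over the 2 epochs. As a rough illustration, here is a sketch of the linear learning-rate decay this implies; the total-step count is inferred from the table, and the zero-warmup assumption is mine (the card reports no warmup):

```python
# Linear decay from the base learning rate (5e-3) to 0 over training.
# total_steps (~14,838) is inferred from the results table; warmup_steps=0
# is an assumption, since the card does not report a warmup phase.
def linear_lr(step, base_lr=5e-3, total_steps=14838, warmup_steps=0):
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))      # 0.005 at the first step
print(linear_lr(7419))   # 0.0025 halfway through training
print(linear_lr(14838))  # 0.0 at the final step
```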

Framework versions

  • Transformers 4.42.4
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1

Model details

  • Model size: 60.5M parameters
  • Tensor type: BF16
  • Weights format: Safetensors
