loose-default_seed-42_1e-3

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.0164
  • Accuracy: 0.4205

Model description

More information needed. (Repository metadata reports roughly 110M parameters, stored as F32 safetensors.)
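
Assuming the checkpoint follows the standard Transformers layout, a minimal loading sketch is shown below. The card does not name the architecture or task head, so AutoModelForCausalLM is a guess based on the token-level loss and accuracy metrics:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id as published; AutoModelForCausalLM is an assumption,
# since the card does not state the architecture or task head.
repo_id = "qing-yao/loose-default_seed-42_1e-3"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, sequence_length, vocab_size)
```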

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 32000
  • num_epochs: 20.0
  • mixed_precision_training: Native AMP
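
These settings map onto Transformers' TrainingArguments roughly as follows. This is a minimal sketch, assuming the standard Trainer API; output_dir is a placeholder, and the actual training script is not part of this card:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration listed above;
# output_dir is a placeholder, not the authors' actual path.
training_args = TrainingArguments(
    output_dir="loose-default_seed-42_1e-3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 total train batch size
    lr_scheduler_type="linear",
    warmup_steps=32_000,
    num_train_epochs=20.0,
    fp16=True,                       # Native AMP mixed precision
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```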

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.1722        | 0.9999  | 1788  | 4.2550          | 0.3059   |
| 4.045         | 1.9999  | 3576  | 3.7255          | 0.3481   |
| 3.6123        | 2.9998  | 5364  | 3.4704          | 0.3713   |
| 3.3915        | 3.9997  | 7152  | 3.3354          | 0.3837   |
| 3.305         | 4.9997  | 8940  | 3.2559          | 0.3910   |
| 3.1998        | 5.9996  | 10728 | 3.2065          | 0.3959   |
| 3.1365        | 6.9995  | 12516 | 3.1756          | 0.3993   |
| 3.0938        | 8.0     | 14305 | 3.1521          | 0.4017   |
| 3.0612        | 8.9999  | 16093 | 3.1366          | 0.4033   |
| 3.0164        | 9.9999  | 17881 | 3.1261          | 0.4045   |
| 2.9964        | 10.9998 | 19669 | 3.1167          | 0.4056   |
| 2.987         | 11.9997 | 21457 | 3.1126          | 0.4064   |
| 2.9785        | 12.9997 | 23245 | 3.1036          | 0.4072   |
| 2.9733        | 13.9996 | 25033 | 3.1036          | 0.4077   |
| 2.9329        | 14.9995 | 26821 | 3.0979          | 0.4085   |
| 2.9364        | 16.0    | 28610 | 3.0969          | 0.4082   |
| 2.941         | 16.9999 | 30398 | 3.0937          | 0.4087   |
| 2.9454        | 17.9999 | 32186 | 3.0854          | 0.4096   |
| 2.8897        | 18.9998 | 33974 | 3.0384          | 0.4160   |
| 2.7368        | 19.9986 | 35760 | 3.0164          | 0.4205   |
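
If the validation loss is natural-log cross-entropy per token (the usual Transformers convention, though the card does not say), perplexity is exp(loss), so validation perplexity falls from about 70.5 after the first epoch to about 20.4 at the end of training:

```python
import math

# Validation losses taken from the table above; perplexity = exp(loss),
# assuming natural-log cross-entropy per token.
for epoch, loss in [(1, 4.2550), (10, 3.1261), (20, 3.0164)]:
    print(f"epoch {epoch:>2}: loss {loss:.4f} -> perplexity {math.exp(loss):.1f}")
```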

Framework versions

  • Transformers 4.45.1
  • Pytorch 2.4.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.20.0
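
To reproduce results against the same environment, a quick version check (pins taken from the list above):

```python
import datasets, tokenizers, torch, transformers

# Versions reported in this card's "Framework versions" section.
expected = {
    "transformers": "4.45.1",
    "torch": "2.4.1+cu121",
    "datasets": "2.19.1",
    "tokenizers": "0.20.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name in expected:
    flag = "OK" if installed[name] == expected[name] else "MISMATCH"
    print(f"{name}: installed {installed[name]}, card {expected[name]} [{flag}]")
```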