Built with Axolotl

a982fa0c-2d36-4691-bbd0-f840a9e320e7

This model is a fine-tuned version of katuni4ka/tiny-random-falcon-40b on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 10.7108

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch of these settings follows the list):

  • learning_rate: 0.000217
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW (8-bit, bitsandbytes; OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
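
For reference, here is a minimal sketch of how these settings might be expressed as transformers TrainingArguments. The output_dir, evaluation cadence, and anything else marked as an assumption below are not stated in this card.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; values not listed in the card
# (output_dir, eval cadence) are assumptions, not taken from the training run.
training_args = TrainingArguments(
    output_dir="outputs",              # assumption: not reported in the card
    learning_rate=0.000217,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,     # 4 per device x 2 accumulation = total train batch size 8
    optim="adamw_bnb_8bit",            # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
    eval_strategy="steps",             # assumption: evaluation every 50 steps, matching the results table
    eval_steps=50,
)
```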

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0002 | 1    | 11.1202         |
| 21.7305       | 0.0102 | 50   | 10.8604         |
| 21.6274       | 0.0203 | 100  | 10.7998         |
| 21.5963       | 0.0305 | 150  | 10.7719         |
| 21.5277       | 0.0407 | 200  | 10.7509         |
| 21.5177       | 0.0509 | 250  | 10.7346         |
| 21.4602       | 0.0610 | 300  | 10.7238         |
| 21.4922       | 0.0712 | 350  | 10.7165         |
| 21.4603       | 0.0814 | 400  | 10.7124         |
| 21.4682       | 0.0916 | 450  | 10.7109         |
| 21.4652       | 0.1017 | 500  | 10.7108         |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
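
With the framework versions above, the adapter can be loaded onto the base model with PEFT and Transformers. This is a minimal sketch, assuming the adapter weights are published in this repository (lesso17/a982fa0c-2d36-4691-bbd0-f840a9e320e7):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "katuni4ka/tiny-random-falcon-40b"
adapter_id = "lesso17/a982fa0c-2d36-4691-bbd0-f840a9e320e7"  # assumption: this card's repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned PEFT adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```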