Built with Axolotl

6474e220-11e7-4ab5-9116-9f872c91d06b

This model is a fine-tuned version of katuni4ka/tiny-random-falcon-40b; the training dataset is not specified in the generated card. It achieves the following results on the evaluation set:

  • Loss: 10.7143

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.000204
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
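
As a rough illustration, the hyperparameters above map onto a transformers TrainingArguments configuration along the following lines. This is a reconstruction, not the actual Axolotl config; output_dir is a placeholder, and dataset/model wiring is omitted.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run's settings; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./6474e220-11e7-4ab5-9116-9f872c91d06b",
    learning_rate=0.000204,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 per device x 2 steps = total train batch size of 8
    optim="adamw_bnb_8bit",         # 8-bit AdamW from bitsandbytes (ADAMW_BNB)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```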

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
No log        | 0.0002 | 1    | 11.1202
21.7374       | 0.0102 | 50   | 10.8622
21.6329       | 0.0203 | 100  | 10.8032
21.6024       | 0.0305 | 150  | 10.7759
21.5335       | 0.0407 | 200  | 10.7551
21.5260       | 0.0509 | 250  | 10.7386
21.4660       | 0.0610 | 300  | 10.7272
21.4978       | 0.0712 | 350  | 10.7200
21.4682       | 0.0814 | 400  | 10.7159
21.4738       | 0.0916 | 450  | 10.7143
21.4712       | 0.1017 | 500  | 10.7143

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
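
This repository holds a PEFT adapter rather than full model weights, so inference requires loading the base model and attaching the adapter on top. A minimal, untested sketch follows; the repo id is taken from this page, and the prompt and generation settings are purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "katuni4ka/tiny-random-falcon-40b"
adapter_id = "lesso04/6474e220-11e7-4ab5-9116-9f872c91d06b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned PEFT adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative generation call; this tiny random base model will not
# produce meaningful text.
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```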