Built with Axolotl

525a8c0e-2204-4b98-a938-3c444822cc1b

This model is a fine-tuned version of peft-internal-testing/tiny-dummy-qwen2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 11.8875
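
The card ships without a usage example, so here is a minimal loading sketch. It assumes the adapter is hosted under the repo id lesso02/525a8c0e-2204-4b98-a938-3c444822cc1b (the id this card is published under) and is a standard PEFT adapter on top of the base model named above:

```python
# Minimal loading sketch (not part of the original card): attach the
# fine-tuned PEFT adapter to the frozen base model for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "peft-internal-testing/tiny-dummy-qwen2"
adapter_id = "lesso02/525a8c0e-2204-4b98-a938-3c444822cc1b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Load the adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Optional: fold the adapter into the base weights so inference no
# longer depends on PEFT.
# model = model.merge_and_unload()
```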

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto TrainingArguments follows the list):

  • learning_rate: 0.000202
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: 8-bit AdamW (bitsandbytes, OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
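
The run was produced by Axolotl and the original config is not included in this card, so the following is only a sketch of how the listed values would map onto transformers.TrainingArguments; the output path "outputs" is hypothetical:

```python
# Approximate reconstruction of the listed hyperparameters, not the
# original Axolotl configuration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",           # hypothetical output path
    learning_rate=0.000202,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 per device x 2 steps = 8 total
    optim="adamw_bnb_8bit",         # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```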

Training results

Training Loss   Epoch    Step   Validation Loss
No log          0.0008   1      11.9326
11.9037         0.0406   50     11.9001
11.8942         0.0811   100    11.8893
11.8847         0.1217   150    11.8881
11.8799         0.1622   200    11.8884
11.8879         0.2028   250    11.8879
11.8847         0.2433   300    11.8881
11.8855         0.2839   350    11.8874
11.8835         0.3244   400    11.8875
11.8802         0.3650   450    11.8876
11.8847         0.4055   500    11.8875

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1