Adapter-Phi-3-medium-128k-instruct-lora-hrdx-gptq

This model is a LoRA adapter fine-tuned from microsoft/Phi-3-medium-128k-instruct on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3389
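
Assuming this is the standard mean per-token cross-entropy in nats, it corresponds to an evaluation perplexity of roughly exp(1.3389) ≈ 3.8.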

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in code follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
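
These values map one-to-one onto Hugging Face TrainingArguments. Below is a minimal sketch reconstructing that configuration, assuming single-device training (so 32 × 4 accumulation steps yields the total batch size of 128); output_dir is a placeholder, not taken from the original run.

```python
# A minimal sketch of the hyperparameters above as TrainingArguments.
# Only the values listed in this card are taken from the actual run;
# output_dir and anything unlisted are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi3-medium-lora-adapter",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,          # 32 * 4 = total batch of 128
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```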

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 1.4023 | 30   | 2.3964          |
| No log        | 2.8046 | 60   | 2.1247          |
| No log        | 4.2299 | 90   | 1.8968          |
| 2.2305        | 5.6322 | 120  | 1.7274          |
| 2.2305        | 7.0575 | 150  | 1.5368          |
| 2.2305        | 8.4598 | 180  | 1.3934          |
| 1.5516        | 9.8621 | 210  | 1.3389          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.2
  • Pytorch 2.4.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
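
Since this repository ships only an adapter, it must be applied on top of the base model at load time. Below is a minimal sketch using PEFT; the base repo id is taken from this card, but the adapter name suggests it was trained against a GPTQ-quantized variant of the base, so the exact quantized checkpoint used is an assumption you may need to substitute.

```python
# A minimal sketch of loading this LoRA adapter with PEFT.
# The adapter name suggests a GPTQ-quantized base; swap in the exact
# quantized checkpoint (and install its dependencies) for the weights
# to line up with how the adapter was trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-medium-128k-instruct"
adapter_id = "swkong/Adapter-Phi-3-medium-128k-instruct-lora-hrdx-gptq"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",       # requires accelerate
    trust_remote_code=True,  # Phi-3 ships custom model code
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Summarize LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```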
