---
library_name: transformers
license: other
base_model: llava-hf/llava-v1.6-mistral-7b-hf
tags:
  - llama-factory
  - full
  - generated_from_trainer
model-index:
  - name: AA_text_image_to_text
    results: []
---

# AA_text_image_to_text

This model is a fine-tuned version of [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) on the AA_text_image_to_text dataset. It achieves the following results on the evaluation set:

- Loss: 0.4500
- Rewards/chosen: -0.6971
- Rewards/rejected: -4.4006
- Rewards/accuracies: 0.8206
- Rewards/margins: 3.7035
- Logps/rejected: -242.2139
- Logps/chosen: -207.2900
- Logits/rejected: -1.9132
- Logits/chosen: -1.9735
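The metric names above match the logging convention of DPO-style preference optimization (e.g. TRL's `DPOTrainer`); the card itself does not name the training objective, so this reading is an inference. Under that convention, the implicit reward of a response $y$ to a prompt $x$ and the pairwise loss are

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\right],
$$

so `Rewards/margins` is `Rewards/chosen` minus `Rewards/rejected` (here $-0.6971 - (-4.4006) = 3.7035$), and `Rewards/accuracies` is the fraction of evaluation pairs whose chosen response receives the higher implicit reward.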

## Model description

More information needed

## Intended uses & limitations

More information needed
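A minimal inference sketch follows, assuming the checkpoint is published under the hypothetical repo id `htlou/AA_text_image_to_text` and keeps the LLaVA-NeXT architecture and Mistral prompt format of its base model:

```python
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

repo_id = "htlou/AA_text_image_to_text"  # hypothetical repo id; adjust to the actual checkpoint

# Load the processor and weights; fp16 fits the 7B model on a single modern GPU.
processor = LlavaNextProcessor.from_pretrained(repo_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Any RGB image works; the URL below is a placeholder.
image = Image.open(requests.get("https://example.com/image.png", stream=True).raw)

# LLaVA-1.6 Mistral prompt template, inherited from the base model.
prompt = "[INST] <image>\nDescribe this image. [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```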

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto standard `transformers` arguments follows the list):

- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3.0
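The run was launched through LLaMA-Factory, whose config file is not part of this card, so the mapping below onto Hugging Face `TrainingArguments` is an assumption for orientation only.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; the real run used
# LLaMA-Factory, so argument names and defaults here are assumptions.
args = TrainingArguments(
    output_dir="AA_text_image_to_text",
    learning_rate=1e-6,
    per_device_train_batch_size=8,   # 8 per device x 8 GPUs x 4 accumulation = 256 total
    per_device_eval_batch_size=8,    # 8 per device x 8 GPUs = 64 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3.0,
)
```

The effective batch sizes (256 train, 64 eval) come from data parallelism across the 8 GPUs rather than from explicit arguments.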

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.4516        | 0.3623 | 50   | 0.4421          | 0.8113         | -0.8305          | 0.7903             | 1.6418          | -206.5130      | -192.2066    | -1.6310         | -1.7066       |
| 0.3759        | 0.7246 | 100  | 0.4121          | -0.0472        | -2.2762          | 0.8145             | 2.2290          | -220.9696      | -200.7911    | -1.7930         | -1.8524       |
| 0.149         | 1.0870 | 150  | 0.4205          | 0.5835         | -1.8816          | 0.8206             | 2.4651          | -217.0244      | -194.4847    | -1.6746         | -1.7425       |
| 0.1474        | 1.4493 | 200  | 0.4274          | -0.5411        | -3.7374          | 0.8306             | 3.1963          | -235.5818      | -205.7306    | -1.7947         | -1.8599       |
| 0.1268        | 1.8116 | 250  | 0.4333          | -0.0670        | -3.3107          | 0.8206             | 3.2437          | -231.3154      | -200.9896    | -2.0993         | -2.1450       |
| 0.064         | 2.1739 | 300  | 0.4332          | -0.5167        | -4.0958          | 0.8306             | 3.5792          | -239.1665      | -205.4860    | -1.9327         | -1.9909       |
| 0.056         | 2.5362 | 350  | 0.4481          | -0.5224        | -4.1134          | 0.8185             | 3.5910          | -239.3422      | -205.5439    | -1.9163         | -1.9756       |
| 0.0721        | 2.8986 | 400  | 0.4507          | -0.7023        | -4.4082          | 0.8185             | 3.7059          | -242.2901      | -207.3426    | -1.9129         | -1.9731       |

### Framework versions

- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.3