---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
tags:
- xcomet_xl_xxl
- generated_from_trainer
model-index:
- name: cpo-xcomet-xl_xxl-inc7b-10p-shuff-5e-7-full-tiny2
  results: []
---
# cpo-xcomet-xl_xxl-inc7b-10p-shuff-5e-7-full-tiny2
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the Unbabel/TowerAligned-v0.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5858
- Nll Loss: 0.9632
- Logps/best: -93.9459
- Rewards/chosen: -9.3946
- Rewards/rejected: -8.9636
- Rewards/accuracies: 0.4740
- Rewards/margins: -0.4310
- Logps/rejected: -89.6356
- Logps/chosen: -93.9459
- Logits/rejected: -1.8013
- Logits/chosen: -1.9355
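
The card does not yet include a usage snippet. As a minimal, hedged sketch (not part of the original card), the checkpoint can be loaded like any other TinyLlama-style causal LM with `transformers`; the repository id below is assumed from the model name and may need to be replaced with the full Hub path or a local checkpoint directory:

```python
# Hedged example: load the fine-tuned checkpoint as a causal LM and generate text.
# The model id is assumed from the card title; replace it with the actual Hub path
# or a local directory if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cpo-xcomet-xl_xxl-inc7b-10p-shuff-5e-7-full-tiny2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt only; adapt it to the intended (translation-oriented) use case.
prompt = "Translate the following sentence from English to German:\nHello, how are you?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```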
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
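
For context, the sketch below shows how hyperparameters like these would typically be passed to TRL's `CPOTrainer` (the "cpo" in the model name suggests Contrastive Preference Optimization). It is an assumption-laden reconstruction, not the original training script; the exact TRL version, dataset columns, and argument names used for this run are not documented in the card.

```python
# Hedged reconstruction of a CPO training setup using the hyperparameters listed above.
# This is NOT the original training script; TRL API details and dataset column names
# are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# CPOTrainer expects "prompt"/"chosen"/"rejected" columns; the TowerAligned-v0.1
# split name and any required column mapping are assumptions here.
train_dataset = load_dataset("Unbabel/TowerAligned-v0.1", split="train")

args = CPOConfig(
    output_dir="cpo-xcomet-xl_xxl-inc7b-10p-shuff-5e-7-full-tiny2",
    learning_rate=5e-7,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=4,    # eval_batch_size: 4
    gradient_accumulation_steps=16,  # total_train_batch_size: 16
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    seed=42,
)

trainer = CPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # older TRL releases; newer ones use processing_class
)
trainer.train()
```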
### Training results
| Training Loss | Epoch | Step | Validation Loss | Nll Loss | Logps/best | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 3.1908 | 0.5317 | 500 | 2.7136 | 1.0203 | -99.1796 | -9.9180 | -9.3876 | 0.4600 | -0.5304 | -93.8759 | -99.1796 | -1.8188 | -1.9550 |
| 2.7347 | 1.0635 | 1000 | 2.6365 | 0.9846 | -95.9023 | -9.5902 | -9.1174 | 0.4720 | -0.4728 | -91.1739 | -95.9023 | -1.8087 | -1.9438 |
| 2.5644 | 1.5952 | 1500 | 2.6035 | 0.9703 | -94.5918 | -9.4592 | -9.0135 | 0.4680 | -0.4456 | -90.1355 | -94.5918 | -1.8043 | -1.9388 |
| 2.6495 | 2.1270 | 2000 | 2.5883 | 0.9646 | -94.0702 | -9.4070 | -8.9746 | 0.4720 | -0.4324 | -89.7462 | -94.0702 | -1.8018 | -1.9361 |
| 2.4747 | 2.6587 | 2500 | 2.5858 | 0.9632 | -93.9459 | -9.3946 | -8.9636 | 0.4740 | -0.4310 | -89.6356 | -93.9459 | -1.8013 | -1.9355 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1