---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- alignment-handbook
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-magpi-reward-scale-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-dpo-full-magpi-reward-scale-1

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Rewards/chosen: -2.2051
- Rewards/rejected: -74.7846
- Rewards/accuracies: 1.0
- Rewards/margins: 72.5795
- Logps/rejected: -8119.2456
- Logps/chosen: -587.4921
- Logits/rejected: 2.7746
- Logits/chosen: -0.3615
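
Under TRL's DPO implementation (the card is tagged `trl`/`dpo`), these reward figures are the β-scaled log-probability ratios between the policy and the frozen SFT reference, so the large positive margin together with an accuracy of 1.0 indicates the policy separates chosen from rejected responses essentially perfectly on this evaluation set.

A minimal inference sketch is below. The repository ID is an assumption (point it at wherever these weights are actually hosted), and it assumes the tokenizer ships the chat template inherited from the SFT base model:

```python
# Minimal inference sketch. The repo ID below is an assumption -- replace it with the
# actual Hugging Face Hub location of these weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zephyr-7b-dpo-full-magpi-reward-scale-1"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain direct preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```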

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
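
A hedged sketch of an equivalent TRL `DPOTrainer` configuration is shown below. The preference dataset, output directory, mixed-precision setting, and DPO `beta` are not recorded on this card, so they are placeholders or assumptions, and argument names can differ slightly between TRL releases:

```python
# Hedged training sketch mirroring the hyperparameters listed above.
# Intended to be launched with `accelerate launch` across 8 GPUs (distributed_type: multi-GPU).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

train_dataset = load_dataset("org/preference-dataset", split="train")  # placeholder dataset

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-magpi-reward-scale-1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # 8 GPUs x batch 8 x grad accum 2 = 128 effective
    per_device_eval_batch_size=8,    # 8 GPUs x batch 8 = 64 effective
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,                       # assumed; precision is not recorded on the card
    # beta is left at TRL's default because the card does not record it
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # with full fine-tuning, TRL clones the policy as the frozen reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```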

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0135        | 0.1420 | 50   | 0.0099          | -1.3212        | -47.4111         | 0.9960             | 46.0899         | -5381.9028     | -499.1044    | -2.4386         | -2.9291       |
| 0.0109        | 0.2841 | 100  | 0.0025          | -2.5970        | -80.9884         | 1.0                | 78.3913         | -8739.6260     | -626.6862    | -0.0340         | -2.5428       |
| 0.0017        | 0.4261 | 150  | 0.0011          | -1.9943        | -77.1591         | 1.0                | 75.1648         | -8356.6973     | -566.4090    | 2.8304          | -1.6476       |
| 0.002         | 0.5682 | 200  | 0.0008          | -2.1292        | -82.8472         | 1.0                | 80.7180         | -8925.5107     | -579.9021    | 2.8840          | -1.3436       |
| 0.0018        | 0.7102 | 250  | 0.0009          | -2.1417        | -72.3491         | 1.0                | 70.2074         | -7875.6992     | -581.1540    | 2.5447          | -1.1532       |
| 0.0013        | 0.8523 | 300  | 0.0010          | -2.2050        | -73.9724         | 1.0                | 71.7674         | -8038.0322     | -587.4813    | 2.7313          | -0.4348       |
| 0.0058        | 0.9943 | 350  | 0.0010          | -2.2051        | -74.7846         | 1.0                | 72.5795         | -8119.2456     | -587.4921    | 2.7746          | -0.3615       |


### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1