lole25 committed on
Commit
301e147
1 Parent(s): efbc022

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: DUAL-GPO/zephyr-7b-gpo-final-i0
+ model-index:
+ - name: zephyr-7b-gpo-v5-i1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-gpo-v5-i1
+
+ This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-final-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-final-i0) on the None dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 8
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.36.2
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
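
For orientation, the hyperparameters listed in the README above map onto a `transformers.TrainingArguments` configuration roughly as in the sketch below. This is a minimal sketch, not the training script from this commit; the `output_dir` and the `optim` name are assumptions (the card only states "Adam with betas=(0.9,0.999) and epsilon=1e-08").

```python
# Sketch only: the README's hyperparameters expressed as TrainingArguments.
# output_dir and optim are assumptions, not recorded in this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-7b-gpo-v5-i1",   # assumed output directory
    learning_rate=5e-06,
    per_device_train_batch_size=2,      # 2 per device x 2 GPUs x 2 accumulation steps = 8 total
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,
    optim="adamw_torch",                # card lists Adam with betas=(0.9,0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```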
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a7431171454693b468cd0d42b722c0141b704e88724037e3ca4db1ce00aa5684
+ oid sha256:7c0db2b6fe69b4a022c2b0fb5bfccc70aafbe653415bfb237378ae522b14ec53
  size 671150064
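
The updated `adapter_model.safetensors` holds the PEFT adapter weights (~671 MB). A minimal sketch of attaching them to the base model named in the card follows; the adapter repo id `DUAL-GPO/zephyr-7b-gpo-v5-i1` is an assumption inferred from the model name, not confirmed by this commit.

```python
# Sketch: load the base model and attach the adapter weights from this repo.
# The adapter repo id is an assumption inferred from the model name.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "DUAL-GPO/zephyr-7b-gpo-final-i0"
adapter_id = "DUAL-GPO/zephyr-7b-gpo-v5-i1"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # applies adapter_model.safetensors
model = model.merge_and_unload()  # optional: fold the LoRA weights into the base model
```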
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.1303295755757935,
+     "train_runtime": 22510.408,
+     "train_samples": 47227,
+     "train_samples_per_second": 2.098,
+     "train_steps_per_second": 0.262
+ }
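
As a quick sanity check, the derived throughput fields in `all_results.json` follow from the raw counts: samples per second is `train_samples / train_runtime`, and steps per second additionally divides by the total train batch size of 8 listed in the README.

```python
# Consistency check of the reported throughput metrics (values from all_results.json).
train_samples = 47227
train_runtime = 22510.408          # seconds
total_train_batch_size = 8         # from the README hyperparameters

print(round(train_samples / train_runtime, 3))                            # 2.098 samples/s
print(round(train_samples / total_train_batch_size / train_runtime, 3))   # 0.262 steps/s
```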
runs/May07_07-49-30_gpu4-119-5/events.out.tfevents.1715032249.gpu4-119-5.2972564.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b97a163d7a6eadc93c4482fb8c99df53350e73c7dd5eb6d6f8f8d641e8d59d00
- size 258159
+ oid sha256:47e6af1001e231152516b3da8bdd38d4087de8091f7b5c23c7e0d9fc1f6022c4
+ size 264853
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.1303295755757935,
+     "train_runtime": 22510.408,
+     "train_samples": 47227,
+     "train_samples_per_second": 2.098,
+     "train_steps_per_second": 0.262
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff