RyanYr committed on
Commit
599e76e
1 Parent(s): 5b4baa6

End of training

Files changed (2)
  1. README.md +128 -0
  2. generation_config.json +11 -0
README.md ADDED
@@ -0,0 +1,128 @@
+ ---
+ tags:
+ - axolotl
+ - generated_from_trainer
+ model-index:
+ - name: llama31-it-preference_data_v2_800K_wsafety
+   results: []
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+ 
+ axolotl version: `0.4.1`
+ ```yaml
+ adapter: null
+ base_model: /var/lib/condor/execute/slot1/dir_3405246/llama31_pretrain_pad
+ bf16: true
+ dataset_prepared_path: /var/lib/condor/execute/slot1/dir_3405246/prepare
+ dataset_processes: 48
+ datasets:
+ - conversation: llama-3
+   path: RLHFlow/preference_data_v2_80K_wsafety
+   split: train
+   train_on_split: train
+   type: sharegpt.load_ultrachat
+ ddp: null
+ debug: null
+ deepspeed: null
+ early_stopping_patience: null
+ eval_steps: null
+ eval_table_max_new_tokens: null
+ eval_table_size: null
+ flash_attention: false
+ fp16: false
+ fsdp: null
+ fsdp_config: null
+ gradient_accumulation_steps: 16
+ gradient_checkpointing: true
+ group_by_length: false
+ hub_model_id: RyanYr/llama31-it-preference_data_v2_800K_wsafety
+ hub_strategy: every_save
+ learning_rate: 2.0e-06
+ load_in_4bit: false
+ load_in_8bit: false
+ local_rank: null
+ logging_steps: 2
+ lora_model_dir: null
+ lr_scheduler: cosine
+ max_grad_norm: 1.0
+ micro_batch_size: 2
+ model_type: AutoModelForCausalLM
+ num_epochs: 1
+ optimizer: paged_adamw_32bit
+ output_dir: /var/lib/condor/execute/slot1/dir_3405246/output-08-14-2024-10:43
+ pad_to_sequence_len: true
+ sample_packing: true
+ save_safetensors: true
+ save_steps: 100
+ save_strategy: steps
+ save_total_limit: 1
+ sequence_len: 4096
+ special_tokens: null
+ strict: false
+ tokenizer_type: AutoTokenizer
+ train_on_inputs: false
+ trust_remote_code: true
+ val_set_size: 0.0
+ wandb_entity: yyr
+ wandb_log_model: null
+ wandb_name: llama31-8b-it_preference_data_v2_80K_wsafety
+ wandb_project: preference-models
+ wandb_watch: null
+ warmup_ratio: 0.03
+ weight_decay: 0.0
+ xformers_attention: null
+ 
+ ```
+ 
+ </details><br>
+ 
+ # llama31-it-preference_data_v2_800K_wsafety
+ 
+ This model was initialized from a local Llama 3.1 checkpoint (`llama31_pretrain_pad`) and fine-tuned with Axolotl on the `RLHFlow/preference_data_v2_80K_wsafety` dataset.
+ 
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ Per the Axolotl config above, training used the `train` split of `RLHFlow/preference_data_v2_80K_wsafety` in llama-3 conversation format (`sharegpt.load_ultrachat`); no held-out evaluation split was configured (`val_set_size: 0.0`).
+ 
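+ As a minimal sketch of inspecting this data (assuming the dataset is publicly available on the Hugging Face Hub under the path given in the config), the `datasets` library can load it directly:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Path and split taken from the `datasets:` entry in the Axolotl config above.
+ ds = load_dataset("RLHFlow/preference_data_v2_80K_wsafety", split="train")
+ 
+ print(ds)     # number of rows and column names
+ print(ds[0])  # first conversation record
+ ```
+ 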
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (the total batch size follows from the per-device batch size, gradient accumulation, and device count, as sketched after this list):
+ - learning_rate: 2e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 32
+ - num_epochs: 1
+ 
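+ A minimal sketch of how `total_train_batch_size` is derived from the values above (this is the standard relationship for data-parallel training with gradient accumulation):
+ 
+ ```python
+ # Values taken from the hyperparameter list above.
+ micro_batch_size = 2              # per-device train_batch_size
+ gradient_accumulation_steps = 16
+ num_devices = 4
+ 
+ # Effective batch size per optimizer step:
+ total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
+ assert total_train_batch_size == 128
+ ```
+ 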
+ ### Training results
+ 
+ No evaluation results were logged, since no validation split was configured (`val_set_size: 0.0`).
+ 
+ ### Framework versions
+ 
+ - Transformers 4.44.0
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 128000,
+   "do_sample": true,
+   "eos_token_id": [
+     128001,
+     128008,
+     128009
+   ],
+   "transformers_version": "4.44.0"
+ }
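
As a usage sketch (the repo name below is taken from the `hub_model_id` field in the config and is assumed to resolve on the Hugging Face Hub), the model can be loaded with `transformers`; `generate` picks up `do_sample` and the list of `eos_token_id`s from this `generation_config.json` automatically:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RyanYr/llama31-it-preference_data_v2_800K_wsafety"  # hub_model_id from the config

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3 chat template; generation stops on any of the eos_token_ids above.
messages = [{"role": "user", "content": "Summarize the safety considerations for chat models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```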