phunguyen01 committed · verified
Commit 083f30b · 1 Parent(s): c684ec7

End of training

Files changed (3):
1. README.md +110 -0
2. generation_config.json +9 -0
3. pytorch_model.bin +3 -0
README.md ADDED
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
datasets:
- allenai/tulu-3-sft-mixture
model-index:
- name: II-8B-SFT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
wandb_run_id: 2e2444b4-b741-48af-b32c-b5f44f38688f
wandb_project: llm-training-platform
wandb_name: II-Tulu-8B-SFT
datasets:
- path: allenai/tulu-3-sft-mixture
  split: train
  type: chat_template
  field_messages: messages
  message_field_role: role
  message_field_content: content
  roles:
    system:
    - system
    user:
    - user
    assistant:
    - assistant
chat_template: llama3
sequence_len: 2048
base_model: meta-llama/Llama-3.1-8B
output_dir: checkpoints/deb3448a-60ae-4ad8-bdc2-06cce8c43d02
dataset_prepared_path: checkpoints/deb3448a-60ae-4ad8-bdc2-06cce8c43d02/dataset_prepared
flash_attention: true
train_on_inputs: false
pad_to_sequence_len: true
eval_sample_packing: false
push_to_hub: true
bf16: auto
logging_steps: 10
hub_model_id: phunguyen01/II-8B-SFT
learning_rate: 5.0e-06
micro_batch_size: 2
num_epochs: 2
seed: 42
gradient_accumulation_steps: 2
sample_packing: true
val_set_size: 0
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>
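
Training with this config can typically be reproduced with Axolotl's standard entry point (e.g. `accelerate launch -m axolotl.cli.train config.yaml`; the launch command is assumed from the Axolotl docs, not recorded in this commit).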

# II-8B-SFT

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the allenai/tulu-3-sft-mixture dataset.
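
A minimal inference sketch, assuming the standard `transformers` API and the llama3 chat template the model was trained with (only the model id is taken from the card; everything else is illustrative):

```python
# Minimal inference sketch; assumes a GPU with enough memory for an 8B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phunguyen01/II-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the prompt with the llama3 chat template used during fine-tuning.
messages = [{"role": "user", "content": "Summarize what supervised fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling defaults (temperature=0.6, top_p=0.9) come from generation_config.json.
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```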

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 32 (micro-batch 2 × gradient accumulation 2 × 8 devices)
- total_eval_batch_size: 16 (eval batch 2 × 8 devices)
- optimizer: adamw_hf with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

### Training results



### Framework versions

- Transformers 4.47.0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.21.0
generation_config.json ADDED
{
  "_from_model_config": true,
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.47.0"
}
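
These defaults are applied automatically at inference time; a small sketch, assuming the standard `transformers` `GenerationConfig` API, of how to inspect them:

```python
# Load the hub-side generation defaults and confirm the sampling settings above.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("phunguyen01/II-8B-SFT")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9
```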
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:11438d851e83636a10fd2dec1d34abdb5547ed818c9d28d78aadb31086592fea
size 614166
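
Note that this entry is a Git LFS pointer, not the weights themselves: `oid` is the SHA-256 of the stored binary and `size` is its byte count; the actual file is fetched by LFS on checkout.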