QinLiuNLP committed
Commit 77c677b
1 Parent(s): 463c4be

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: llama3
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ datasets:
+ - generator
+ model-index:
+ - name: llama3-meta_material-2epochs-1017
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama3-meta_material-2epochs-1017
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 4
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.32.0
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.13.3
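
The card's hyperparameter list maps onto a standard TRL `SFTTrainer` run. Below is a minimal sketch, not the author's actual script: the commit updates `adapter_model.bin`, so a PEFT/LoRA adapter was trained, but the LoRA rank, target modules, dataset path, and precision are assumptions not recorded in the card.

```python
# Minimal SFT sketch matching the hyperparameters above.
# Assumed (not in the card): dataset file, text column, LoRA config, bf16.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

args = TrainingArguments(
    output_dir="llama3-meta_material-2epochs-1017",
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                      # assumption; typical for Llama-3 fine-tuning
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",      # assumption: a single text column
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed values
)
trainer.train()
```

Launched across 4 GPUs (e.g. `accelerate launch --num_processes 4 sft.py`), the effective batch size is 4 devices × 1 per device = 4, matching `total_train_batch_size` in the card.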
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:70be1660bbbf6f8a1d73087ade88598f9f75c658f9e38cc0ff4e8bab7574f007
+ oid sha256:73efedfa75e963a419337b4b38cffdae2609e39c1044556ebd076d1244907c02
  size 31612298
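
The adapter weights are stored in Git LFS, so the diff above shows only the pointer file: `oid` is the SHA-256 digest of the actual `adapter_model.bin` bytes and `size` is its byte count. A quick way to verify a downloaded copy against the new pointer (the local path is a placeholder):

```python
# Check a downloaded adapter_model.bin against the LFS pointer above.
import hashlib
from pathlib import Path

path = Path("adapter_model.bin")  # placeholder: wherever the file was downloaded
digest = hashlib.sha256(path.read_bytes()).hexdigest()
assert digest == "73efedfa75e963a419337b4b38cffdae2609e39c1044556ebd076d1244907c02"
assert path.stat().st_size == 31612298
```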
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 2.0,
+     "train_loss": 2.5026560972944236,
+     "train_runtime": 41380.2373,
+     "train_samples": 2931,
+     "train_samples_per_second": 0.439,
+     "train_steps_per_second": 0.11
+ }
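
The throughput figures are internally consistent: assuming no gradient accumulation, `train_samples_per_second` should be close to `train_steps_per_second` times the effective batch size of 4 from the README (0.11 × 4 = 0.44 ≈ 0.439). A small sanity check against a local copy of the file:

```python
# Cross-check the reported throughput in all_results.json.
import json

results = json.load(open("all_results.json"))
total_train_batch_size = 4  # from the hyperparameters in the README
approx = results["train_steps_per_second"] * total_train_batch_size  # 0.11 * 4 = 0.44
assert abs(approx - results["train_samples_per_second"]) < 0.01      # 0.439 reported
```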
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 2.0,
+     "train_loss": 2.5026560972944236,
+     "train_runtime": 41380.2373,
+     "train_samples": 2931,
+     "train_samples_per_second": 0.439,
+     "train_steps_per_second": 0.11
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
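
`trainer_state.json` is the standard `transformers` Trainer state file; its `log_history` list holds the metrics logged during training, i.e. the per-step `loss` values behind the final `train_loss` above. A sketch for pulling out the loss curve from a local copy:

```python
# Extract the (step, loss) curve from a downloaded trainer_state.json.
import json

state = json.load(open("trainer_state.json"))
losses = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
for step, loss in losses[:5]:  # first few logged points
    print(step, loss)
```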