MichaelKim committed on
Commit acd910b
1 Parent(s): 3719b19

End of training

README.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ license: cc-by-nc-4.0
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: LDCC/LDCC-SOLAR-10.7B
+ model-index:
+ - name: outputs
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # outputs
+
+ This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on an unknown dataset.
+
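+ Since the card lists `library_name: peft` and the base model above, the adapter is loaded on top of the base model with `peft`. A minimal loading sketch, assuming the adapter is published as `MichaelKim/outputs` (a placeholder; substitute the actual adapter repo or local path) and that an fp16-capable GPU is available:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "LDCC/LDCC-SOLAR-10.7B"
+ adapter_id = "MichaelKim/outputs"  # placeholder: replace with the actual adapter repo or local path
+
+ # Load the base model in fp16 and attach the LoRA adapter weights on top of it.
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
+ model = PeftModel.from_pretrained(base, adapter_id)
+
+ inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
+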
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3.0
+ - mixed_precision_training: Native AMP
+
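+ A minimal `TrainingArguments` sketch consistent with the values above; `output_dir` and the use of `fp16` for Native AMP are assumptions, and the model, dataset, and any remaining arguments are not shown on this card:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="outputs",        # assumption: matches the name on this card
+     learning_rate=1e-4,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=8,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=3.0,
+     fp16=True,                   # assumption: Native AMP via fp16 rather than bf16
+ )
+ ```
+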
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.8.2
+ - Transformers 4.37.2
+ - Pytorch 2.1.0+cu121
+ - Tokenizers 0.15.2
adapter_config.json CHANGED
@@ -19,13 +19,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "gate_proj",
  "k_proj",
  "v_proj",
- "o_proj",
- "q_proj",
  "down_proj",
- "up_proj"
+ "gate_proj",
+ "o_proj",
+ "up_proj",
+ "q_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_rslora": false
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2b8c4e8b86c7a690d1a9d35bf94b3b9dae63b972bc60ddccb8adb05a5d833230
+ oid sha256:814cd7fc987a45d99b0bb5494ac86c466a003de2dfa0d1b8da1eab3fe7ba6b0c
  size 125918320
runs/Feb25_13-10-17_5e0dc09f315d/events.out.tfevents.1708866617.5e0dc09f315d.4182.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8060105a2f42d0211a5c5edae920d44d2b3111d9539df98bd5469c5e52b552c6
+ size 28657
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c3a4a2793948341027d8c40c761f36961f29834f741d57a8092f05e9761aed68
- size 4600
+ oid sha256:a10d726c566d8555ce6f82ca13feba86e9960e1f12e1b55458cf43a4ca94ad61
+ size 4728