tyzhu committed
Commit df2a68a · verified · 1 Parent(s): fa5f35b

Model save

Files changed (1)
  1. README.md +63 -77
README.md CHANGED
@@ -1,26 +1,13 @@
 ---
-license: other
-base_model: Qwen/Qwen1.5-4B
+license: llama2
+base_model: meta-llama/Llama-2-7b-hf
 tags:
 - generated_from_trainer
-datasets:
-- tyzhu/lmind_nq_train6000_eval6489_v1_qa
 metrics:
 - accuracy
 model-index:
 - name: lmind_nq_train6000_eval6489_v1_qa_5e-5_lora2
-  results:
-  - task:
-      name: Causal Language Modeling
-      type: text-generation
-    dataset:
-      name: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-      type: tyzhu/lmind_nq_train6000_eval6489_v1_qa
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.550923076923077
-library_name: peft
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # lmind_nq_train6000_eval6489_v1_qa_5e-5_lora2
 
-This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_qa dataset.
+This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.8184
-- Accuracy: 0.5509
+- Loss: 2.3327
+- Accuracy: 0.5979
 
 ## Model description
 
@@ -51,12 +38,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 1
+- train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 4
-- gradient_accumulation_steps: 8
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 32
 - total_eval_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -66,64 +53,63 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-------:|:----:|:---------------:|:--------:|
-| 1.8726 | 0.9973 | 187 | 1.6331 | 0.5713 |
-| 1.5764 | 2.0 | 375 | 1.6118 | 0.5742 |
-| 1.4154 | 2.9973 | 562 | 1.6421 | 0.5734 |
-| 1.2297 | 4.0 | 750 | 1.7009 | 0.5688 |
-| 1.0779 | 4.9973 | 937 | 1.7911 | 0.5670 |
-| 0.9547 | 6.0 | 1125 | 1.8914 | 0.5631 |
-| 0.8414 | 6.9973 | 1312 | 1.9875 | 0.5607 |
-| 0.7568 | 8.0 | 1500 | 2.0611 | 0.5593 |
-| 0.6477 | 8.9973 | 1687 | 2.1961 | 0.5550 |
-| 0.6137 | 10.0 | 1875 | 2.2457 | 0.5549 |
-| 0.5823 | 10.9973 | 2062 | 2.2968 | 0.5547 |
-| 0.5661 | 12.0 | 2250 | 2.3467 | 0.5552 |
-| 0.5531 | 12.9973 | 2437 | 2.3792 | 0.5549 |
-| 0.5431 | 14.0 | 2625 | 2.4104 | 0.5549 |
-| 0.5342 | 14.9973 | 2812 | 2.4431 | 0.5526 |
-| 0.5316 | 16.0 | 3000 | 2.4522 | 0.5532 |
-| 0.501 | 16.9973 | 3187 | 2.4692 | 0.5533 |
-| 0.5021 | 18.0 | 3375 | 2.4843 | 0.5543 |
-| 0.4992 | 18.9973 | 3562 | 2.5080 | 0.5530 |
-| 0.4994 | 20.0 | 3750 | 2.5266 | 0.5537 |
-| 0.4995 | 20.9973 | 3937 | 2.5564 | 0.5542 |
-| 0.4987 | 22.0 | 4125 | 2.5809 | 0.5538 |
-| 0.4981 | 22.9973 | 4312 | 2.5703 | 0.5537 |
-| 0.4962 | 24.0 | 4500 | 2.5532 | 0.5549 |
-| 0.4754 | 24.9973 | 4687 | 2.6129 | 0.5534 |
-| 0.4764 | 26.0 | 4875 | 2.6007 | 0.5545 |
-| 0.4785 | 26.9973 | 5062 | 2.6065 | 0.5534 |
-| 0.4788 | 28.0 | 5250 | 2.6598 | 0.5519 |
-| 0.4789 | 28.9973 | 5437 | 2.6470 | 0.5541 |
-| 0.4812 | 30.0 | 5625 | 2.6292 | 0.5532 |
-| 0.4801 | 30.9973 | 5812 | 2.6549 | 0.5533 |
-| 0.4818 | 32.0 | 6000 | 2.6578 | 0.5529 |
-| 0.4608 | 32.9973 | 6187 | 2.6514 | 0.5534 |
-| 0.4639 | 34.0 | 6375 | 2.6388 | 0.5541 |
-| 0.4654 | 34.9973 | 6562 | 2.6734 | 0.5525 |
-| 0.4671 | 36.0 | 6750 | 2.6721 | 0.5524 |
-| 0.4692 | 36.9973 | 6937 | 2.6831 | 0.5532 |
-| 0.4699 | 38.0 | 7125 | 2.6849 | 0.5530 |
-| 0.4699 | 38.9973 | 7312 | 2.6940 | 0.5514 |
-| 0.4709 | 40.0 | 7500 | 2.7195 | 0.5534 |
-| 0.4539 | 40.9973 | 7687 | 2.7149 | 0.5512 |
-| 0.4539 | 42.0 | 7875 | 2.7348 | 0.5525 |
-| 0.4582 | 42.9973 | 8062 | 2.7313 | 0.5531 |
-| 0.4598 | 44.0 | 8250 | 2.7686 | 0.5523 |
-| 0.4602 | 44.9973 | 8437 | 2.7597 | 0.5517 |
-| 0.4627 | 46.0 | 8625 | 2.7730 | 0.5527 |
-| 0.463 | 46.9973 | 8812 | 2.7312 | 0.5522 |
-| 0.464 | 48.0 | 9000 | 2.7632 | 0.5517 |
-| 0.448 | 48.9973 | 9187 | 2.7600 | 0.5509 |
-| 0.4497 | 49.8667 | 9350 | 2.8184 | 0.5509 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 1.7923 | 1.0 | 187 | 1.2805 | 0.6128 |
+| 1.2488 | 2.0 | 375 | 1.2677 | 0.6168 |
+| 1.1097 | 3.0 | 562 | 1.2943 | 0.6162 |
+| 0.9244 | 4.0 | 750 | 1.3598 | 0.6126 |
+| 0.7924 | 5.0 | 937 | 1.4714 | 0.6089 |
+| 0.6864 | 6.0 | 1125 | 1.5761 | 0.6045 |
+| 0.6101 | 7.0 | 1312 | 1.6554 | 0.6029 |
+| 0.562 | 8.0 | 1500 | 1.7485 | 0.6011 |
+| 0.5015 | 9.0 | 1687 | 1.8067 | 0.5998 |
+| 0.4855 | 10.0 | 1875 | 1.8643 | 0.5996 |
+| 0.4736 | 11.0 | 2062 | 1.9771 | 0.5966 |
+| 0.465 | 12.0 | 2250 | 1.9610 | 0.5989 |
+| 0.4603 | 13.0 | 2437 | 1.9498 | 0.5982 |
+| 0.4537 | 14.0 | 2625 | 2.0510 | 0.5979 |
+| 0.4489 | 15.0 | 2812 | 2.0862 | 0.5996 |
+| 0.4488 | 16.0 | 3000 | 2.0370 | 0.5995 |
+| 0.4238 | 17.0 | 3187 | 2.0638 | 0.5990 |
+| 0.4245 | 18.0 | 3375 | 2.0635 | 0.6001 |
+| 0.4241 | 19.0 | 3562 | 2.1451 | 0.5988 |
+| 0.4236 | 20.0 | 3750 | 2.1509 | 0.6003 |
+| 0.4241 | 21.0 | 3937 | 2.1745 | 0.5987 |
+| 0.4239 | 22.0 | 4125 | 2.1752 | 0.5991 |
+| 0.4245 | 23.0 | 4312 | 2.1659 | 0.5983 |
+| 0.4229 | 24.0 | 4500 | 2.2126 | 0.5981 |
+| 0.4059 | 25.0 | 4687 | 2.1568 | 0.5997 |
+| 0.4064 | 26.0 | 4875 | 2.1777 | 0.5979 |
+| 0.4089 | 27.0 | 5062 | 2.2200 | 0.5979 |
+| 0.4099 | 28.0 | 5250 | 2.2412 | 0.5976 |
+| 0.4103 | 29.0 | 5437 | 2.2093 | 0.5983 |
+| 0.4112 | 30.0 | 5625 | 2.2145 | 0.6002 |
+| 0.4113 | 31.0 | 5812 | 2.2514 | 0.5990 |
+| 0.4124 | 32.0 | 6000 | 2.3170 | 0.5979 |
+| 0.3961 | 33.0 | 6187 | 2.2557 | 0.5978 |
+| 0.4002 | 34.0 | 6375 | 2.2739 | 0.5979 |
+| 0.3998 | 35.0 | 6562 | 2.2498 | 0.5976 |
+| 0.4022 | 36.0 | 6750 | 2.3118 | 0.5972 |
+| 0.4038 | 37.0 | 6937 | 2.3259 | 0.5970 |
+| 0.404 | 38.0 | 7125 | 2.3276 | 0.5973 |
+| 0.4072 | 39.0 | 7312 | 2.2854 | 0.5994 |
+| 0.4077 | 40.0 | 7500 | 2.3036 | 0.5982 |
+| 0.3943 | 41.0 | 7687 | 2.3361 | 0.5987 |
+| 0.3939 | 42.0 | 7875 | 2.2148 | 0.5995 |
+| 0.3977 | 43.0 | 8062 | 2.3393 | 0.5985 |
+| 0.3988 | 44.0 | 8250 | 2.2875 | 0.5983 |
+| 0.402 | 45.0 | 8437 | 2.2981 | 0.5995 |
+| 0.4002 | 46.0 | 8625 | 2.3163 | 0.5981 |
+| 0.4004 | 47.0 | 8812 | 2.3085 | 0.5987 |
+| 0.402 | 48.0 | 9000 | 2.3341 | 0.5977 |
+| 0.3895 | 49.0 | 9187 | 2.2953 | 0.5984 |
+| 0.3927 | 49.87 | 9350 | 2.3327 | 0.5979 |
 
 
 ### Framework versions
 
-- PEFT 0.5.0
-- Transformers 4.41.1
+- Transformers 4.34.0
 - Pytorch 2.1.0+cu121
-- Datasets 2.19.1
-- Tokenizers 0.19.1
+- Datasets 2.18.0
+- Tokenizers 0.14.1
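
Note on the hyperparameter change: the effective batch size is unchanged across the two revisions, since total_train_batch_size = train_batch_size × num_devices × gradient_accumulation_steps gives 1 × 4 × 8 = 32 before this commit and 2 × 4 × 4 = 32 after it.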
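
The "lora2" suffix in the model name and the PEFT entry in the pre-commit framework list suggest this checkpoint is a LoRA adapter on top of the base model. Below is a minimal loading sketch under that assumption; the adapter repo id is inferred from the model-index name, and the QA-style prompt is purely illustrative. Neither is confirmed by the card.

```python
# Minimal sketch, assuming this repo hosts a PEFT LoRA adapter for Llama-2-7b-hf.
# The adapter repo id is inferred from the card's model-index name (an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "tyzhu/lmind_nq_train6000_eval6489_v1_qa_5e-5_lora2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# Illustrative QA-style prompt; the actual training prompt format is not documented here.
inputs = tokenizer("Question: who wrote Hamlet? Answer:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```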