Files changed (2)
  1. README.md +0 -276
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -4,259 +4,6 @@ library_name: peft
  ## Training procedure


- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: bfloat16
-
  The following `bitsandbytes` quantization config was used during training:
  - load_in_8bit: False
  - load_in_4bit: True
@@ -269,28 +16,5 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_compute_dtype: bfloat16
  ### Framework versions

- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0
- - PEFT 0.4.0.dev0

  - PEFT 0.4.0.dev0
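
The single retained config block maps one-to-one onto `transformers`' `BitsAndBytesConfig`. A minimal sketch of reloading the base model with these settings and attaching this adapter via PEFT; the base model id and adapter path are placeholders, since the card does not name them:

```python
# Minimal sketch: recreate the 4-bit NF4 quantization recorded in this card,
# then attach the LoRA adapter with PEFT. Placeholder ids are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
    llm_int8_threshold=6.0,                 # int8 fields are inert in 4-bit mode
)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: the card does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/this-adapter")  # placeholder path
```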
 
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:54a4988f590c3f8b030c392a0ca0cf51a66a38663e31b81b4641c640ed9e29ff
+ oid sha256:d182a37a225c2a9fec266162ce449dff533676f12bf7db0d2a15ed281274e96c
  size 18898161
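
The `oid` in a Git LFS pointer is the SHA-256 of the file contents, so the updated weights can be checked against the pointer after download. A minimal sketch, assuming `adapter_model.bin` sits in the current directory:

```python
# Minimal sketch: verify a downloaded adapter_model.bin against the
# LFS pointer above (the oid is the SHA-256 of the file contents).
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

expected = "d182a37a225c2a9fec266162ce449dff533676f12bf7db0d2a15ed281274e96c"
assert sha256_of("adapter_model.bin") == expected, "checksum mismatch"
```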