Commit 8e3115a · 1 Parent(s): 1be615f
TheBloke committed: Initial GPTQ model commit

Files changed (1): README.md (+288, -0)

---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# OpenAccess AI Collective's Minotaur 13B GPTQ

These files are GPTQ 4-bit model files for [OpenAccess AI Collective's Minotaur 13B](https://huggingface.co/openaccess-ai-collective/minotaur-13b).

It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/minotaur-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-13B-GGML)
* [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-13b)

## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/minotaur-13B-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `minotaur-13B-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! (If you'd rather download the files outside the UI, see the sketch below.)
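
If you prefer to script the download instead of using the UI, here is a minimal sketch using the `huggingface_hub` library (my suggestion, not part of the original instructions; any Git-LFS-aware download works too):

```python
# Minimal sketch: download the whole model repo with huggingface_hub.
# Assumes `pip install huggingface_hub` has been run.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TheBloke/minotaur-13B-GPTQ")
print(f"Model files downloaded to: {local_dir}")
```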

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/minotaur-13B-GPTQ"
model_basename = "minotaur-13B-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Build the prompt before generating with it below.
# (Note: the original Minotaur card documents USER:/ASSISTANT: chat prompts.)
prompt = "Tell me about AI"
prompt_template = f'''### Human: {prompt}
### Assistant:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
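
One thing worth knowing about `generate` with causal LMs: the returned sequence includes the prompt tokens. A small follow-on sketch, continuing from the example above, to print only the completion:

```python
# Continuing from the example above: slice off the prompt tokens so only
# the newly generated text is decoded.
completion = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(completion)
```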

## Provided files

**minotaur-13B-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.

* `minotaur-13B-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False. (These can be confirmed from `quantize_config.json`, as sketched below.)
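
As a quick sanity check, you can read these parameters straight from the `quantize_config.json` shipped with the repo. A minimal sketch, assuming the file sits in your current directory and uses AutoGPTQ's usual key names (an assumption on my part):

```python
import json

# Illustrative only: key names follow AutoGPTQ's quantize_config.json convention.
with open("quantize_config.json") as f:
    cfg = json.load(f)

print(cfg.get("bits"))        # expected: 4
print(cfg.get("group_size"))  # expected: 128
print(cfg.get("desc_act"))    # expected: False (no act-order)
```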

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: OpenAccess AI Collective's Minotaur 13B

---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals

inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# Minotaur 13B

Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])

# Prompts

Chat-style prompts only, using `USER:` and `ASSISTANT:` (see the sketch below).
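
A minimal formatting sketch (the helper name is mine, not part of the model card):

```python
def format_prompt(user_message: str) -> str:
    # Minotaur expects plain chat-style turns; the model's reply is
    # generated after the trailing "ASSISTANT:" tag.
    return f"USER: {user_message}\nASSISTANT:"

print(format_prompt("Write a haiku about a dove at night."))
```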

# Training Datasets

The Minotaur 13B model is fine-tuned on the following openly available datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.

# Demo

An HF demo in Spaces is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/5zji06u6

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6 x A100 80GB
- 1 epoch, taking approximately 4.5 hours

## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF, or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA-13B; please refer to that model card's Limitations section for relevant information.

## Benchmarks

hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b), limit: None, provide_description: False, num_fewshot: 0, batch_size: None. A reproduction sketch follows the table.

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4480|± |0.0145|
| | |acc_norm|0.4744|± |0.0146|
|arc_easy | 0|acc |0.7584|± |0.0088|
| | |acc_norm|0.7163|± |0.0092|
|boolq | 1|acc |0.8336|± |0.0065|
|hellaswag | 0|acc |0.5954|± |0.0049|
| | |acc_norm|0.7804|± |0.0041|
|openbookqa | 0|acc |0.3320|± |0.0211|
| | |acc_norm|0.4540|± |0.0223|
|piqa | 0|acc |0.7933|± |0.0094|
| | |acc_norm|0.7922|± |0.0095|
|winogrande | 0|acc |0.7182|± |0.0126|
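
A reproduction sketch using EleutherAI's lm-evaluation-harness Python API (version-dependent; the exact signature may differ in newer releases, so treat this as an assumption rather than the authors' exact invocation):

```python
# Sketch: re-run a subset of the benchmarks above with lm-evaluation-harness.
# Assumes `pip install lm-eval` and a GPU with enough memory for a 13B model.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=openaccess-ai-collective/minotaur-13b",
    tasks=["arc_challenge", "arc_easy", "hellaswag"],
    num_fewshot=0,
)
print(results["results"])
```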

## Examples - results may vary based on temperature and other settings

```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT: You still have 3 apples. Eating pears does not affect the number of apples you have.
```

```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT: Three words that start with "ex" are "exact", "exactness", and "exactitude". 33 + 77 equals 110.
```

```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: Solve for x: 5x = 3x + 10
ASSISTANT: To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps:

1. Subtract 3x from both sides of the equation:
5x - 3x = 3x + 10 - 3x
2x = 10

2. Divide both sides of the equation by 2:
2x / 2 = 10 / 2
x = 5

So, the solution is x = 5.
```

```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: Write a haiku about a dove at night.
ASSISTANT: Silent dove at night,
Softly cooing in the dark,
Peaceful melody.
```