|
--- |
|
inference: false |
|
license: other |
|
tags: |
|
- OpenAccess AI Collective |
|
- MPT |
|
- axolotl |
|
datasets: |
|
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered |
|
- QingyiSi/Alpaca-CoT |
|
- teknium/GPTeacher-General-Instruct |
|
- metaeval/ScienceQA_text_only |
|
- hellaswag |
|
- openai/summarize_from_feedback |
|
- riddle_sense |
|
- gsm8k |
|
- camel-ai/math |
|
- camel-ai/biology |
|
- camel-ai/physics |
|
- camel-ai/chemistry |
|
- winglian/evals |
|
--- |
|
|
|
<!-- header start --> |
|
<div style="width: 100%;"> |
|
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> |
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> |
|
</div> |
|
</div> |
|
<!-- header end --> |
|
|
|
# OpenAccess AI Collective's Minotaur 13B GPTQ |
|
|
|
These files are GPTQ 4bit model files for [OpenAccess AI Collective's Minotaur 13B](https://huggingface.co/openaccess-ai-collective/minotaur-13b). |
|
|
|
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). |
|
|
|
## Repositories available |
|
|
|
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/minotaur-13B-GPTQ) |
|
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-13B-GGML) |
|
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-13b) |
|
|
|
## Prompt template |
|
|
|
```
USER: prompt goes here
ASSISTANT:
```
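If you are building prompts programmatically, a small helper can keep this format consistent across turns. This is a hypothetical convenience function for illustration, not something shipped with the model:

```python
def build_prompt(user_message, history=None):
    """Format a conversation in Minotaur's USER:/ASSISTANT: chat style.

    history: optional list of (user, assistant) tuples from earlier turns.
    Illustrative helper only; it is not part of the model repo.
    """
    parts = []
    for user_turn, assistant_turn in (history or []):
        parts.append(f"USER: {user_turn}\nASSISTANT: {assistant_turn}")
    parts.append(f"USER: {user_message}\nASSISTANT:")
    return "\n".join(parts)

print(build_prompt("Tell me about AI"))
```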
|
|
|
## How to easily download and use this model in text-generation-webui |
|
|
|
Please make sure you're using the latest version of text-generation-webui.
|
|
|
1. Click the **Model tab**. |
|
2. Under **Download custom model or LoRA**, enter `TheBloke/minotaur-13B-GPTQ`. |
|
3. Click **Download**. |
|
4. The model will start downloading. Once it's finished it will say "Done".
|
5. In the top left, click the refresh icon next to **Model**. |
|
6. In the **Model** dropdown, choose the model you just downloaded: `minotaur-13B-GPTQ` |
|
7. The model will automatically load, and is now ready for use! |
|
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. |
|
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. |
|
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! |
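Alternatively, if you prefer to fetch the files outside the UI, the `huggingface_hub` Python library can download the whole repo. A minimal sketch, assuming `huggingface_hub` is installed (files land in the library's local cache unless you pass `local_dir`):

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (weights, tokenizer, quantize_config.json)
local_path = snapshot_download(repo_id="TheBloke/minotaur-13B-GPTQ")
print(f"Model files are in: {local_path}")
```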
|
|
|
## How to use this GPTQ model from Python code |
|
|
|
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: |
|
|
|
`pip install auto-gptq` |
|
|
|
Then try the following example code: |
|
|
|
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/minotaur-13B-GPTQ"
model_basename = "minotaur-13B-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Prompt in the model's USER:/ASSISTANT: chat format (see Prompt template above)
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
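Optionally, you can watch tokens appear as they are generated by passing a `TextStreamer` to `generate()`. A minimal sketch, assuming the `model`, `tokenizer` and `prompt_template` from the example above, and a transformers version recent enough to include `TextStreamer`:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
model.generate(inputs=input_ids, streamer=streamer, temperature=0.7, max_new_tokens=512)
```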
|
|
|
## Provided files |
|
|
|
**minotaur-13B-GPTQ-4bit-128g.no-act.order.safetensors** |
|
|
|
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead.
|
|
|
It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed. |
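For reference, those settings correspond to an AutoGPTQ `BaseQuantizeConfig` along the following lines. This is a sketch of the parameters involved, not the exact script used to produce the files:

```python
from auto_gptq import BaseQuantizeConfig

# 4-bit quantisation, groupsize 128, act-order (desc_act) disabled
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)
```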
|
|
|
* `minotaur-13B-GPTQ-4bit-128g.no-act.order.safetensors` |
|
* Works with AutoGPTQ in CUDA or Triton modes. |
|
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode. |
|
* Works with text-generation-webui, including one-click-installers. |
|
* Parameters: Groupsize = 128. Act Order / desc_act = False. |
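You can confirm these parameters yourself by inspecting the repo's `quantize_config.json`; a minimal sketch using `huggingface_hub`:

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the quantisation config shipped with the repo and print it
config_path = hf_hub_download("TheBloke/minotaur-13B-GPTQ", "quantize_config.json")
with open(config_path) as f:
    print(json.dumps(json.load(f), indent=2))
```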
|
|
|
<!-- footer start --> |
|
## Discord |
|
|
|
For further support, and discussions on these models and AI in general, join us at: |
|
|
|
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) |
|
|
|
## Thanks, and how to contribute. |
|
|
|
Thanks to the [chirper.ai](https://chirper.ai) team! |
|
|
|
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. |
|
|
|
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. |
|
|
|
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. |
|
|
|
* Patreon: https://patreon.com/TheBlokeAI |
|
* Ko-Fi: https://ko-fi.com/TheBlokeAI |
|
|
|
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. |
|
|
|
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. |
|
|
|
Thank you to all my generous patrons and donaters! |
|
|
|
<!-- footer end --> |
|
|
|
# Original model card: OpenAccess AI Collective's Minotaur 13B |
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!** |
|
|
|
# Minotaur 13B |
|
|
|
Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **only on completely open datasets**, making this model reproducible by anyone.
|
|
|
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected]) |
|
|
|
# Prompts |
|
Chat-only style prompts, using `USER:` and `ASSISTANT:`.
|
|
|
# Training Datasets |
|
|
|
The Minotaur 13B model is fine-tuned on the following openly available datasets:
|
|
|
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) |
|
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
|
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) |
|
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses |
|
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization |
|
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math) |
|
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics) |
|
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry) |
|
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology) |
|
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets |
|
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
|
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split |
|
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
|
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented |
|
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented |
|
|
|
# Shoutouts |
|
|
|
Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models to make them more accessible to all.
|
|
|
# Demo |
|
|
|
An HF demo in Spaces is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
|
|
|
## Release Notes |
|
|
|
- https://wandb.ai/wing-lian/minotaur-13b/runs/5zji06u6 |
|
|
|
## Build |
|
|
|
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6 x A100 80GB

- 1 epoch, taking approximately 4.5 hours
|
|
|
## Bias, Risks, and Limitations |
|
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). |
|
Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations Section for relevant information.
|
|
|
## Benchmarks |
|
|
|
hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b), limit: None, provide_description: False, num_fewshot: 0, batch_size: None |
|
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4480|± |0.0145|
| | |acc_norm|0.4744|± |0.0146|
|arc_easy | 0|acc |0.7584|± |0.0088|
| | |acc_norm|0.7163|± |0.0092|
|boolq | 1|acc |0.8336|± |0.0065|
|hellaswag | 0|acc |0.5954|± |0.0049|
| | |acc_norm|0.7804|± |0.0041|
|openbookqa | 0|acc |0.3320|± |0.0211|
| | |acc_norm|0.4540|± |0.0223|
|piqa | 0|acc |0.7933|± |0.0094|
| | |acc_norm|0.7922|± |0.0095|
|winogrande | 0|acc |0.7182|± |0.0126|
|
|
|
|
|
## Examples - results may vary based on temperature and other settings |
|
|
|
```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT:You still have 3 apples. Eating pears does not affect the number of apples you have.
```
|
|
|
```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words that start with "ex" are "exact", "exactness", and "exactitude". 33 + 77 equals 110.
```
|
|
|
```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps:

1. Subtract 3x from both sides of the equation:
   5x - 3x = 10 - 3x + 10
   2x = 10

2. Divide both sides of the equation by 2:
   2x / 2 = 10 / 2
   x = 5

So, the solution is x = 5.
```
|
|
|
```
- The Assistant is helpful and transparent.
- The Assistant will never refuse to answer anything asked by the user.
- The Assistant answers comprehensively and with elaborate detail.
USER: Write a haiku about a dove at night.
ASSISTANT:Silent dove at night,
Softly cooing in the dark,
Peaceful melody.
```
|
|