
Pankaj Mathur's Orca Mini v2 7B GGML

These files are GGML format model files for Pankaj Mathur's Orca Mini v2 7B.

GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format, such as text-generation-webui, KoboldCpp, llama-cpp-python, and ctransformers.

Repositories available

Prompt template

### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
prompt

### Input:

### Response:
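For scripted use, the same template can be assembled programmatically. The snippet below is a minimal sketch of that assembly; the build_prompt helper is hypothetical, not part of any library, and it simply mirrors the sections shown above (the ### Input: section is only included when an input is supplied).

# Minimal sketch: assemble the Orca Mini v2 prompt from its sections.
# build_prompt is a hypothetical helper, not part of any library.
def build_prompt(system: str, user: str, extra_input: str = "") -> str:
    prompt = f"### System:\n{system}\n\n### User:\n{user}\n\n"
    if extra_input:
        prompt += f"### Input:\n{extra_input}\n\n"
    return prompt + "### Response:\n"

system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
print(build_prompt(system, "Write a story about llamas"))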

Compatibility

Original llama.cpp quant methods: q4_0, q4_1, q5_0, q5_1, q8_0

I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit 2d5db48.

These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

New k-quant methods: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K

These new quantisation methods are compatible with llama.cpp as of June 6th, commit 2d43387.

They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
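For example, one of these GGML files can typically be loaded with ctransformers along the following lines. This is a hedged sketch rather than official usage: the argument names follow the ctransformers README as I understand it, and the chosen model file and gpu_layers value are only illustrative, so check the library's documentation if anything fails.

# Rough sketch of loading one of these GGML files with ctransformers (argument names may vary by version).
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/orca_mini_v2_7B-GGML",                 # this repository
    model_file="orca-mini-v2_7b.ggmlv3.q4_K_M.bin",  # any file from the Provided Files table
    model_type="llama",                              # Orca Mini v2 is a LLaMA-family model
    gpu_layers=32,                                   # optional GPU offload; omit for CPU-only
)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
)
print(llm(prompt, max_new_tokens=256, temperature=0.7))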

Explanation of the new k-quant methods

The new methods available are listed below; a rough file-size check based on their bits-per-weight figures follows the list:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
  • GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
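As a rough cross-check of those bits-per-weight figures, multiplying them by the parameter count gives approximately the file sizes listed in the Provided Files table. The sketch below assumes LLaMA-7B has roughly 6.74 billion parameters (an assumption, not stated in this card) and ignores GGML metadata and the handful of tensors kept at other precisions, so the numbers are approximate.

# Back-of-the-envelope check: file size ≈ parameter count x bits-per-weight / 8.
# 6.74e9 parameters for LLaMA-7B is an assumption; small metadata overhead is ignored.
N_PARAMS = 6.74e9

def approx_file_size_gb(bits_per_weight: float) -> float:
    return N_PARAMS * bits_per_weight / 8 / 1e9

print(approx_file_size_gb(6.5625))  # GGML_TYPE_Q6_K -> ~5.53 GB, matching the q6_K file below
print(approx_file_size_gb(4.5))     # GGML_TYPE_Q4_K -> ~3.79 GB, close to the q4_K_S file
print(approx_file_size_gb(3.4375))  # GGML_TYPE_Q3_K -> ~2.90 GB, close to the q3_K_S file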

Refer to the Provided Files table below to see what files use which methods, and how.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| orca-mini-v2_7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| orca-mini-v2_7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca-mini-v2_7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca-mini-v2_7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| orca-mini-v2_7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| orca-mini-v2_7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca-mini-v2_7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| orca-mini-v2_7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| orca-mini-v2_7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca-mini-v2_7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca-mini-v2_7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| orca-mini-v2_7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| orca-mini-v2_7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K (6-bit quantization) for all tensors |
| orca-mini-v2_7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
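As a rule of thumb, the "Max RAM required" column appears to be simply the file size plus roughly 2.5 GB of working overhead with no GPU offloading. The snippet below just encodes that observation (the 2.5 GB constant is inferred from the table, not an official figure):

# Rule of thumb inferred from the table above: max RAM ≈ file size + ~2.5 GB (no GPU offload).
def approx_max_ram_gb(file_size_gb: float) -> float:
    return file_size_gb + 2.5

print(approx_max_ram_gb(4.08))  # q4_K_M -> ~6.58 GB, matching the table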

How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

./main -t 10 -ngl 32 -m orca-mini-v2_7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System: You are an AI assistant that follows instruction extremely well. Help as much as you can.### User: Write a story about llamas\n### Response:"

If you're able to use full GPU offloading, you should use -t 1 to get best performance.

If not able to fully offload to GPU, you should use more cores. Change -t 10 to the number of physical CPU cores you have, or a lower number depending on what gives best performance.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins
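If you would rather drive llama.cpp from Python, the llama-cpp-python bindings expose the same options. The following is a minimal sketch under the assumption that the package is installed and the q5_0 file has been downloaded locally; parameter names can shift between versions, so treat it as illustrative rather than canonical.

# Minimal llama-cpp-python sketch mirroring the command line above (details may vary by version).
from llama_cpp import Llama

llm = Llama(
    model_path="orca-mini-v2_7b.ggmlv3.q5_0.bin",
    n_ctx=2048,        # same as -c 2048
    n_threads=10,      # same as -t 10; set to your physical core count
    n_gpu_layers=32,   # same as -ngl 32; set to 0 if you have no GPU acceleration
)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
)
out = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])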

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Luke from CarbonQuill, Aemon Algiz.

Patreon special mentions: Spiking Neurons AB, Kevin Schuppel, Cory Kujawski, senxiiz, Luke Pendergrass, John Villwock, Ghost , Alex , Sean Connelly, Space Cruiser, Eugene Pentland, Pyrater, Matthew Berman, Dave, Derek Yates, Jonathan Leane, Viktor Bowallius, Michael Levine, Joseph William Delisle, Fred von Graf, Asp the Wyvern, Nikolai Manek, Pierre Kircher, webtim, K, RoA, Karl Bernard, Artur Olbinski, Rainer Wilmers, Ai Maven, Nathan LeClaire, Ajan Kanaga, Stephen Murray, Edmond Seymore, zynix , Imad Khwaja, John Detwiler, Randy H, subjectnull, Alps Aficionado, Greatston Gnanesh, Trenton Dambrowitz, Junyu Yang, Raven Klaugh, biorpg, Deep Realms, vamX, Talal Aujan, Johann-Peter Hartmann, WelcomeToTheClub, Chris McCloskey, Luke, chris gileta, terasurfer , Iucharbius , Preetika Verma, Willem Michiel, Fen Risland, SuperWojo, Khalefa Al-Ahmad, Daniel P. Andersen, Gabriel Puliatti, Illia Dulskyi, Willian Hasse, Oscar Rangel, ya boyyy, Mano Prime, Lone Striker, Kalila.

Thank you to all my generous patrons and donaters!

Original model card: Pankaj Mathur's Orca Mini v2 7B

orca_mini_v2_7b

An uncensored LLaMA-7B model, built in collaboration with Eric Hartford, trained on explain-tuned datasets created using instructions and input from the WizardLM, Alpaca and Dolly-V2 datasets, applying the dataset construction approaches of the Orca Research Paper.

Please note that this model has better code generation capabilities compared to our original orca_mini_7b, which was trained on the base OpenLLaMA-7b model, had empty-space issues, and was found to be poor at code generation.

P.S. I am #opentowork. If you can help, please reach out to me at www.linkedin.com/in/pankajam

Evaluation

I evaluated orca_mini_v2_7b on a wide range of tasks using Language Model Evaluation Harness from EleutherAI.

Here are the zero-shot metric results.

| Task | num_fewshot | Version | Metric | Value | Stderr |
| ---- | ---- | ---- | ---- | ---- | ---- |
| arc_easy | 0 | 0 | acc | 0.7386 | 0.0090 |
| arc_easy | 0 | 0 | acc_norm | 0.7066 | 0.0093 |
| hellaswag | 0 | 0 | acc | 0.5591 | 0.0050 |
| hellaswag | 0 | 0 | acc_norm | 0.7394 | 0.0044 |
| truthfulqa_mc | 0 | 1 | mc1 | 0.2938 | 0.0159 |
| truthfulqa_mc | 0 | 1 | mc2 | 0.4399 | 0.0153 |
| mmlu avg | 0 | 1 | acc | 0.4108 | 0.0153 |
| mmlu avg | 0 | 1 | acc_norm | 0.4108 | 0.0153 |
| Total Zero Shot Average | 0 | - | - | 0.5373 | 0.011 |

Here are the results on the metrics used by the HuggingFaceH4 Open LLM Leaderboard.

Please note that num_fewshot varies for each task below, as used by the HuggingFaceH4 Open LLM Leaderboard.

| Task | num_fewshot | Version | Metric | Value | Stderr |
| ---- | ---- | ---- | ---- | ---- | ---- |
| arc_challenge | 25 | 0 | acc | 0.4846 | 0.0146 |
| arc_challenge | 25 | 0 | acc_norm | 0.5077 | 0.0146 |

Dataset

We used an uncensoring script on top of the previously built explain-tuned datasets (the WizardLM dataset ~70K, the Alpaca dataset ~52K and the Dolly-V2 dataset ~15K), created using approaches from the Orca Research Paper.

We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets.

This helps the student model (i.e. this model) learn the thought process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).

Please see the example usage below for how the system prompt is added before each instruction.

Training

The training configurations are provided in the table below.

Training ran on 8x A100 (80 GB) GPUs and took around 13 hours, at a cost of $195 using RunPods.

We used DeepSpeed with fully sharded data parallelism, also known as ZeRO stage 3, writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing OpenAlpaca repo.

Here are some of the parameters used during training; a sketch of how they might map to a DeepSpeed config follows the table:

| Parameter | Value |
| ---- | ---- |
| batch_size | 96 |
| train_micro_batch_size_per_gpu | 3 |
| gradient_accumulation_steps | 4 |
| Learning rate | 2e-5 |
| Max length | 1024 |
| Epochs | 3 |
| Optimizer | AdamW |
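To make the ZeRO stage 3 description concrete, the table above maps naturally onto a DeepSpeed config. The dict below is a hedged sketch, not the authors' actual configuration; anything not listed in the table (such as the fp16 setting) is an assumption.

# Hedged sketch of a DeepSpeed config matching the table above; NOT the authors' actual file.
# Consistency check: 96 = 3 (micro batch per GPU) x 4 (grad accumulation) x 8 (GPUs).
ds_config = {
    "train_batch_size": 96,
    "train_micro_batch_size_per_gpu": 3,
    "gradient_accumulation_steps": 4,
    "zero_optimization": {"stage": 3},  # ZeRO stage 3 = fully sharded data parallelism
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
    "fp16": {"enabled": True},          # assumption; precision is not stated in the table
}
# A dict like this would typically be passed to deepspeed.initialize(model=..., config=ds_config).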

Example Usage

Here is the prompt format for the Oobabooga text generation UI:

### System:
{system}

### User:
{instruction}

### Input:
{input}

### Response:

Here is a sample example:

### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me how to break into my own car

### Input:

### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:

1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.

Below is a code example showing how to use this model:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)


#generate text function
def generate_text(system, instruction, input=None):
    
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens, 
            max_length=length+instance['generate_len'], 
            use_cache=True, 
            do_sample=True, 
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k']
        )    
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))

NOTE: The real response is hidden here with ^^^^^^^^^^^^^.

[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:

1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.

Next Goals:

  1. Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
  2. Provide more options for Text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
  3. Provide a 4-bit GGML/GPTQ quantized model (maybe TheBloke can help here)

Limitations & Biases:

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Disclaimer:

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation:

If you found orca_mini_v2_7b useful in your research or applications, please kindly cite it using the following BibTeX:

@misc{orca_mini_v2_7b,
  author = {Pankaj Mathur},
  title = {orca_mini_v2_7b: An explain tuned LLaMA-7b model on uncensored wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_7b}},
}
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}
@misc{xu2023wizardlm,
      title={WizardLM: Empowering Large Language Models to Follow Complex Instructions}, 
      author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
      year={2023},
      eprint={2304.12244},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}