---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Orca
- AWQ
inference: false
---
# orca_mini_v2_13b (4-bit 128g AWQ Quantized)
An **uncensored** LLaMA-13b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford) and trained on explain-tuned datasets. The datasets were created from the instructions and inputs of the WizardLM, Alpaca, and Dolly-V2 datasets, following the dataset-construction approach of the Orca research paper.
This is a 4-bit, 128-group-size AWQ-quantized version of that model. For more information about AWQ quantization, please click [here](https://github.com/mit-han-lab/llm-awq).
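For intuition, the sketch below shows what "4-bit, 128-group with zero-points" stores, using plain round-to-nearest on a random tensor. It is a minimal illustration only, not AWQ itself: AWQ additionally rescales salient weight channels based on activation statistics before quantizing, and the helper name `quantize_group` is ours, not part of any library.

```python
import torch

def quantize_group(w: torch.Tensor, n_bit: int = 4, group_size: int = 128):
    # Split the tensor into groups of `group_size` weights; each group gets
    # its own fp16 scale and integer zero-point (cf. q_config further below).
    w = w.reshape(-1, group_size)
    q_max = 2 ** n_bit - 1                            # 4-bit -> codes 0..15
    w_min = w.amin(dim=1, keepdim=True)
    w_max = w.amax(dim=1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-5) / q_max   # one scale per group
    zero = (-w_min / scale).round()                   # one zero-point per group
    q = (w / scale + zero).round().clamp(0, q_max)    # the stored 4-bit codes
    w_hat = (q - zero) * scale                        # dequantized approximation
    return q, scale, zero, w_hat

w = torch.randn(4096, 4096)
q, scale, zero, w_hat = quantize_group(w)
print(f"max abs error: {(w_hat - w.reshape(-1, 128)).abs().max():.4f}")
```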
## Model Date
July 8, 2023
## Model License
Please refer to the original Orca Mini v2 model license ([link](https://huggingface.co/psmathur/orca_mini_v2_13b)).
Please also refer to the AWQ quantization license ([link](https://github.com/mit-han-lab/llm-awq/blob/main/LICENSE)).
## CUDA Version
This model was successfully tested with CUDA driver v530.30.02 and runtime v11.7 on Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability `8.0` (Ampere) or higher.
For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image ships with CUDA runtime v12.1 but otherwise matches the configuration above; it has also been verified to work.
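A quick way to verify that your GPU meets the compute-capability requirement (a convenience snippet, not part of the original instructions):

```python
import torch

# AWQ's 4-bit kernels target compute capability 8.0+ (Ampere or newer)
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU: {torch.cuda.get_device_name(0)}, compute capability {major}.{minor}")
assert (major, minor) >= (8, 0), "AWQ requires compute capability 8.0 or higher"
```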
## How to Use
```bash
# Install the llm-awq fork at a pinned, known-good commit, then build the CUDA kernels
git clone https://github.com/abhinavkulkarni/llm-awq \
&& cd llm-awq \
&& git checkout ba01560f21516805fc5ceba5c2566dcbd1cf66d8 \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
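If the kernel build succeeded, the CUDA extension (named `awq_inference_engine` in the llm-awq tree at this commit, to the best of our knowledge) should import without error:

```python
# Sanity check that the AWQ CUDA kernels installed correctly; the extension
# module name awq_inference_engine comes from awq/kernels/setup.py.
import awq_inference_engine  # raises ImportError if the build failed
print("AWQ CUDA kernels are available")
```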
```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download
model_name = "abhinavkulkarni/psmathur-orca_mini_v2_13b-w4-g128-awq"
# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
# Model
w_bit = 4
q_config = {
    "zero_point": True,   # asymmetric quantization with a zero-point per group
    "q_group_size": 128,  # one scale and zero-point per 128 weights
}
load_quant = snapshot_download(model_name)
# Build the model skeleton without allocating weights, swap in 4-bit quantized
# layers, then load the quantized checkpoint and shard it across available GPUs
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16, trust_remote_code=True)
real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")
# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```
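The prompt above is deliberately bare. The original orca_mini_v2_13b card describes a fuller template with `### System:`, `### User:`, optional `### Input:`, and `### Response:` sections; a sketch of assembling it (the helper `build_prompt` and the system string here are illustrative, following that card):

```python
# Illustrative helper following the prompt template described in the
# original orca_mini_v2_13b model card; build_prompt is not part of any API.
def build_prompt(system: str, instruction: str, input_text: str = "") -> str:
    if input_text:
        return (f"### System:\n{system}\n\n### User:\n{instruction}\n\n"
                f"### Input:\n{input_text}\n\n### Response:\n")
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
prompt = build_prompt(system, "What is the difference between nuclear fusion and fission?")
```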
## Evaluation
This evaluation was done using [LM-Eval](https://github.com/EleutherAI/lm-evaluation-harness).
[orca_mini_v2_13b](https://huggingface.co/psmathur/orca_mini_v2_13b)
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|23.8997| | |
| | |byte_perplexity| 1.8104| | |
| | |bits_per_byte | 0.8563| | |
[orca_mini_v2_13b (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/psmathur-orca_mini_v2_13b-w4-g128-awq)
| Task |Version| Metric | Value | |Stderr|
|--------|------:|---------------|------:|---|------|
|wikitext| 1|word_perplexity|27.4695| | |
| | |byte_perplexity| 1.8581| | |
| | |bits_per_byte | 0.8938| | |
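Quantization costs roughly 3.6 points of word perplexity on wikitext (23.90 vs. 27.47). The three metrics are views of the same underlying loss; in particular, `bits_per_byte` is the base-2 log of `byte_perplexity`, which the reported numbers satisfy:

```python
import math

# bits_per_byte = log2(byte_perplexity); word_perplexity is the same loss
# renormalized per word instead of per byte.
assert abs(math.log2(1.8104) - 0.8563) < 1e-3  # FP16 model
assert abs(math.log2(1.8581) - 0.8938) < 1e-3  # 4-bit AWQ model
```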
## Acknowledgements
If you found `orca_mini_v2_13b` useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{orca_mini_v2_13b,
author = {Pankaj Mathur},
title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
The model was quantized with the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:
```
@article{lin2023awq,
title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
journal={arXiv},
year={2023}
}
```