TheBloke committed
Commit ad6e931 • 1 Parent(s): 790f988

Upload README.md

Files changed (1):
  1. README.md +68 -20

README.md CHANGED
@@ -58,7 +58,7 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is
 
  The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
 
- As of August 25th, here is a list of clients and libraries that are known to support GGUF:
+ Here is a list of clients and libraries that are known to support GGUF:
  * [llama.cpp](https://github.com/ggerganov/llama.cpp).
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
@@ -68,9 +68,7 @@ As of August 25th, here is a list of clients and libraries that are known to sup
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
 
- The clients and libraries below are expected to add GGUF support shortly:
  <!-- README_GGUF.md-about-gguf end -->
-
  <!-- repositories-available start -->
  ## Repositories available
 
@@ -99,9 +97,7 @@ Below is an instruction that describes a task. Write a response that appropriate
 
  These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9).
 
- As of August 24th 2023 they are now compatible with KoboldCpp, release 1.41 and later.
-
- They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.
+ They are now also compatible with many third-party UIs and libraries - please see the list at the top of the README.
 
  ## Explanation of quantisation methods
  <details>
@@ -127,27 +123,32 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [wizardcoder-python-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB | 8.16 GB | very small, high quality loss |
  | [wizardcoder-python-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB | 8.84 GB | very small, high quality loss |
  | [wizardcoder-python-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB | 9.43 GB | small, substantial quality loss |
+ | [wizardcoder-python-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB | 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
  | [wizardcoder-python-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB | 9.91 GB | small, greater quality loss |
  | [wizardcoder-python-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB | 10.37 GB | medium, balanced quality - recommended |
+ | [wizardcoder-python-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB | 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
  | [wizardcoder-python-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB | 11.47 GB | large, low quality loss - recommended |
  | [wizardcoder-python-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB | 11.73 GB | large, very low quality loss - recommended |
  | [wizardcoder-python-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB | 13.18 GB | very large, extremely low quality loss |
  | [wizardcoder-python-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF/blob/main/wizardcoder-python-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB | 16.33 GB | very large, extremely low quality loss - not recommended |
 
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+
+
+
  <!-- README_GGUF.md-provided-files end -->
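If you only want one of the files above rather than the whole repo, a minimal sketch using the `huggingface_hub` Python library (one option among many; install it first with `pip install huggingface_hub`):

```python
from huggingface_hub import hf_hub_download

# Fetch a single quantised file from this repo; swap `filename` for any
# other quant listed in the Provided Files table above.
model_path = hf_hub_download(
    repo_id="TheBloke/WizardCoder-Python-13B-V1.0-GGUF",
    filename="wizardcoder-python-13b-v1.0.Q4_K_M.gguf",
)
print(model_path)  # local cache path to pass to llama.cpp, ctransformers, etc.
```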
 
  <!-- README_GGUF.md-how-to-run start -->
- ## How to run in `llama.cpp`
+ ## Example `llama.cpp` command
 
  Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
 
- For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
+ For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
 
  ```
- ./main -t 10 -ngl 32 -m wizardcoder-python-13b-v1.0.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
+ ./main -t 10 -ngl 32 -m wizardcoder-python-13b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
  ```
- Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
 
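If you are scripting this command, building the prompt in Python avoids any shell escaping of the `\n` sequences; a minimal `subprocess` sketch, assuming `./main` and the model file sit in the working directory:

```python
import subprocess

# Build the WizardCoder prompt with real newlines, then invoke the llama.cpp
# `main` binary with the same flags as the example command above.
instruction = "Write a story about llamas"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
    f"\n\n### Instruction:\n{instruction}\n\n### Response:"
)

subprocess.run(
    [
        "./main",
        "-t", "10",                 # physical CPU cores
        "-ngl", "32",               # GPU-offloaded layers; omit the pair if CPU-only
        "-m", "wizardcoder-python-13b-v1.0.Q4_K_M.gguf",
        "--color",
        "-c", "4096",               # context length
        "--temp", "0.7",
        "--repeat_penalty", "1.1",
        "-n", "-1",                 # -1 = generate until EOS / context limit
        "-p", prompt,
    ],
    check=True,
)
```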
@@ -160,6 +161,44 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
  ## How to run in `text-generation-webui`
 
  Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
+
+ ### How to load this model from Python using ctransformers
+
+ #### First install the package
+
+ ```bash
+ # Base ctransformers with no GPU acceleration
+ pip install "ctransformers>=0.2.24"
+ # Or with CUDA GPU acceleration
+ pip install "ctransformers[cuda]>=0.2.24"
+ # Or with ROCm GPU acceleration
+ CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
+ # Or with Metal GPU acceleration for macOS systems
+ CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
+ ```
+
+ #### Simple example code to load one of these GGUF models
+
+ ```python
+ from ctransformers import AutoModelForCausalLM
+
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardCoder-Python-13B-V1.0-GGUF", model_file="wizardcoder-python-13b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
+
+ print(llm("AI is going to"))
+ ```
+
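### How to load this model from Python using llama-cpp-python

A minimal sketch of the llama-cpp-python route mentioned above, assuming version 0.1.79 or later (the first release with GGUF support, per the list at the top) and mirroring the parameters of the `llama.cpp` command:

```python
from llama_cpp import Llama

# Load the GGUF file; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="wizardcoder-python-13b-v1.0.Q4_K_M.gguf",
    n_ctx=4096,       # context length, as in the -c 4096 example above
    n_gpu_layers=32,  # as in the -ngl 32 example above
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:"
)
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```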
+ ## How to use with LangChain
+
+ Here are guides on using llama-cpp-python or ctransformers with LangChain (a short ctransformers sketch follows below):
+
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
+
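As a concrete starting point, a minimal sketch assuming LangChain's `CTransformers` wrapper from the second guide; the `config` values here are illustrative, not tuned recommendations:

```python
from langchain.llms import CTransformers

# Point LangChain's ctransformers wrapper at the same GGUF file as above.
llm = CTransformers(
    model="TheBloke/WizardCoder-Python-13B-V1.0-GGUF",
    model_file="wizardcoder-python-13b-v1.0.Q4_K_M.gguf",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)

print(llm("Write a Python function that reverses a string."))
```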
  <!-- README_GGUF.md-how-to-run end -->
 
  <!-- footer start -->
@@ -185,7 +224,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
  **Special thanks to**: Aemon Algiz.
 
- **Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
 
  Thank you to all my generous patrons and donaters!
@@ -214,11 +253,12 @@ And thank you again to a16z for their generous grant.
 
 
  | Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
- | ----- |------| ---- |------|-------| ----- | ----- |
+ | ----- | ------ | ---- | ------ | ------- | ----- | ----- |
  | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
  | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
  | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
- | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | [Demo](http://47.103.63.15:50086/) | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
+ | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
+ | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
  | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
 
 
@@ -226,7 +266,7 @@ And thank you again to a16z for their generous grant.
  - Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM, and achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
 
  <font size=4>
-
+
  | Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
  | ----- | ------ | ---- | ------ | ------- | ----- | ----- |
  | WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | [Demo](http://47.103.63.15:50083/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
@@ -235,13 +275,13 @@ And thank you again to a16z for their generous grant.
  </font>
 
 
- - [08/09/2023] We released **WizardLM-70B-V1.0** model. Here is [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-70B-V1.0).
+ - [08/09/2023] We released the **WizardLM-70B-V1.0** model. Here are the [full model weights](https://huggingface.co/WizardLM/WizardLM-70B-V1.0).
 
  <font size=4>
-
-
+
+
  | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
- | ----- |------| ---- |------|-------| ----- | ----- | ----- |
+ | ----- | ------ | ---- | ------ | ------- | ----- | ----- | ----- |
  | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 **Coming Soon**</sup> | <sup>**7.78**</sup> | <sup>**92.91%**</sup> | <sup>**77.6%**</sup> | <sup>**50.6**</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
  | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a></sup> | | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>55.3%</sup> | <sup>36.6</sup> | <sup><a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a></sup> |
  | <sup>WizardLM-13B-V1.1</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a></sup> | | <sup>6.76</sup> | <sup>86.32%</sup> | | <sup>25.0</sup> | <sup>Non-commercial</sup> |
@@ -250,6 +290,14 @@ And thank you again to a16z for their generous grant.
  | <sup>WizardLM-7B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a></sup> | | | | <sup>19.1</sup> | <sup>Non-commercial</sup> |
  </font>
 
+ ## Comparing WizardCoder-Python-34B-V1.0 with Other LLMs
+
+ 🔥 The following figure shows that our **WizardCoder-Python-34B-V1.0 attains the second position in this benchmark**, surpassing GPT-4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude 2 (73.2 vs. 71.2).
+
+ <p align="center" width="100%">
+ <img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/compare_sota.png" alt="WizardCoder" style="width: 96%; min-width: 300px; display: block; margin: auto;">
+ </p>
+
  ## Prompt Format
  ```
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
@@ -263,11 +311,11 @@ Note: This script supports `WizardLM/WizardCoder-Python-34B/13B/7B-V1.0`. If you
 
  ## Citation
 
- Please cite the repo if you use the data or code in this repo.
+ Please cite the repo if you use the data, method or code in this repo.
 
  ```
  @misc{luo2023wizardcoder,
        title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
        author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
        year={2023},
  }
 