TheBloke committed
Commit
93c24cc
1 Parent(s): 1279793

Update README.md

Files changed (1)
  1. README.md +12 -103
README.md CHANGED
@@ -44,26 +44,19 @@ quantized_by: TheBloke
 
 This repo contains GGUF format model files for [Mistral AI's Mixtral 8X7B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
 
- <!-- description end -->
- <!-- README_GGUF.md-about-gguf start -->
- ### About GGUF
-
- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-
- Here is an incomplete list of clients and libraries that are known to support GGUF:
-
- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
- * [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: at the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
-
- <!-- README_GGUF.md-about-gguf end -->
+ ## EXPERIMENTAL - REQUIRES LLAMA.CPP PR
+
+ These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
+
+ THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
+
+ To test these GGUFs, please build llama.cpp from the above PR.
+
+ I have tested CUDA acceleration and it works great. I have not yet tested other forms of GPU acceleration.
+
+ <!-- description end -->
+
+ <!-- description end -->
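For anyone wanting to try these files, a minimal build sketch (an illustration, assuming a Linux-like system with `git` and `make`; the local branch name `mixtral-pr` is arbitrary):

```shell
# Clone llama.cpp and check out PR #4406
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git fetch origin pull/4406/head:mixtral-pr
git checkout mixtral-pr

# CPU-only build
make

# Or, for the CUDA acceleration mentioned above
make clean && make LLAMA_CUBLAS=1
```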
 <!-- repositories-available start -->
 ## Repositories available
 
@@ -78,7 +71,6 @@ Here is an incomplete list of clients and libraries that are known to support GG
 
 ```
 <s>[INST] {prompt} [/INST]
-
 ```
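For example, with the user message "Write a story about llamas.", the exact text sent to the model is:

```
<s>[INST] Write a story about llamas. [/INST]
```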
 
 <!-- prompt-template end -->
@@ -87,9 +79,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- compatibility_gguf start -->
 ## Compatibility
 
- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
-
- They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
+ Compatible only with the llama.cpp PR mentioned above.
 
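Once llama.cpp has been built from that PR, CLI inference works as usual, for example (a sketch - the filename, `-ngl` and `-c` values are illustrative; tune them to your hardware):

```shell
./main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -ngl 35 -c 2048 --temp 0.7 -p "<s>[INST] Write a story about llamas. [/INST]"
```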
 ## Explanation of quantisation methods
 
@@ -133,17 +123,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
 
- The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
-
- * LM Studio
- * LoLLMS Web UI
- * Faraday.dev
-
- ### In `text-generation-webui`
-
- Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf.
-
- Then click Download.
 
 ### On the command line, including multiple files at once
 
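The detailed instructions follow in the README; for quick reference, a typical single-file download with the `huggingface-hub` CLI looks like this (assuming `huggingface_hub` >= 0.17, which provides `huggingface-cli download`):

```shell
# Install the Hugging Face Hub CLI, then fetch one quant file into the current directory
pip3 install huggingface-hub

huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```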
@@ -205,82 +184,12 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
- Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
+ Not currently supported
 
 ## How to run from Python code
 
- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
-
- ### How to load this model in Python code, using llama-cpp-python
-
- For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
-
- #### First install the package
-
- Run one of the following commands, according to your system:
-
- ```shell
- # Base llama-cpp-python with no GPU acceleration
- pip install llama-cpp-python
- # With NVidia CUDA acceleration
- CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
- # Or with OpenBLAS acceleration
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
- # Or with CLBlast acceleration
- CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
- # Or with AMD ROCm GPU acceleration (Linux only)
- CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
- # Or with Metal GPU acceleration for macOS systems only
- CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
- # On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
- $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
- pip install llama-cpp-python
- ```
-
- #### Simple llama-cpp-python example code
-
- ```python
- from llama_cpp import Llama
-
- # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = Llama(
-     model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # Download the model file first
-     n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
-     n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
-     n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
- )
-
- # Simple inference example
- output = llm(
-     "<s>[INST] {prompt} [/INST]",  # Prompt
-     max_tokens=512,  # Generate up to 512 tokens
-     stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
-     echo=True  # Whether to echo the prompt
- )
-
- # Chat Completion API
-
- llm = Llama(model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
- llm.create_chat_completion(
-     messages = [
-         {"role": "system", "content": "You are a story writing assistant."},
-         {
-             "role": "user",
-             "content": "Write a story about llamas."
-         }
-     ]
- )
- ```
-
- ## How to use with LangChain
-
- Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
- * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
- * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
+ Not currently supported
 
- <!-- README_GGUF.md-how-to-run end -->
 
 <!-- footer start -->
 <!-- 200823 -->
 