Transformers · llamafile · English · stablelm
jartine committed on
Commit
73f503c
1 Parent(s): 048016b

Add README.md to repo

Files changed (1)
  1. README.md +45 -55
README.md CHANGED
@@ -28,40 +28,38 @@ quantized_by: TheBloke
  <!-- header start -->
  <!-- 200823 -->
  <div style="width: auto; margin-left: auto; margin-right: auto">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
- <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
  <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

- # Rocket 3B - GGUF
  - Model creator: [pansophic](https://huggingface.co/pansophic)
  - Original model: [Rocket 3B](https://huggingface.co/pansophic/rocket-3B)

  <!-- description start -->
  ## Description

- This repo contains GGUF format model files for [pansophic's Rocket 3B](https://huggingface.co/pansophic/rocket-3B).

  These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

- <!-- description end -->
- <!-- README_GGUF.md-about-gguf start -->
- ### About GGUF

- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

- Here is an incomplete list of clients and libraries that are known to support GGUF:

- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
  * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
@@ -71,12 +69,12 @@ Here is an incomplete list of clients and libraries that are known to support GG
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

- <!-- README_GGUF.md-about-gguf end -->
  <!-- repositories-available start -->
  ## Repositories available

- * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/rocket-3B-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/rocket-3B-GGUF)
  * [pansophic's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pansophic/rocket-3B)
  <!-- repositories-available end -->

@@ -95,10 +93,10 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- prompt-template end -->


- <!-- compatibility_gguf start -->
  ## Compatibility

- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

  They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

@@ -117,34 +115,34 @@ The new methods available are:

  Refer to the Provided Files table below to see what files use which methods, and how.
  </details>
- <!-- compatibility_gguf end -->

- <!-- README_GGUF.md-provided-files start -->
  ## Provided files

  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | [rocket-3b.Q2_K.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q2_K.gguf) | Q2_K | 2 | 1.20 GB| 3.70 GB | smallest, significant quality loss - not recommended for most purposes |
- | [rocket-3b.Q3_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
- | [rocket-3b.Q3_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_M.gguf) | Q3_K_M | 3 | 1.39 GB| 3.89 GB | very small, high quality loss |
- | [rocket-3b.Q3_K_L.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_L.gguf) | Q3_K_L | 3 | 1.51 GB| 4.01 GB | small, substantial quality loss |
- | [rocket-3b.Q4_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_0.gguf) | Q4_0 | 4 | 1.61 GB| 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [rocket-3b.Q4_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_K_S.gguf) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
- | [rocket-3b.Q4_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_K_M.gguf) | Q4_K_M | 4 | 1.71 GB| 4.21 GB | medium, balanced quality - recommended |
- | [rocket-3b.Q5_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_0.gguf) | Q5_0 | 5 | 1.94 GB| 4.44 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [rocket-3b.Q5_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_K_S.gguf) | Q5_K_S | 5 | 1.94 GB| 4.44 GB | large, low quality loss - recommended |
- | [rocket-3b.Q5_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_K_M.gguf) | Q5_K_M | 5 | 1.99 GB| 4.49 GB | large, very low quality loss - recommended |
- | [rocket-3b.Q6_K.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q6_K.gguf) | Q6_K | 6 | 2.30 GB| 4.80 GB | very large, extremely low quality loss |
- | [rocket-3b.Q8_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q8_0.gguf) | Q8_0 | 8 | 2.97 GB| 5.47 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

- <!-- README_GGUF.md-provided-files end -->

- <!-- README_GGUF.md-how-to-download start -->
- ## How to download GGUF files

  **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

@@ -156,7 +154,7 @@ The following clients/libraries will automatically download models for you, prov

  ### In `text-generation-webui`

- Under Download Model, you can enter the model repo: TheBloke/rocket-3B-GGUF and below it, a specific filename to download, such as: rocket-3b.Q4_K_M.gguf.

  Then click Download.

@@ -171,7 +169,7 @@ pip3 install huggingface-hub
  Then you can download any individual model file to the current directory, at high speed, with a command like this:

  ```shell
- huggingface-cli download TheBloke/rocket-3B-GGUF rocket-3b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  <details>
@@ -180,7 +178,7 @@ huggingface-cli download TheBloke/rocket-3B-GGUF rocket-3b.Q4_K_M.gguf --local-d
  You can also download multiple files at once with a pattern:

  ```shell
- huggingface-cli download TheBloke/rocket-3B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
  ```

  For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
@@ -194,25 +192,25 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

  ```shell
- HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/rocket-3B-GGUF rocket-3b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
  </details>
- <!-- README_GGUF.md-how-to-download end -->

- <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command

  Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

  ```shell
- ./main -ngl 32 -m rocket-3b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

- Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

@@ -224,7 +222,7 @@ Further instructions can be found in the text-generation-webui documentation, he

  ## How to run from Python code

- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

  ### How to load this model in Python code, using ctransformers

@@ -249,7 +247,7 @@ CT_METAL=1 pip install ctransformers --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/rocket-3B-GGUF", model_file="rocket-3b.Q4_K_M.gguf", model_type="stablelm", gpu_layers=50)

  print(llm("AI is going to"))
  ```
@@ -261,7 +259,7 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:
  * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
  * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

- <!-- README_GGUF.md-how-to-run end -->

  <!-- footer start -->
  <!-- 200823 -->
@@ -269,31 +267,23 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:

  For further support, and discussions on these models and AI in general, join us at:

- [TheBloke AI's Discord server](https://discord.gg/theblokeai)

  ## Thanks, and how to contribute

- Thanks to the [chirper.ai](https://chirper.ai) team!

- Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

- Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

- * Patreon: https://patreon.com/TheBlokeAI
- * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

- Thank you to all my generous patrons and donaters!

- And thank you again to a16z for their generous grant.

  <!-- footer end -->
 
 
  <!-- header start -->
  <!-- 200823 -->
  <div style="width: auto; margin-left: auto; margin-right: auto">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
  <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

+ # Rocket 3B - llamafile
  - Model creator: [pansophic](https://huggingface.co/pansophic)
  - Original model: [Rocket 3B](https://huggingface.co/pansophic/rocket-3B)

  <!-- description start -->
  ## Description

+ This repo contains llamafile format model files for [pansophic's Rocket 3B](https://huggingface.co/pansophic/rocket-3B).

  These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

+ WARNING: This README may contain inaccuracies. It was generated automatically by forking <a href=/TheBloke/rocket-3B-GGUF>TheBloke/rocket-3B-GGUF</a> and piping the README through sed. Errors should be reported to jartine; they do not reflect on TheBloke. You can also support his work on [Patreon](https://www.patreon.com/TheBlokeAI).
+ <!-- README_llamafile.md-about-llamafile start -->
+ ### About llamafile

+ llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
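
In practice, that means a llamafile can be executed directly, with no installation step. A minimal sketch, assuming the Q4_K_M file described below is a self-contained llamafile (on Windows you would first rename it to end in `.exe`):

```shell
# Mark the downloaded llamafile as executable (macOS/Linux/BSD).
chmod +x rocket-3b.Q4_K_M.llamafile

# Run it directly; it embeds llama.cpp, so the usual llama.cpp flags apply.
./rocket-3b.Q4_K_M.llamafile -p "AI is going to"
```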
 
+ Here is an incomplete list of clients and libraries that are known to support llamafile:

+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for llamafile. Offers a CLI and a server option.
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
  * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

+ <!-- README_llamafile.md-about-llamafile end -->
  <!-- repositories-available start -->
  ## Repositories available

+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/jartine/rocket-3B-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit llamafile models for CPU+GPU inference](https://huggingface.co/jartine/rocket-3B-llamafile)
  * [pansophic's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pansophic/rocket-3B)
  <!-- repositories-available end -->

  <!-- prompt-template end -->


+ <!-- compatibility_llamafile start -->
  ## Compatibility

+ These quantised files use the GGUFv2 format and are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

  They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

  Refer to the Provided Files table below to see what files use which methods, and how.
  </details>
+ <!-- compatibility_llamafile end -->

+ <!-- README_llamafile.md-provided-files start -->
  ## Provided files

  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [rocket-3b.Q2_K.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q2_K.llamafile) | Q2_K | 2 | 1.20 GB| 3.70 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [rocket-3b.Q3_K_S.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q3_K_S.llamafile) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
+ | [rocket-3b.Q3_K_M.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q3_K_M.llamafile) | Q3_K_M | 3 | 1.39 GB| 3.89 GB | very small, high quality loss |
+ | [rocket-3b.Q3_K_L.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q3_K_L.llamafile) | Q3_K_L | 3 | 1.51 GB| 4.01 GB | small, substantial quality loss |
+ | [rocket-3b.Q4_0.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q4_0.llamafile) | Q4_0 | 4 | 1.61 GB| 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [rocket-3b.Q4_K_S.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q4_K_S.llamafile) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
+ | [rocket-3b.Q4_K_M.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q4_K_M.llamafile) | Q4_K_M | 4 | 1.71 GB| 4.21 GB | medium, balanced quality - recommended |
+ | [rocket-3b.Q5_0.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q5_0.llamafile) | Q5_0 | 5 | 1.94 GB| 4.44 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [rocket-3b.Q5_K_S.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q5_K_S.llamafile) | Q5_K_S | 5 | 1.94 GB| 4.44 GB | large, low quality loss - recommended |
+ | [rocket-3b.Q5_K_M.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q5_K_M.llamafile) | Q5_K_M | 5 | 1.99 GB| 4.49 GB | large, very low quality loss - recommended |
+ | [rocket-3b.Q6_K.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q6_K.llamafile) | Q6_K | 6 | 2.30 GB| 4.80 GB | very large, extremely low quality loss |
+ | [rocket-3b.Q8_0.llamafile](https://huggingface.co/jartine/rocket-3B-llamafile/blob/main/rocket-3b.Q8_0.llamafile) | Q8_0 | 8 | 2.97 GB| 5.47 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

+ <!-- README_llamafile.md-provided-files end -->

+ <!-- README_llamafile.md-how-to-download start -->
+ ## How to download llamafile files

  **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

  ### In `text-generation-webui`

+ Under Download Model, you can enter the model repo: jartine/rocket-3B-llamafile and below it, a specific filename to download, such as: rocket-3b.Q4_K_M.llamafile.

  Then click Download.

  Then you can download any individual model file to the current directory, at high speed, with a command like this:

  ```shell
+ huggingface-cli download jartine/rocket-3B-llamafile rocket-3b.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
  ```

  <details>
  You can also download multiple files at once with a pattern:

  ```shell
+ huggingface-cli download jartine/rocket-3B-llamafile --local-dir . --local-dir-use-symlinks False --include='*Q4_K*llamafile'
  ```

  For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
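
If you prefer to script the download, here is a minimal sketch using the `huggingface_hub` Python library, with the same repo and filename as the CLI example above:

```python
# Download one file via the huggingface_hub Python API,
# equivalent to the huggingface-cli example above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="jartine/rocket-3B-llamafile",
    filename="rocket-3b.Q4_K_M.llamafile",
    local_dir=".",
)
print(path)  # where the file landed
```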
 
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

  ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download jartine/rocket-3B-llamafile rocket-3b.Q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
  ```

  Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
  </details>
+ <!-- README_llamafile.md-how-to-download end -->

+ <!-- README_llamafile.md-how-to-run start -->
  ## Example `llama.cpp` command

  Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

  ```shell
+ ./main -ngl 32 -m rocket-3b.Q4_K_M.llamafile --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

+ Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the llamafile and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
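
For example, here is the same command with the template placeholders filled in; the system message and question are purely illustrative:

```shell
./main -ngl 32 -m rocket-3b.Q4_K_M.llamafile --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is Rocket 3B good at?<|im_end|>\n<|im_start|>assistant"
```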

  ## How to run from Python code

+ You can use llamafile models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries; a llama-cpp-python sketch follows, and the ctransformers route is covered in the next section.
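
Below is a minimal llama-cpp-python sketch. It assumes the Q4_K_M file is in the current directory and that your llama-cpp-python build is recent enough to load these quantised weights; the prompt uses the ChatML template shown earlier:

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU.
# Set it to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./rocket-3b.Q4_K_M.llamafile",
    n_ctx=2048,        # sequence length
    n_gpu_layers=32,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is Rocket 3B good at?<|im_end|>\n"
    "<|im_start|>assistant"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```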
 
  ### How to load this model in Python code, using ctransformers

  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
+ llm = AutoModelForCausalLM.from_pretrained("jartine/rocket-3B-llamafile", model_file="rocket-3b.Q4_K_M.llamafile", model_type="stablelm", gpu_layers=50)

  print(llm("AI is going to"))
  ```
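
Rocket 3B is a chat-tuned model, so the ChatML prompt template shown earlier usually works better than raw completion. A short sketch reusing the `llm` object from the block above (parameter values are illustrative):

```python
# ChatML-formatted generation with ctransformers; stop on the end-of-turn tag.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is Rocket 3B good at?<|im_end|>\n"
    "<|im_start|>assistant"
)
print(llm(prompt, max_new_tokens=256, stop=["<|im_end|>"]))
```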
 
  * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
  * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
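
As a rough sketch of the LangChain route, assuming the same local Q4_K_M file (LangChain's import paths have moved between releases, so check the guides above if this import fails):

```python
from langchain.llms import LlamaCpp

# Wrap the local file as a LangChain LLM; n_gpu_layers is optional.
llm = LlamaCpp(
    model_path="./rocket-3b.Q4_K_M.llamafile",
    n_ctx=2048,
    n_gpu_layers=32,
)
print(llm("AI is going to"))
```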
 
+ <!-- README_llamafile.md-how-to-run end -->

  <!-- footer start -->
  <!-- 200823 -->

  For further support, and discussions on these models and AI in general, join us at:

+ [jartine's Discord server](https://discord.gg/FwAVVu7eJ4)

  ## Thanks, and how to contribute

  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

+ And thank you again to mozilla for their generous grant.

  <!-- footer end -->