Initial GGUF model commit
README.md CHANGED
@@ -5,7 +5,7 @@ inference: false
 license: llama2
 model_creator: Jon Durbin
 model_link: https://huggingface.co/jondurbin/airoboros-l2-70b-2.1
-model_name: Airoboros L2 70B
+model_name: Airoboros L2 70B 2.1
 model_type: llama
 quantized_by: TheBloke
 ---
@@ -27,13 +27,13 @@ quantized_by: TheBloke
 <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
-# Airoboros L2 70B - GGUF
+# Airoboros L2 70B 2.1 - GGUF
 - Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
-- Original model: [Airoboros L2 70B](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1)
+- Original model: [Airoboros L2 70B 2.1](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1)
 
 ## Description
 
-This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 70B](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1).
+This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 70B 2.1](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1).
 
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF
@@ -109,53 +109,15 @@ Refer to the Provided Files table below to see what files use which methods, and
 | [airoboros-l2-70b-2.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
 | [airoboros-l2-70b-2.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
 | [airoboros-l2-70b-2.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
-| [airoboros-l2-70b-2.1.Q8_0.gguf-split-b](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q8_0.gguf-split-b) | Q8_0 | 8 | 36.
+| [airoboros-l2-70b-2.1.Q8_0.gguf-split-b](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q8_0.gguf-split-b) | Q8_0 | 8 | 36.59 GB| 39.09 GB | very large, extremely low quality loss - not recommended |
 | [airoboros-l2-70b-2.1.Q6_K.gguf-split-a](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q6_K.gguf-split-a) | Q6_K | 6 | 36.70 GB| 39.20 GB | very large, extremely low quality loss |
 | [airoboros-l2-70b-2.1.Q8_0.gguf-split-a](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q8_0.gguf-split-a) | Q8_0 | 8 | 36.70 GB| 39.20 GB | very large, extremely low quality loss - not recommended |
 | [airoboros-l2-70b-2.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
 | [airoboros-l2-70b-2.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
 | [airoboros-l2-70b-2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
 | [airoboros-l2-70b-2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-GGUF/blob/main/airoboros-l2-70b-2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
-| airoboros-l2-70b-2.1.Q6_K.gguf | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
-| airoboros-l2-70b-2.1.Q8_0.gguf | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
-
-### Q6_K and Q8_0 files are split and require joining
-
-**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
-
-<details>
-<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
-
-### q6_K
-Please download:
-* `airoboros-l2-70b-2.1.Q6_K.gguf-split-a`
-* `airoboros-l2-70b-2.1.Q6_K.gguf-split-b`
-
-### q8_0
-Please download:
-* `airoboros-l2-70b-2.1.Q8_0.gguf-split-a`
-* `airoboros-l2-70b-2.1.Q8_0.gguf-split-b`
-
-To join the files, do the following:
-
-Linux and macOS:
-```
-cat airoboros-l2-70b-2.1.Q6_K.gguf-split-* > airoboros-l2-70b-2.1.Q6_K.gguf && rm airoboros-l2-70b-2.1.Q6_K.gguf-split-*
-cat airoboros-l2-70b-2.1.Q8_0.gguf-split-* > airoboros-l2-70b-2.1.Q8_0.gguf && rm airoboros-l2-70b-2.1.Q8_0.gguf-split-*
-```
-Windows command line:
-```
-COPY /B airoboros-l2-70b-2.1.Q6_K.gguf-split-a + airoboros-l2-70b-2.1.Q6_K.gguf-split-b airoboros-l2-70b-2.1.Q6_K.gguf
-del airoboros-l2-70b-2.1.Q6_K.gguf-split-a airoboros-l2-70b-2.1.Q6_K.gguf-split-b
-
-COPY /B airoboros-l2-70b-2.1.Q8_0.gguf-split-a + airoboros-l2-70b-2.1.Q8_0.gguf-split-b airoboros-l2-70b-2.1.Q8_0.gguf
-del airoboros-l2-70b-2.1.Q8_0.gguf-split-a airoboros-l2-70b-2.1.Q8_0.gguf-split-b
-```
-
-</details>
-
 <!-- README_GGUF.md-provided-files end -->
 
 <!-- README_GGUF.md-how-to-run start -->
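The RAM note in the table above mentions offloading layers to the GPU but does not show an invocation. As a minimal, hedged sketch only (the binary path, layer count, context size, and prompt below are illustrative assumptions, not values taken from this repo), a llama.cpp run with partial GPU offload might look like:

```
# Offload 40 of the model's layers to VRAM; the remaining layers stay in system RAM.
# -m selects the GGUF file, -c sets the context length, -p supplies the prompt.
# Lower -ngl if the model does not fit in your GPU's memory.
./main -m airoboros-l2-70b-2.1.Q4_K_M.gguf -ngl 40 -c 4096 -p "Write a haiku about llamas."
```

Raising or lowering `-ngl` trades VRAM for system RAM roughly layer by layer, which is what the note above is describing.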
@@ -216,7 +178,7 @@ And thank you again to a16z for their generous grant.
 <!-- footer end -->
 
 <!-- original-model-card start -->
-# Original model card: Jon Durbin's Airoboros L2 70B
+# Original model card: Jon Durbin's Airoboros L2 70B 2.1
 
 
 ### Overview
@@ -243,7 +205,7 @@ This is an instruction fine-tuned llama-2 model, using synthetic data generated
 - laws vary widely based on time and location
 - language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
 - these models just produce text, what you do with that text is your responsibility
-- many people and industries deal with "sensitive" content; imagine if a court stenographer's
+- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
 
 ### Prompt format