maddes8cht committed
Commit: a1721be
Parent(s): e053bbc
"Update README.md"

README.md CHANGED
@@ -54,7 +54,7 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 There is a bunch of quantized files available. How to choose the best for you:
 
-#
+# Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
@@ -68,6 +68,7 @@ With a Q6_K you should find it really hard to find a quality difference to the o
 
 
 
+---
 # Original Model Card:
 # Open-Assistant Falcon 40B SFT MIX Model
 
@@ -161,6 +162,7 @@ sft9-stage2:
 ```
 
 ***End of original Model File***
+---
 
 
 ## Please consider to support my work
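The README text touched by this diff discusses choosing between the `legacy` quantization types and the modern K-quants. As a minimal sketch of how one of those quantized files might be used, the snippet below loads a K-quant GGUF file through the llama-cpp-python bindings; the bindings and the local file name are assumptions for illustration, not something stated in the commit.

```python
# Minimal sketch: loading one of the quantized files discussed in the README.
# Assumptions (not from the commit): the llama-cpp-python bindings are installed
# and a Q6_K GGUF file has been downloaded locally under a hypothetical name.
from llama_cpp import Llama

# Q6_K is one of the modern K-quants mentioned in the README; the legacy types
# (Q4_0, Q4_1, Q5_0, Q5_1, Q8) load the same way when K-quants are not suitable.
llm = Llama(model_path="./falcon-40b-sft-mix-q6_k.gguf")  # hypothetical file name

out = llm("What is the capital of France?", max_tokens=32)
print(out["choices"][0]["text"])
```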