Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ license: apache-2.0

# ✨ Falcon-40B-Instruct 8Bit

-**INFO: This model is the Falcon-40B-Instruct model quantized using bitsandbytes. This saves you around 40 GB of downloads if you plan to quantize the model anyway.**
+**INFO: This model is the Falcon-40B-Instruct model quantized using bitsandbytes. This saves you around 40 GB of downloads if you plan to quantize the model anyway. bitsandbytes quantization is GPU-only, so this model will only run on a GPU that can hold the full model.**

**Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
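For context, a minimal loading sketch matching the note above: bitsandbytes int8 weights live on the GPU, so the whole quantized model must fit in GPU memory. The repo id below is a placeholder (the diff does not name this repository), and it assumes `transformers`, `accelerate`, and `bitsandbytes` are installed and that Falcon's custom modeling code still requires `trust_remote_code=True`; adjust to match the actual checkpoint.

```python
# Sketch: load a bitsandbytes 8-bit Falcon-40B-Instruct checkpoint on GPU.
# Assumes: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/falcon-40b-instruct-8bit"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # int8 weights must be placed on GPU(s)
    load_in_8bit=True,       # keep weights in 8-bit instead of re-quantizing from fp16
    trust_remote_code=True,  # Falcon ships custom modeling code
)

prompt = "Write a short poem about the desert."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```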