TheBloke committed on
Commit f2bf68a
1 Parent(s): 16f191e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GPTQ)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GGML)
-* [Original unquantised fp16 model in HF format](https://huggingface.co/TheBloke/gorilla-7B-HF)
+* [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/gorilla-7B-fp16)
 
 ## How to easily download and use this model in text-generation-webui
 
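The net effect of the change: the fp16 link now points at the merged model repo, TheBloke/gorilla-7B-fp16, rather than the old gorilla-7B-HF repo. For anyone who wants to fetch that repo locally, here is a minimal sketch using huggingface_hub's `snapshot_download` (an assumption about tooling; the commit itself only edits the link, and the destination directory is illustrative):

```python
# Minimal sketch: download the merged fp16 repo that the updated link
# points to. Assumes `pip install huggingface_hub`; the local_dir value
# is a hypothetical choice, not taken from the commit.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/gorilla-7B-fp16",
    local_dir="models/gorilla-7B-fp16",
)
print(f"Model files downloaded to {local_path}")
```

The GPTQ and GGML repos listed above can be fetched the same way by swapping in their `repo_id`.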