Update README.md
README.md CHANGED:

```diff
@@ -5,7 +5,7 @@ inference: false
 
 # Alpaca LoRA 65B GPTQ 4bit
 
-This is a [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) [changsung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b)
+This is a [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) 4bit quantisation of [changsung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b)
 
 ## These files need a lot of VRAM!
 
```
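The README heading warns that these files need a lot of VRAM. As a rough sanity check (an assumption-laden sketch, not a measured figure: it counts 65B parameters at 4 bits each, weights only, ignoring activations, KV cache, and quantisation metadata), the minimum footprint can be estimated:

```python
# Back-of-the-envelope VRAM estimate for a 65B-parameter model
# quantised to 4 bits per weight. Weights only; real usage is higher.
params = 65e9          # assumed parameter count
bits_per_weight = 4    # GPTQ 4-bit quantisation
weight_bytes = params * bits_per_weight / 8
weight_gib = weight_bytes / 2**30
print(f"~{weight_gib:.1f} GiB for the quantised weights alone")
```

This lands around 30 GiB for the weights before any runtime overhead, which is why a single consumer 24 GB GPU is not enough.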