Upload README.md
README.md
CHANGED
@@ -59,6 +59,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available

+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-LoRA-Assemble-v2-GGUF)
 * [oh-yeontaek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v2)
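For reference, a minimal sketch of loading the AWQ repository added in this commit, using Hugging Face `transformers` (which can load AWQ-quantised checkpoints directly when the `autoawq` package is installed). The prompt text and generation settings below are illustrative assumptions, not taken from the README, and a 70B AWQ checkpoint still needs a correspondingly large GPU:

```python
# Minimal sketch: load the AWQ-quantised repo added in this commit.
# Assumes transformers >= 4.35 (native AWQ support), autoawq installed,
# and enough GPU memory for a 70B AWQ model. Prompt/settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-70B-LoRA-Assemble-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantisation config is read from the repo, so no extra AWQ flags are needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me about AI", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```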