TheBloke committed
Commit a7c3812
1 Parent(s): d1d8308

Initial GGML model commit

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -57,6 +57,7 @@ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NV
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/fiction.live-Kimiko-V2-70B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/fiction.live-Kimiko-V2-70B-GGUF)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/fiction.live-Kimiko-V2-70B-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/fiction.live-Kimiko-V2-70B-fp16)
 * [nRuaif's original LoRA adapter, which can be merged on to the base model.](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B)
 
 ## Prompt template: Vicuna