!!! Archive of LLaMa-1-13B Model !!!

May 27, 2023 - Monero/Manticore-13b-Chat-Pyg-Guanaco

v000000

This model was converted to GGUF format from Monero/Manticore-13b-Chat-Pyg-Guanaco using llama.cpp. Refer to the original model card for more details on the model.

  • Quants in repo: static Q5_K_M, static Q6_K, static Q8_0
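For readers who want to run one of these quants locally, a minimal llama.cpp invocation might look like the sketch below. The GGUF filename is an assumption based on the listed quants; check the repo's file list for the exact name before downloading.

```shell
# Hypothetical example: fetch and run the Q5_K_M quant with llama.cpp's llama-cli.
# The *.gguf filename pattern is assumed; verify it against the repo's files.
huggingface-cli download v000000/Manticore-13b-Chat-Pyg-Guanaco-GGUFs \
  --include "*Q5_K_M*.gguf" --local-dir .

# Run a short completion with the downloaded quant.
./llama-cli -m ./*Q5_K_M*.gguf \
  -p "Hello, how are you?" -n 128
```

The larger Q6_K and Q8_0 quants trade more disk space and memory for lower quantization loss; the same commands apply with the pattern swapped accordingly.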

Manticore-13b-Chat-Pyg with TimDettmers' Guanaco-13B QLoRA applied.

Format: GGUF
Model size: 13B params
Architecture: llama


Model tree for v000000/Manticore-13b-Chat-Pyg-Guanaco-GGUFs
