pipeline_tag: text-generation
tags:
- llama
- 13b
- chat
- GGUF
This model was converted to GGUF from `NousResearch/Llama-2-13b-chat-hf` and quantized to `Q4_K_M` using the `llama.cpp` library.
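
Below is a minimal sketch of loading the quantized file with the `llama-cpp-python` bindings. The GGUF filename and the generation parameters are assumptions; adjust them to match the file shipped in this repository.

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# The model filename below is an assumption; replace it with the GGUF file in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-13b-chat.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,                                 # Llama-2 context window
    chat_format="llama-2",                      # apply the Llama-2 chat prompt template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```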