---
license: apache-2.0
---
This model is a GGUF version of [tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0](https://huggingface.co/tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0), a Turkish instruction fine-tuned Llama-3-8B model. The conversion was performed with llama.cpp revision [5921b8f](https://github.com/ggerganov/llama.cpp/commit/5921b8f089d3b7bda86aac5a66825df6a6c10603).

Currently, only the Q8_0 quantization is available.
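A minimal usage sketch with llama.cpp. The GGUF filename and repository ID below are assumptions, check the repository's file list for the actual names; recent llama.cpp builds ship the CLI binary as `llama-cli` (older builds name it `main`):

```shell
# Download the Q8_0 GGUF from the Hub
# (repo ID and filename are placeholders; verify in "Files and versions")
huggingface-cli download <this-repo-id> llama-3-8b-tkk-elite-v1.0.Q8_0.gguf --local-dir .

# Run a short generation with llama.cpp
./llama-cli -m llama-3-8b-tkk-elite-v1.0.Q8_0.gguf \
  -p "Merhaba, kendini tanıtır mısın?" \
  -n 256 --temp 0.7
```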