Create README.md
README.md
ADDED
@@ -0,0 +1,9 @@
+---
+license: apache-2.0
+---
+
+The llama.cpp revision [5921b8f](https://github.com/ggerganov/llama.cpp/commit/5921b8f089d3b7bda86aac5a66825df6a6c10603) was used for the conversion.
+
+This model is a GGUF version of [tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0](https://huggingface.co/tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0), a Turkish instruction fine-tuned Llama-3-8B model.
+
+Currently, only the Q8_0 quantization is available.
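
As a usage sketch (not part of the original README): the Q8_0 file can be loaded with any GGUF-compatible runtime. The example below assumes the llama-cpp-python bindings and a hypothetical local filename for the quantized file; adjust both to your setup.

```python
# Minimal sketch: loading the Q8_0 GGUF with llama-cpp-python
# (assumed bindings; the README itself only documents the conversion).
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-TKK-Elite-V1.0.Q8_0.gguf",  # hypothetical local filename
    n_ctx=4096,  # context window; adjust to available memory
)

# Llama-3 instruct models expect a chat template; create_chat_completion applies it.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```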