---
base_model: Trendyol/Trendyol-LLM-7b-base-v0.1
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
## Trendyol LLM 7b base v0.1
- **Model creator:** [Trendyol](https://huggingface.co/Trendyol)
- **Original model:** [Trendyol-LLM-7b-base-v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1)

<!-- description start -->
## Description
This repo contains GGUF format model files for [Trendyol's Trendyol LLM 7b base v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v0.1).
<!-- description end -->

## Quantization methods
| Quantization method | Bits | Size    | Use case                                                               | Recommended |
|---------------------|------|---------|------------------------------------------------------------------------|-------------|
| Q2_K                | 2    | 2.59 GB | smallest, significant quality loss - not recommended for most purposes | ❌          |
| Q3_K_S              | 3    | 3.01 GB | very small, high quality loss                                          | ❌          |
| Q3_K_M              | 3    | 3.36 GB | very small, high quality loss                                          | ❌          |
| Q3_K_L              | 3    | 3.66 GB | small, substantial quality loss                                        | ❌          |
| Q4_0                | 4    | 3.9 GB  | legacy; small, very high quality loss - prefer using Q3_K_M            | ❌          |
| Q4_K_M              | 4    | 4.15 GB | medium, balanced quality - recommended                                 | ✅          |
| Q5_0                | 5    | 4.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M                 | ❌          |
| Q5_K_S              | 5    | 4.73 GB | large, low quality loss - recommended                                  | ✅          |
| Q5_K_M              | 5    | 4.86 GB | large, very low quality loss - recommended                             | ✅          |
| Q6_K                | 6    | 5.61 GB | very large, extremely low quality loss                                 | ❌          |
| Q8_0                | 8    | 13.7 GB | very large, extremely low quality loss - not recommended               | ❌          |
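As a rough sanity check on the sizes above, a quantized GGUF's footprint scales with the parameter count times the effective bits per weight. This is a minimal sketch, not an exact formula: the 6.74B parameter count (standard LLaMA-2-7B architecture) and the effective-bpw figures are assumptions, since k-quant formats store per-block scales and keep some tensors at higher precision, pushing the effective bpw slightly above the nominal bit count.

```python
# Rough GGUF size estimate: parameters x effective bits per weight / 8.
# 6.74e9 params and 4.85 effective bpw for Q4_K_M are illustrative
# assumptions, not values published in this repo.

def estimate_size_gb(n_params: float, effective_bpw: float) -> float:
    """Approximate quantized model file size in GB."""
    return n_params * effective_bpw / 8 / 1e9

if __name__ == "__main__":
    q4_k_m = estimate_size_gb(6.74e9, 4.85)
    print(f"Q4_K_M estimate: {q4_k_m:.2f} GB")  # lands near the 4.15 GB in the table
```

Running the estimate for other rows (e.g. ~2.9 effective bpw for Q2_K, ~6.6 for Q6_K) reproduces the table's sizes to within a few percent, which is a quick way to spot a mislabeled file before downloading it.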