---
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
base_model: Trendyol/Trendyol-LLM-7b-base-v1.0
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0/resolve/main/trendyol-llm-mistral.jpg"
alt="drawing" width="400"/>
# **Trendyol LLM v1.0 GGUF Versions**
Trendyol LLM v1.0 is a generative model based on the Mistral 7B model. This repository hosts the GGUF versions of the base model.

I used [llama.cpp](https://github.com/ggerganov/llama.cpp) to convert the base model to GGUF and to produce 4-bit and 16-bit quantized versions. ([colab notebook](https://colab.research.google.com/drive/1ZLIb7qeIJ-XlEVAvfFLBOxWcFvdvJVri?usp=sharing))
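For reference, the conversion roughly follows llama.cpp's standard workflow (a sketch only; script and binary names have changed across llama.cpp versions, and the paths and output file names below are placeholders):

```shell
# Convert the Hugging Face checkpoint to a 16-bit GGUF file.
python convert_hf_to_gguf.py <model-dir> --outfile trendyol-llm-7b-f16.gguf --outtype f16

# Produce a 4-bit quantized version from the 16-bit file.
./llama-quantize trendyol-llm-7b-f16.gguf trendyol-llm-7b-q4_k_m.gguf Q4_K_M
```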

## Model Details

**Model Developers** Trendyol

**Variations** base, [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and [dpo](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0).

**Input** The models accept text input only.

**Output** The models generate text output only.

**Model Architecture** Trendyol LLM v1.0 is an auto-regressive language model (based on Mistral 7B) that uses an optimized transformer architecture. The base version is fine-tuned on 10 billion tokens using LoRA with the following hyperparameters:

- **lr**=2e-4
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **modules_to_save**=embed_tokens,lm_head
- **lora_dropout**=0.05
- **bf16**=True
- **max_seq_length**=1024

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="LoRA diagram" width="600"/>
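For illustration, the settings above map onto the keyword arguments of peft's `LoraConfig` roughly as follows (a dependency-free sketch: only the argument names are shown, as a plain dict, rather than importing peft):

```python
# Keyword arguments matching the hyperparameters listed above, using
# the names peft's LoraConfig expects (shown as a plain dict so the
# sketch has no dependencies).
lora_kwargs = {
    "r": 64,              # lora_rank
    "lora_alpha": 128,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj", "k_proj", "o_proj",
                       "gate_proj", "down_proj", "up_proj"],
    "modules_to_save": ["embed_tokens", "lm_head"],
}
```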
## Usage

### Set up llama.cpp

Follow the [instructions in the llama.cpp repository](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage).

### Set up the Python bindings ([llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) (optional, for the Python API)

```bash
pip install llama-cpp-python
```
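If you want the bindings built with GPU support, llama-cpp-python can be reinstalled with CMake flags (an example sketch; the exact flag depends on your backend and llama-cpp-python version):

```shell
# Build with CUDA support (e.g. use -DGGML_METAL=on instead on Apple Silicon).
CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```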

#### Use with the Python API

```python
from llama_cpp import Llama

llm = Llama(
      model_path="<model-path>",
      # n_gpu_layers=-1, # Uncomment to use GPU acceleration
      # seed=1337, # Uncomment to set a specific seed
      # n_ctx=2048, # Uncomment to increase the context window
)

output = llm(
      "Q: Ders çalışmanın en iyi 5 yolu nedir? A: ", # Prompt
      max_tokens=128, # Generate up to 128 tokens; set to None to generate up to the end of the context window
      stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
      echo=True # Echo the prompt back in the output
) # Generate a completion; you can also call llm.create_completion
```
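The call returns an OpenAI-style completion dict; the sketch below shows where the generated text lives (the literal values are invented placeholders for illustration, not real model output):

```python
# Illustrative shape of the dict returned by llm(...) above; a real
# call fills these fields in (the values here are made up).
output = {
    "choices": [
        {"text": "Q: Ders çalışmanın en iyi 5 yolu nedir? A: Düzenli tekrar yapmak",
         "finish_reason": "stop"},
    ],
    "usage": {"prompt_tokens": 17, "completion_tokens": 42},
}

# The generated text (including the echoed prompt, since echo=True)
# lives under choices[0]["text"].
text = output["choices"][0]["text"]
print(text)
```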

### Use llama.cpp for inference

- Use the [simplechat](https://github.com/ggerganov/llama.cpp/tree/master/examples/server/public_simplechat#usage)
- Use the [HTTP web server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start)
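
Once the server is running, it can be queried from any HTTP client. A minimal stdlib sketch (the `/completion` endpoint and field names follow the llama.cpp server API; the local host and default port 8080 are assumptions about your setup):

```python
import json
from urllib import request

# Build a request for the llama.cpp server's /completion endpoint.
payload = {
    "prompt": "Q: Ders çalışmanın en iyi 5 yolu nedir? A: ",
    "n_predict": 128,        # the server's name for the max-tokens limit
    "stop": ["Q:", "\n"],
}
req = request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once a server is actually running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["content"])
```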


## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. Although such models are often used for a wide range of applications, this model has not undergone extensive real-world application testing, and its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.