---
license: llama3
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---

# VeriUS LLM 8b v0.2

VeriUS LLM is a generative model fine-tuned from Llama-3-8B using Unsloth.


## Model Details
Base Model: unsloth/llama-3-8b-bnb-4bit

Training Dataset: A combined dataset of Alpaca, Dolly, and Bactrian-X, translated to Turkish.

Training Method: Fine-tuned with Unsloth using QLoRA and ORPO.

### Training Arguments
- PER_DEVICE_BATCH_SIZE: 2
- GRADIENT_ACCUMULATION_STEPS: 4
- WARMUP_RATIO: 0.03
- NUM_EPOCHS: 2
- LR: 0.000008
- OPTIM: "adamw_8bit"
- WEIGHT_DECAY: 0.01
- LR_SCHEDULER_TYPE: "linear"
- BETA: 0.1

### PEFT Arguments
- RANK: 128
- TARGET_MODULES:
  - "q_proj"
  - "k_proj"
  - "v_proj"
  - "o_proj"
  - "gate_proj"
  - "up_proj"
  - "down_proj"
- LORA_ALPHA: 256
- LORA_DROPOUT: 0
- BIAS: "none"
- GRADIENT_CHECKPOINT: 'unsloth'
- USE_RSLORA: false
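
For orientation, here is a minimal sketch of how the hyperparameters above could map onto an Unsloth + TRL ORPO run. The preference-data placeholder, `output_dir`, and exact trainer wiring are illustrative assumptions, not the original training script.

```python
from datasets import Dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model (QLoRA) and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=1024,
    dtype=None,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,  # RANK
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=256,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    use_rslora=False,
)

# Placeholder preference data: ORPO expects prompt/chosen/rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["Türkiye'nin başkenti neresidir?"],
    "chosen": ["Türkiye'nin başkenti Ankara'dır."],
    "rejected": ["Bilmiyorum."],
})

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_ratio=0.03,
        num_train_epochs=2,
        learning_rate=8e-6,  # LR: 0.000008
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        beta=0.1,
        output_dir="outputs",  # assumed
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```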

## Usage
This model was trained with Unsloth, which also provides fast inference. For Unsloth installation, please refer to: https://github.com/unslothai/unsloth

This model can also be loaded with `AutoModelForCausalLM`; see the sketch after the Unsloth example below.

How to load with Unsloth:
```python
from unsloth import FastLanguageModel

max_seq_len = 1024
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="VeriUs/VeriUS-LLM-8b-v0.2",
    max_seq_length=max_seq_len,
    dtype=None,  # auto-detect: float16 on older GPUs, bfloat16 on Ampere+
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

# Turkish Alpaca-style template: "Below is an instruction that describes a task,
# paired with an input that provides further context. Write a response that
# appropriately completes the request." with Instruction/Input/Response fields.
prompt_template = """Aşağıda, görevini açıklayan bir talimat ve daha fazla bağlam sağlayan bir girdi verilmiştir. İsteği uygun bir şekilde tamamlayan bir yanıt yaz.

### Talimat:
{}

### Girdi:
{}

### Yanıt:
"""


def generate_output(instruction, user_input):
    inputs = tokenizer(
        [prompt_template.format(instruction, user_input)],
        return_tensors="pt",
    ).to("cuda")

    outputs = model.generate(**inputs, max_length=max_seq_len, do_sample=True)

    # Strip the prompt tokens; comment this out if you want to see them.
    prompt_len = inputs["input_ids"].shape[1]
    outputs = outputs[:, prompt_len:]

    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# "Which is Turkey's most populous city?"
response = generate_output("Türkiye'nin en kalabalık şehri hangisidir?", "")
print(response)
```
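
For loading without Unsloth, here is a minimal sketch using `transformers` and `peft`, assuming the repository hosts a PEFT (LoRA) adapter on top of the 4-bit base model listed above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then attach the fine-tuned adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "VeriUs/VeriUS-LLM-8b-v0.2")
tokenizer = AutoTokenizer.from_pretrained("VeriUs/VeriUS-LLM-8b-v0.2")
```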

## Bias, Risks, and Limitations

### Limitations and Known Biases

Primary Function and Application: VeriUS LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it has not undergone extensive real-world testing, and its effectiveness and reliability across diverse scenarios remain largely unverified.

Language Comprehension and Generation: The base model was trained primarily on standard English. Even though it was fine-tuned with a Turkish dataset, its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.

Generation of False Information: Users should be aware that VeriUS LLM may produce inaccurate or misleading information. Outputs should be treated as starting points or suggestions rather than definitive answers.