---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for cenkersisman/TurkishWikipedia-LLM-7b-base

**Library name:** peft

**Base model:** mistralai/Mistral-7B-v0.1

**Model Description:**

This model was fine-tuned on Turkish Wikipedia text using the PEFT library with a LoRA (low-rank adaptation) configuration.

**Developed by:** [More Information Needed]

**Funded by [optional]:** [More Information Needed]

**Shared by [optional]:** [More Information Needed]

**Model type:** Decoder-only causal language model (LoRA fine-tune)

**Language(s) (NLP):** Turkish

**License:** [More Information Needed]

**Finetuned from model:** mistralai/Mistral-7B-v0.1

**Model Sources:**

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [To be implemented]

## Uses

**Direct Use**

This model can be used for various NLP tasks, including:

- Text generation
- Machine translation
- Question answering
- Text summarization

**Downstream Use**

[More Information Needed]

## Bias, Risks, and Limitations

- **Bias:** The model may inherit biases from its training data (Turkish Wikipedia), including cultural biases and biases in how information is covered and presented on Wikipedia.
- **Risks:** The model may generate text that is offensive, misleading, or factually incorrect. It is important to be aware of these risks and to use the model responsibly.
- **Limitations:** The model may not perform well on all tasks, and it may not be able to generate text that is creative or original.

## Recommendations

- Users (both direct and downstream) should be aware of the risks, biases and limitations of the model.
- It is important to evaluate the outputs of the model carefully before using them in any application.

## How to Get Started with the Model

The following code snippet demonstrates how to load the fine-tuned model and generate text:

```python
from transformers import AutoModelForCausalLM, LlamaTokenizer, pipeline

# Load the model and tokenizer
folder = "cenkersisman/TurkishWikipedia-LLM-7b-base"
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(folder).to(device)
tokenizer = LlamaTokenizer.from_pretrained(folder)

# Create a pipeline for text generation
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map=device,
    max_new_tokens=128,
    return_full_text=True,
    repetition_penalty=1.1,
)

# Generate text
def generate_output(user_query):
    outputs = pipe(user_query, do_sample=True, temperature=0.1, top_k=10, top_p=0.9)
    return outputs[0]["generated_text"]

# Example usage with a Turkish completion prompt
# (roughly: "in terms of population, Brazil is the world's largest ...")
user_query = "brezilya'nın nüfus olarak dünyanın en büyük"
output = generate_output(user_query)
print(output)
```

This code loads the fine-tuned model from the "cenkersisman/TurkishWikipedia-LLM-7b-base" repository, creates a text-generation pipeline, and generates a completion for the provided user query.
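The snippet above assumes the repository contains full (merged) model weights. If the repository instead hosted only the LoRA adapter produced by PEFT, the adapter could be loaded on top of the base model; the following is a minimal sketch under that assumption (the adapter location is illustrative):

```python
from transformers import AutoModelForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "cenkersisman/TurkishWikipedia-LLM-7b-base"  # assumed to contain PEFT adapter files

# Load the frozen base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = LlamaTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Optionally fold the low-rank updates into the base weights for plain inference
model = model.merge_and_unload()
```

After `merge_and_unload()`, the merged model can be used with the same `pipeline` call shown above.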

## Training Details

**Training Data**

- 9 million sentences from Turkish Wikipedia.

**Training Procedure**

- **Preprocessing:** The data was preprocessed by tokenizing the text and adding special tokens.
  
- **Training Hyperparameters**
  
  - Training regime: fine-tuning with a LoRA configuration (an illustrative configuration sketch follows this list)
  - Speeds, Sizes, Times: [More Information Needed]
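The exact preprocessing and LoRA hyperparameters have not been published. The sketch below only illustrates how a causal-LM LoRA fine-tune of this kind is typically configured with the PEFT library; every value (rank, alpha, dropout, target modules, example sentence) is a placeholder assumption, not the configuration actually used for this model.

```python
from transformers import AutoModelForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"

# Preprocessing: tokenize the raw sentences and add special tokens
tokenizer = LlamaTokenizer.from_pretrained(base_id)
encoded = tokenizer("Ankara, Türkiye'nin başkentidir.", add_special_tokens=True)

# Illustrative LoRA configuration -- the real hyperparameters are not documented
lora_config = LoraConfig(
    r=16,                                  # placeholder rank
    lora_alpha=32,                         # placeholder scaling factor
    lora_dropout=0.05,                     # placeholder dropout
    target_modules=["q_proj", "v_proj"],   # placeholder attention projections
    task_type="CAUSAL_LM",
)

# Wrap the base model so that only the LoRA adapter weights are trainable
model = AutoModelForCausalLM.from_pretrained(base_id)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 7B parameters train
```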

**Evaluation**

- Testing Data, Factors & Metrics: [More Information Needed]
  
- **Results:** [More Information Needed]
  

## Summary

- This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 trained on Turkish Wikipedia text.
- The model can be used for various NLP tasks, including text generation.
- It is important to be aware of the risks, biases, and limitations of the model before using it.

## Environmental Impact

- The environmental impact of training this model can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
  
- Hardware Type: [More Information Needed]
  
- Hours used: [More Information Needed]
  
- Cloud Provider: [More Information Needed]
  
- Compute Region: [More Information Needed]
  
- Carbon Emitted: [More Information Needed]
  

## Technical Specifications

- **Model Architecture and Objective:**
  - The model architecture is based on mistralai/Mistral-7B-v0.1.
  - The objective of the fine-tuning process was to improve the model's performance on Turkish-language text generation.