mav23 commited on
Commit
b0b1e70
1 Parent(s): e2f749b

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +398 -0
  3. bielik-11b-v2.0-instruct.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+bielik-11b-v2.0-instruct.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,398 @@
---
license: apache-2.0
base_model: speakleash/Bielik-11B-v2
language:
- pl
library_name: transformers
tags:
- finetuned
inference:
  parameters:
    temperature: 0.2
widget:
- messages:
  - role: user
    content: Co przedstawia polskie godło?
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2.0-Instruct

Bielik-11B-v2.0-Instruct is a generative text model with 11 billion parameters.
It is an instruction-tuned version of [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2).
The model is the product of a unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH.
It was developed and trained on Polish text corpora that were cherry-picked and processed by the SpeakLeash team, using large-scale Polish computing infrastructure
within the PLGrid environment, specifically at the ACK Cyfronet AGH HPC center.
The creation and training of Bielik-11B-v2.0-Instruct were supported by computational grant number PLG/2024/016951, carried out on the Athena and Helios supercomputers,
enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning.
As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.

🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/

<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.

## Model

The [SpeakLeash](https://speakleash.org/) team is working on its own set of Polish instructions, which is continuously being expanded and refined by annotators. A portion of these instructions, manually verified and corrected, was used for training. Moreover, because high-quality Polish instructions are in short supply, synthetic instructions were generated with [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and also used in training. The training dataset comprised over 16 million instructions, consisting of more than 8 billion tokens. Since the instructions varied in quality, training on them naively degraded the model's performance. To counteract this while still making use of the aforementioned datasets, several improvements were introduced:
* Weighted token-level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked prompt tokens

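The core idea behind a weighted token-level loss with masked prompt tokens can be sketched as a per-token cross-entropy scaled by per-token weights, where prompt positions get weight 0 and response tokens carry positive weights (e.g. reflecting instruction quality). This is only an illustrative sketch of the concept, not the actual ALLaMo implementation:

```python
import numpy as np

def weighted_token_loss(logits, targets, weights):
    """Per-token cross-entropy scaled by per-token weights.

    Prompt tokens get weight 0 (masked out); response tokens carry
    positive weights. Illustrative sketch only.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]    # per-token negative log-likelihood
    weights = np.asarray(weights, dtype=float)
    return float((weights * nll).sum() / max(weights.sum(), 1e-8))
```

With uniform weights this reduces to the usual completion-only loss; down-weighting tokens from lower-quality instructions softens their gradient contribution instead of discarding them.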

Bielik-11B-v2.0-Instruct has been trained with an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo), implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
* **Model ref:** speakleash:16d24fc7821149765826d22f335eee5f

### Quantized models:
We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-11B-v2.0-Instruct model in separate repositories:
- [GGUF - Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GGUF)
- [GPTQ - 4bit](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GPTQ)
- [FP8](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-FP8) (vLLM, SGLang - Ada Lovelace, Hopper optimized)
- [GGUF - experimental - IQ imatrix IQ1_M, IQ2_XXS, IQ3_XXS, IQ4_XS and calibrated Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GGUF-IQ-Imatrix)

Please note that quantized models may offer lower quality of generated answers compared to the full-sized variants.

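This commit uploads the Q4_0 GGUF file, which can be run locally with llama.cpp. A minimal sketch, assuming a recent llama.cpp build with the `llama-cli` binary on your PATH and the GGUF file downloaded to the current directory; adjust paths and sampling flags to taste:

```shell
# Run the Q4_0 quant with llama.cpp; the model card suggests temperature 0.2.
llama-cli -m bielik-11b-v2.0-instruct.Q4_0.gguf \
  --temp 0.2 -n 512 \
  -p "Jakie mamy pory roku w Polsce?"
```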

### Chat template

Bielik-11B-v2.0-Instruct uses [ChatML](https://github.com/cognitivecomputations/OpenChatML) as the prompt format.

E.g.
```
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
```

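For clients that do not go through the tokenizer's chat template, the format above can be reproduced with a few lines of plain Python. The `to_chatml` helper below is a hypothetical illustration matching the example string, not part of any library:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts in the ChatML-style format shown above."""
    prompt = "<s>"
    for msg in messages:
        prompt += f"<|im_start|> {msg['role']}\n{msg['content']}<|im_end|> \n"
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        prompt += "<|im_start|> assistant\n"
    return prompt

print(to_chatml([{"role": "user", "content": "Jakie mamy pory roku?"}]))
```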

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model_name = "speakleash/Bielik-11B-v2.0-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
    {"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
    {"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
    {"role": "user", "content": "Która jest najcieplejsza?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = input_ids.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

The fully formatted conversation produced by `apply_chat_template` for the example above:

```
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
```


## Evaluation

Bielik-11B-v2.0-Instruct has been evaluated on several benchmarks to assess its performance across various tasks and languages. These benchmarks include:

1. Open PL LLM Leaderboard
2. Open LLM Leaderboard
3. Polish MT-Bench
4. Polish EQ-Bench (Emotional Intelligence Benchmark)
5. MixEval

The following sections provide detailed results for each of these benchmarks, demonstrating the model's capabilities in both Polish and English language tasks.

### Open PL LLM Leaderboard

Models have been evaluated on the [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) in a 5-shot setting. The benchmark evaluates models on NLP tasks such as sentiment analysis, categorization, and text classification, but does not test conversational skills. The Average column is the mean score across all tasks, normalized by baseline scores.

| Model | Parameters (B) | Average |
|---------------------------------|------------|---------|
| Meta-Llama-3.1-405B-Instruct-FP8,API | 405 | 69.44 |
| Mistral-Large-Instruct-2407 | 123 | 69.11 |
| Qwen2-72B-Instruct | 72 | 65.87 |
| Bielik-11B-v2.2-Instruct | 11 | 65.57 |
| Meta-Llama-3.1-70B-Instruct | 70 | 65.49 |
| Bielik-11B-v2.1-Instruct | 11 | 65.45 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 65.23 |
| **Bielik-11B-v2.0-Instruct** | **11** | **64.98** |
| Meta-Llama-3-70B-Instruct | 70 | 64.45 |
| Athene-70B | 70 | 63.65 |
| WizardLM-2-8x22B | 141 | 62.35 |
| Qwen1.5-72B-Chat | 72 | 58.67 |
| Qwen2-57B-A14B-Instruct | 57 | 56.89 |
| glm-4-9b-chat | 9 | 56.61 |
| aya-23-35B | 35 | 56.37 |
| Phi-3.5-MoE-instruct | 41.9 | 56.34 |
| openchat-3.5-0106-gemma | 7 | 55.69 |
| Mistral-Nemo-Instruct-2407 | 12 | 55.27 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.24 |
| Mixtral-8x7B-Instruct-v0.1 | 46.7 | 55.07 |
| Bielik-7B-Instruct-v0.1 | 7 | 44.70 |
| trurl-2-13b-academic | 13 | 36.28 |
| trurl-2-7b | 7 | 26.93 |

The results from the Open PL LLM Leaderboard demonstrate the exceptional performance of Bielik-11B-v2.0-Instruct:

1. Superior performance in its class: Bielik-11B-v2.0-Instruct outperforms all other models with fewer than 70B parameters. This is a significant achievement, showcasing its efficiency and effectiveness despite having fewer parameters than many competitors.

2. Competitive with larger models: with a score of 64.98, Bielik-11B-v2.0-Instruct performs on par with models in the 70B parameter range, demonstrating its advanced architecture and training methodology.

3. Substantial improvement over its predecessor: the model shows a marked improvement over Bielik-7B-Instruct-v0.1, which scored 44.70. This leap in performance highlights the successful enhancements and optimizations implemented in this newer version.

4. Leading position for Polish language models: among models specifically tailored for the Polish language, Bielik-11B-v2.0-Instruct stands out as a leader; no other Polish-specific model matches its performance, making it a crucial resource for Polish NLP tasks.

These results underscore Bielik-11B-v2.0-Instruct's position as a state-of-the-art model for Polish language processing, offering high performance with relatively modest computational requirements.

#### Open PL LLM Leaderboard - Generative Tasks Performance

This section presents a focused comparison of generative Polish language task performance between Bielik models and GPT-3.5. The evaluation is limited to generative tasks because of the constraints of assessing OpenAI models; the comprehensive nature and associated costs of the benchmark explain the limited number of models evaluated.

| Model | Parameters (B) | Average g |
|-------------------------------|----------------|---------------|
| Bielik-11B-v2.1-Instruct | 11 | 66.58 |
| Bielik-11B-v2.2-Instruct | 11 | 66.11 |
| **Bielik-11B-v2.0-Instruct** | 11 | **65.58** |
| gpt-3.5-turbo-instruct | Unknown | 55.65 |

The performance variation among Bielik versions is minimal, indicating consistent quality across iterations. Bielik-11B-v2.1-Instruct demonstrates an advantage of almost 11 points (roughly 20% relative) over gpt-3.5-turbo-instruct.


### Open LLM Leaderboard

The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.

| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|--------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| Bielik-11B-v2.2-Instruct | 69.86 | 59.90 | 80.16 | 58.34 | 64.34 | 75.30 | 81.12 |
| Bielik-11B-v2.1-Instruct | 69.82 | 59.56 | 80.20 | 59.35 | 64.18 | 75.06 | 80.59 |
| **Bielik-11B-v2.0-Instruct** | **68.04** | 58.62 | 78.65 | 54.65 | 63.71 | 76.32 | 76.27 |
| Bielik-11B-v2 | 65.87 | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-Instruct-v0.2 | 65.71 | 63.14 | 84.88 | 68.26 | 60.78 | 77.19 | 40.03 |
| Bielik-7B-Instruct-v0.1 | 51.26 | 47.53 | 68.91 | 49.47 | 46.18 | 65.51 | 29.95 |

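The AVG column in the table above is the arithmetic mean of the six task scores, which can be checked directly:

```python
# AVG on the Open LLM Leaderboard rows above is the mean of the six task scores.
scores = [58.62, 78.65, 54.65, 63.71, 76.32, 76.27]  # Bielik-11B-v2.0-Instruct row
avg = sum(scores) / len(scores)
print(round(avg, 2))  # 68.04, matching the AVG column
```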

Bielik-11B-v2.0-Instruct shows impressive performance on English language tasks:

1. Improvement over its base model (2-point increase).
2. Substantial 16-point improvement over Bielik-7B-Instruct-v0.1.

These results demonstrate Bielik-11B-v2.0-Instruct's versatility in both Polish and English, highlighting the effectiveness of its instruction tuning process.

### Polish MT-Bench
The Bielik-11B-v2.0-Instruct (16-bit) model was also evaluated with the MT-Bench benchmark, in two variants: the original English version (without modifications) and a Polish version created by SpeakLeash, in which the tasks and evaluation are in Polish and the task content was adapted to the context of the Polish language.

#### MT-Bench English
| Model | Score |
|-----------------|----------|
| Bielik-11B-v2.1 | 8.537500 |
| Bielik-11B-v2.2 | 8.390625 |
| **Bielik-11B-v2.0** | **8.159375** |

#### MT-Bench Polish
| Model | Parameters (B) | Score |
|-------------------------------------|----------------|----------|
| Qwen2-72B-Instruct | 72 | 8.775000 |
| Mistral-Large-Instruct-2407 | 123 | 8.662500 |
| gemma-2-27b-it | 27 | 8.618750 |
| Mixtral-8x22b | 141 | 8.231250 |
| Meta-Llama-3.1-405B-Instruct | 405 | 8.168750 |
| Meta-Llama-3.1-70B-Instruct | 70 | 8.150000 |
| Bielik-11B-v2.2-Instruct | 11 | 8.115625 |
| Bielik-11B-v2.1-Instruct | 11 | 7.996875 |
| gpt-3.5-turbo | Unknown | 7.868750 |
| Mixtral-8x7b | 46.7 | 7.637500 |
| **Bielik-11B-v2.0-Instruct** | **11** | **7.562500** |
| Mistral-Nemo-Instruct-2407 | 12 | 7.368750 |
| openchat-3.5-0106-gemma | 7 | 6.812500 |
| Mistral-7B-Instruct-v0.2 | 7 | 6.556250 |
| Meta-Llama-3.1-8B-Instruct | 8 | 6.556250 |
| Bielik-7B-Instruct-v0.1 | 7 | 6.081250 |
| Mistral-7B-Instruct-v0.3 | 7 | 5.818750 |
| Polka-Mistral-7B-SFT | 7 | 4.518750 |
| trurl-2-7b | 7 | 2.762500 |

For more information - answers to test tasks and scores in each category - visit the [MT-Bench PL](https://huggingface.co/spaces/speakleash/mt-bench-pl) website.

### Polish EQ-Bench

[Polish Emotional Intelligence Benchmark for LLMs](https://huggingface.co/spaces/speakleash/polish_eq-bench)

| Model | Parameters (B) | Score |
|-------------------------------|--------|-------|
| Mistral-Large-Instruct-2407 | 123 | 78.07 |
| Meta-Llama-3.1-405B-Instruct-FP8 | 405 | 77.23 |
| gpt-4o-2024-08-06 | ? | 75.15 |
| gpt-4-turbo-2024-04-09 | ? | 74.59 |
| Meta-Llama-3.1-70B-Instruct | 70 | 72.53 |
| Qwen2-72B-Instruct | 72 | 71.23 |
| Meta-Llama-3-70B-Instruct | 70 | 71.21 |
| gpt-4o-mini-2024-07-18 | ? | 71.15 |
| WizardLM-2-8x22B | 141 | 69.56 |
| Bielik-11B-v2.2-Instruct | 11 | 69.05 |
| **Bielik-11B-v2.0-Instruct** | **11** | **68.24** |
| Qwen1.5-72B-Chat | 72 | 68.03 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 67.63 |
| Bielik-11B-v2.1-Instruct | 11 | 60.07 |
| Qwen1.5-32B-Chat | 32 | 59.63 |
| openchat-3.5-0106-gemma | 7 | 59.58 |
| aya-23-35B | 35 | 58.41 |
| gpt-3.5-turbo | ? | 57.70 |
| Qwen2-57B-A14B-Instruct | 57 | 57.64 |
| Mixtral-8x7B-Instruct-v0.1 | 47 | 57.61 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.21 |
| Mistral-7B-Instruct-v0.2 | 7 | 47.02 |

### MixEval

MixEval is a ground-truth-based English benchmark designed to evaluate Large Language Models (LLMs) efficiently and effectively. Key features of MixEval include:

1. Derived from off-the-shelf benchmark mixtures
2. Highly capable model ranking with a 0.96 correlation to Chatbot Arena
3. Local and quick execution, requiring only 6% of the time and cost of running MMLU

This benchmark provides a robust and time-efficient method for assessing LLM performance, making it a valuable tool for ongoing model evaluation and comparison.

| Model | MixEval | MixEval-Hard |
|-------------------------------|---------|--------------|
| Bielik-11B-v2.1-Instruct | 74.55 | 45.00 |
| Bielik-11B-v2.2-Instruct | 72.35 | 39.65 |
| **Bielik-11B-v2.0-Instruct** | **72.10** | **40.20** |
| Mistral-7B-Instruct-v0.2 | 70.00 | 36.20 |

The results show that Bielik-11B-v2.0-Instruct performs well on the MixEval benchmark, achieving a score of 72.10 on the standard MixEval and 40.20 on MixEval-Hard. Notably, Bielik-11B-v2.0-Instruct significantly outperforms Mistral-7B-Instruct-v0.2 on both metrics, demonstrating its improved capabilities despite being based on a similar architecture.

### Chat Arena PL

Chat Arena PL is a human-evaluated benchmark that provides a direct comparison of model performance through head-to-head battles. Unlike the automated benchmarks mentioned above, this evaluation relies on human judgment to assess the quality and effectiveness of model responses. The results offer valuable insights into how different models perform in real-world, conversational scenarios as perceived by human evaluators.

Results accessed on 2024-08-26.

| # | Model | Battles | Won | Lost | Draws | Win % | ELO |
|---|-------|---------|-----|------|-------|-------|-----|
| 1 | Bielik-11B-v2.2-Instruct | 92 | 72 | 14 | 6 | 83.72% | 1234 |
| 2 | Bielik-11B-v2.1-Instruct | 240 | 171 | 50 | 19 | 77.38% | 1174 |
| 3 | gpt-4o-mini | 639 | 402 | 117 | 120 | 77.46% | 1141 |
| 4 | Mistral Large 2 (2024-07) | 324 | 188 | 69 | 67 | 73.15% | 1125 |
| 5 | Llama-3.1-405B | 548 | 297 | 144 | 107 | 67.35% | 1090 |
| 6 | **Bielik-11B-v2.0-Instruct** | 1289 | 695 | 352 | 242 | 66.38% | 1059 |
| 7 | Llama-3.1-70B | 498 | 221 | 187 | 90 | 54.17% | 1033 |
| 8 | Bielik-1-7B | 2041 | 1029 | 638 | 374 | 61.73% | 1020 |
| 9 | Mixtral-8x22B-v0.1 | 432 | 166 | 167 | 99 | 49.85% | 1018 |
| 10 | Qwen2-72B | 451 | 179 | 177 | 95 | 50.28% | 1011 |
| 11 | gpt-3.5-turbo | 2186 | 1007 | 731 | 448 | 57.94% | 1008 |
| 12 | Llama-3.1-8B | 440 | 155 | 227 | 58 | 40.58% | 975 |
| 13 | Mixtral-8x7B-v0.1 | 1997 | 794 | 804 | 399 | 49.69% | 973 |
| 14 | Llama-3-70b | 2008 | 733 | 909 | 366 | 44.64% | 956 |
| 15 | Mistral Nemo (2024-07) | 301 | 84 | 164 | 53 | 33.87% | 954 |
| 16 | Llama-3-8b | 1911 | 473 | 1091 | 347 | 30.24% | 909 |
| 17 | gemma-7b-it | 1928 | 418 | 1221 | 289 | 25.5% | 888 |

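The Win % column in the table above appears to count only decided games, i.e. won / (won + lost), with draws excluded. Taking the Bielik-11B-v2.0-Instruct row as a check:

```python
# Win % in the Chat Arena PL table excludes draws: won / (won + lost).
won, lost, draws = 695, 352, 242      # Bielik-11B-v2.0-Instruct row
assert won + lost + draws == 1289     # total battles checks out
win_pct = 100 * won / (won + lost)
print(round(win_pct, 2))  # 66.38, matching the Win % column
```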

## Limitations and Biases

Bielik-11B-v2.0-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.

Bielik-11B-v2.0-Instruct can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v2.0-Instruct was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.

## Citation
Please cite this model using the following format:

```
@misc{Bielik11Bv20i,
    title     = {Bielik-11B-v2.0-Instruct model card},
    author    = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}},
    year      = {2024},
    url       = {https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct},
    note      = {Accessed: 2024-09-10},
    urldate   = {2024-09-10}
}
@unpublished{Bielik11Bv20a,
    author    = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof},
    title     = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation},
    year      = {2024},
}
@misc{ociepa2024bielik7bv01polish,
    title         = {Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation},
    author        = {Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas},
    year          = {2024},
    eprint        = {2410.18565},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    url           = {https://arxiv.org/abs/2410.18565},
}
```

## Responsible for training the model

* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - conceptualizing and coordinating DPO training, data preparation
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data preparation and ensuring data quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks

The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center ACK Cyfronet AGH. Individuals who contributed to the creation of the model:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kacper Milan](https://www.linkedin.com/in/kacper-milan/),
[Jan Sowa](https://www.linkedin.com/in/janpiotrsowa/),
[Len Krawczyk](https://www.linkedin.com/in/magdalena-krawczyk-7810942ab/),
[Marta Seidler](https://www.linkedin.com/in/marta-seidler-751102259/),
[Agnieszka Ratajska](https://www.linkedin.com/in/agnieszka-ratajska/),
[Krzysztof Koziarek](https://www.linkedin.com/in/krzysztofkoziarek/),
[Szymon Pepliński](http://linkedin.com/in/szymonpeplinski/),
[Zuzanna Dabić](https://www.linkedin.com/in/zuzanna-dabic/),
[Filip Bogacz](https://linkedin.com/in/Fibogacci),
[Agnieszka Kosiak](https://www.linkedin.com/in/agn-kosiak),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/).

Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/),
[Marek Magryś](https://www.linkedin.com/in/magrys/),
[Mieszko Cholewa](https://www.linkedin.com/in/mieszko-cholewa-613726301/).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
bielik-11b-v2.0-instruct.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:937dce15a33468da742e6b8bf8570add12cb39709a03fbbace52605a8fdbc345
size 6318546944