SamoXXX committed on
Commit 1966c8d
1 parent: 242eed7

Update README.md

Files changed (1):
  1. README.md +17 -185

README.md CHANGED
@@ -8,211 +8,43 @@ tags:
  - gguf
  inference: false
  ---
-
  <p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
  </p>
 
  # Bielik-7B-Instruct-v0.1-GGUF
 
- The Bielik-7B-Instruct-v0.1 is an instruct fine-tuned version of [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1). The model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. Developed and trained on Polish text corpora cherry-picked and processed by the SpeakLeash team, this endeavor leveraged Polish large-scale computing infrastructure within the PLGrid environment, specifically the HPC center ACK Cyfronet AGH. The creation and training of Bielik-7B-Instruct-v0.1 were propelled by computational grant number PLG/2024/016951, conducted on the Helios supercomputer, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
-
- ## Model
-
- The [SpeakLeash](https://speakleash.org/) team is working on its own set of Polish instructions, which is continuously being expanded and refined by annotators. A portion of these instructions, manually verified and corrected, has been used for training. Moreover, due to the limited availability of high-quality Polish instructions, publicly accessible collections of English instructions were used - [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) - which accounted for half of the instructions used in training. The instructions varied in quality, leading to a deterioration in the model's performance. To counteract this while still making use of the aforementioned datasets, several improvements were introduced:
- * Weighted token-level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
- * Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
- * Masked user instructions
-
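The third improvement above, masking user instructions, means that only assistant-reply tokens contribute to the training loss. The snippet below is an illustrative sketch, not the ALLaMo implementation; the function name and toy numbers are invented, and the weighted token-level loss can be seen as generalizing the 0/1 mask to arbitrary per-token weights.

```python
def masked_cross_entropy(token_log_probs, loss_mask):
    """Average negative log-likelihood over unmasked tokens only.

    token_log_probs: per-token log-probabilities assigned by the model
    loss_mask: 1 for tokens that count toward the loss (assistant replies),
               0 for masked tokens (user instructions)
    """
    total = sum(-lp * m for lp, m in zip(token_log_probs, loss_mask))
    kept = sum(loss_mask)
    return total / kept if kept else 0.0

# Toy sequence of 5 tokens; the first 3 belong to the user turn and are masked.
log_probs = [-0.1, -0.2, -0.3, -0.5, -0.7]
mask = [0, 0, 0, 1, 1]
loss = masked_cross_entropy(log_probs, mask)  # averages only the last two tokens
```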
- Bielik-7B-Instruct-v0.1 has been trained using an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
-
- This repo contains GGUF format model files for [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1). GGUF is a new format introduced by the llama.cpp team on August 21st, 2023.
 
 
 
  ### Model description:
 
  * **Developed by:** [SpeakLeash](https://speakleash.org/)
  * **Language:** Polish
  * **Model type:** causal decoder-only
  * **Finetuned from:** [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1)
  * **License:** CC BY NC 4.0 (non-commercial use)
  * **Model ref:** speakleash:e38140bea0d48f1218540800bbc67e89
 
- ## Training
-
- * Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
- * Visualizations: [W&B](https://wandb.ai)
-
- <p align="center">
- <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/sft_train_loss.png">
- </p>
- <p align="center">
- <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/sft_train_ppl.png">
- </p>
- <p align="center">
- <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/sft_train_lr.png">
- </p>
-
- ### Training hyperparameters:
-
- | **Hyperparameter**               | **Value**        |
- |----------------------------------|------------------|
- | Micro Batch Size                 | 1                |
- | Batch Size                       | up to 4194304    |
- | Learning Rate (cosine, adaptive) | 7e-6 -> 6e-7     |
- | Warmup Iterations                | 50               |
- | All Iterations                   | 55440            |
- | Optimizer                        | AdamW            |
- | β1, β2                           | 0.9, 0.95        |
- | Adam_eps                         | 1e-8             |
- | Weight Decay                     | 0.05             |
- | Grad Clip                        | 1.0              |
- | Precision                        | bfloat16 (mixed) |
-
-
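The cosine learning-rate decay in the table (7e-6 -> 6e-7 over 55440 iterations, 50 warmup iterations) can be sketched as follows. This is a minimal illustration assuming a linear warmup; the adaptive component tied to batch size is not reproduced here.

```python
import math

MAX_LR, MIN_LR = 7e-6, 6e-7
WARMUP_ITERS, TOTAL_ITERS = 50, 55440

def learning_rate(it):
    """LR at iteration `it`: linear warmup, then cosine decay to MIN_LR."""
    if it < WARMUP_ITERS:
        return MAX_LR * (it + 1) / WARMUP_ITERS
    progress = (it - WARMUP_ITERS) / (TOTAL_ITERS - WARMUP_ITERS)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))  # decays 1 -> 0
    return MIN_LR + (MAX_LR - MIN_LR) * cosine
```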
- ### Instruction format
-
- In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should start with a beginning-of-sentence token. The generated completion will be terminated by the end-of-sentence token.
-
- E.g.
- ```
- prompt = "<s>[INST] Jakie mamy pory roku? [/INST]"
- completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.</s>"
- ```
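A multi-turn prompt in this format can be assembled by hand. The helper below is an illustrative sketch (the function name is invented, and exact whitespace and token placement should be verified against the tokenizer's chat template):

```python
def build_prompt(turns):
    """Assemble a prompt in the [INST] format described above.

    turns: list of (user_message, assistant_reply) pairs; the final pair may
    use assistant_reply=None for the turn awaiting a completion.
    """
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f"{assistant}</s>"
    return prompt

prompt = build_prompt([
    ("Jakie mamy pory roku?", "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."),
    ("Która jest najcieplejsza?", None),
])
```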
-
- This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
-
- ```python
- import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- device = "cuda"  # the device to load the model onto
-
- model_name = "speakleash/Bielik-7B-Instruct-v0.1"
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
-
- messages = [
-     {"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
-     {"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
-     {"role": "user", "content": "Która jest najcieplejsza?"}
- ]
-
- input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
-
- model_inputs = input_ids.to(device)
- model.to(device)
-
- generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
- decoded = tokenizer.batch_decode(generated_ids)
- print(decoded[0])
- ```
-
- ## Evaluation
-
- Models have been evaluated on the [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) (5-shot). The benchmark evaluates models on NLP tasks like sentiment analysis, categorization, and text classification, but does not test chatting skills. The following metrics are reported:
- - Average - average score across all tasks, normalized by baseline scores
- - Reranking - reranking task, commonly used in RAG
- - Reader (Generator) - open-book question answering task, commonly used in RAG
- - Perplexity (lower is better) - a bonus metric; it does not correlate with the other scores and should not be used for model comparison
-
-
- | | Average | RAG Reranking | RAG Reader | Perplexity |
- |--------------------------------------------------------------------------------------|----------:|--------------:|-----------:|-----------:|
- | **7B parameters models:** | | | | |
- | Baseline (majority class) | 0.00 | 53.36 | - | - |
- | Voicelab/trurl-2-7b | 18.85 | 60.67 | 77.19 | 1098.88 |
- | meta-llama/Llama-2-7b-chat-hf | 21.04 | 54.65 | 72.93 | 4018.74 |
- | mistralai/Mistral-7B-Instruct-v0.1 | 26.42 | 56.35 | 73.68 | 6909.94 |
- | szymonrucinski/Curie-7B-v1 | 26.72 | 55.58 | 85.19 | 389.17 |
- | HuggingFaceH4/zephyr-7b-beta | 33.15 | 71.65 | 71.27 | 3613.14 |
- | HuggingFaceH4/zephyr-7b-alpha | 33.97 | 71.47 | 73.35 | 4464.45 |
- | internlm/internlm2-chat-7b-sft | 36.97 | 73.22 | 69.96 | 4269.63 |
- | internlm/internlm2-chat-7b | 37.64 | 72.29 | 71.17 | 3892.50 |
- | [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) | 39.28 | 61.89 | **86.00** | 277.92 |
- | mistralai/Mistral-7B-Instruct-v0.2 | 40.29 | 72.58 | 79.39 | 2088.08 |
- | teknium/OpenHermes-2.5-Mistral-7B | 42.64 | 70.63 | 80.25 | 1463.00 |
- | openchat/openchat-3.5-1210 | 44.17 | 71.76 | 82.15 | 1923.83 |
- | speakleash/mistral_7B-v2/spkl-all_sft_v2/e1_base/spkl-all_2e6-e1_70c70cc6 | 45.44 | 71.27 | 91.50 | 279.24 |
- | Nexusflow/Starling-LM-7B-beta | 45.69 | 74.58 | 81.22 | 1161.54 |
- | openchat/openchat-3.5-0106 | 47.32 | 74.71 | 83.60 | 1106.56 |
- | berkeley-nest/Starling-LM-7B-alpha | **47.46** | **75.73** | 82.86 | 1438.04 |
- | | | | | |
- | **Models with different sizes:** | | | | |
- | Azurro/APT3-1B-Instruct-v1 (1B) | -13.80 | 52.11 | 12.23 | 739.09 |
- | Voicelab/trurl-2-13b-academic (13B) | 29.45 | 68.19 | 79.88 | 733.91 |
- | upstage/SOLAR-10.7B-Instruct-v1.0 (10.7B) | 46.07 | 76.93 | 82.86 | 789.58 |
- | | | | | |
- | **7B parameters pretrained and continuously pretrained models:** | | | | |
- | OPI-PG/Qra-7b | 11.13 | 54.40 | 75.25 | 203.36 |
- | meta-llama/Llama-2-7b-hf | 12.73 | 54.02 | 77.92 | 850.45 |
- | internlm/internlm2-base-7b | 20.68 | 52.39 | 69.85 | 3110.92 |
- | [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1) | 29.38 | 62.13 | **88.39** | 123.31 |
- | mistralai/Mistral-7B-v0.1 | 30.67 | 60.35 | 85.39 | 857.32 |
- | internlm/internlm2-7b | 33.03 | 69.39 | 73.63 | 5498.23 |
- | alpindale/Mistral-7B-v0.2-hf | 33.05 | 60.23 | 85.21 | 932.60 |
- | speakleash/mistral-apt3-7B/spi-e0_hf | 35.50 | 62.14 | **87.48** | 132.78 |
-
- SpeakLeash models achieve some of the best scores in the RAG Reader task.
- We have managed to increase the Average score by almost 9 percentage points compared to Mistral-7B-v0.1.
- In our subjective evaluations of chatting skills, SpeakLeash models perform better than other models with higher Average scores.
-
-
- ## Limitations and Biases
-
- Bielik-7B-Instruct-v0.1 is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
- Bielik-7B-Instruct-v0.1 can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-7B-Instruct-v0.1 was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
-
- ## License
-
- Because of an unclear legal situation, we have decided to publish the model under the CC BY NC 4.0 license, which allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met.
-
- ## Citation
- Please cite this model using the following format:
-
- ```
- @misc{Bielik7Bv01,
-     title = {Introducing Bielik-7B-Instruct-v0.1: Instruct Polish Language Model},
-     author = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Kondracki, Sebastian and {SpeakLeash Team} and {Cyfronet Team}},
-     year = {2024},
-     url = {https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1},
-     note = {Accessed: 2024-04-01}, % change this date
-     urldate = {2024-04-01} % change this date
- }
- ```
-
- ## Responsible for training the model
 
- * [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualization, data preparation, process optimization and oversight of training
- * [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
- * [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
- * [Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/)<sup>SpeakLeash</sup> - coordination and preparation of instructions
- * [Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/)<sup>SpeakLeash</sup> - preparation of instructions
- * [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/)<sup>SpeakLeash</sup> - preparation of instructions
- * [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data quality and instruction cleaning
- * [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/)<sup>SpeakLeash</sup> - instruction cleaning
- * [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/)<sup>SpeakLeash</sup> - instruction cleaning
 
- The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center ACK Cyfronet AGH. Individuals who contributed to the creation of the model through their commitment to the open-science SpeakLeash project:
- [Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/),
- [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
- [Paweł Cyrta](https://www.linkedin.com/in/cyrta),
- [Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
- [Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
- [Kamil Nonckiewicz](https://www.linkedin.com/in/kamil-nonckiewicz/),
- [Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
- [Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/),
- [Waldemar Boszko](https://www.linkedin.com/in/waldemarboszko),
- [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/),
- and many other wonderful researchers and enthusiasts of the AI world.
 
- Members of the ACK Cyfronet AGH team:
- [Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/).
 
  ## Contact Us
 
- If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).
 
  - gguf
  inference: false
  ---
 
  <p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
  </p>
 
  # Bielik-7B-Instruct-v0.1-GGUF
 
+ This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1).
 
+ <b><u>WARNING: Remember that quantised models may show reduced response quality and possible hallucinations!</u></b><br>
 
  ### Model description:
 
  * **Developed by:** [SpeakLeash](https://speakleash.org/)
  * **Language:** Polish
  * **Model type:** causal decoder-only
+ * **Quant from:** [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1)
  * **Finetuned from:** [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1)
  * **License:** CC BY NC 4.0 (non-commercial use)
  * **Model ref:** speakleash:e38140bea0d48f1218540800bbc67e89
 
+ ### About GGUF
 
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023.
 
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that ctransformers has not been updated in a long time and does not support many recent models.
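When choosing a quantisation level, a rough size estimate helps decide what fits in memory. The sketch below uses approximate bits-per-weight figures for common llama.cpp quant types; these values and the parameter count are illustrative assumptions, and real files also contain metadata and per-block scales, so actual sizes differ somewhat.

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# Bits-per-weight values are approximate, illustrative figures.
APPROX_BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
    "Q2_K": 2.6,
}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Estimated file size in GB for a model with n_params weights."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9

# Assuming roughly 7.24e9 parameters for a Mistral-style 7B model.
for quant in APPROX_BITS_PER_WEIGHT:
    print(f"{quant}: ~{approx_size_gb(7.24e9, quant):.1f} GB")
```

Lower-bit quants are smaller but, as the warning above notes, trade away response quality.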
 
 
  ## Contact Us
 
+ If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39).