---
license: apache-2.0
language:
- pl
library_name: transformers
inference:
  parameters:
    temperature: 0.9
---

# Bielik-11B-v2

Bielik-11B-v2 is a generative text model with 11 billion parameters. It was initialized from Mistral-7B-v0.2 and trained on 400 billion tokens. The model is the result of a unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. It was developed and trained on Polish text corpora carefully selected and processed by the SpeakLeash team, using Polish large-scale computing infrastructure within the PLGrid environment, specifically the HPC center ACK Cyfronet AGH. The creation and training of Bielik-11B-v2 were supported by computational grant number PLG/2024/016951, conducted on the Helios supercomputer, enabling the use of cutting-edge technology and the computational resources essential for large-scale machine learning. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.

⚠️ This is a base model intended for further fine-tuning for most use cases. If you're looking for a model ready for chatting or following instructions out of the box, please use [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

🎥 Demo: https://chat.bielik.ai

🗣️ Chat Arena\*: https://arena.speakleash.org.pl/

\*Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.

## Model

Bielik-11B-v2 was trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) using several parallelization techniques. Training was conducted on the Helios supercomputer at ACK Cyfronet AGH, using 256 NVIDIA GH200 cards.
The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project, as well as a subset of [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We used 200 billion tokens for two epochs of training.

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Initialized from:** [Mistral-7B-v0.2](https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar)
* **License:** Apache 2.0 (commercial use allowed)
* **Model ref:** speakleash:45b6efdb701991181a05968fc53d2a8e

### Quality evaluation

An XGBoost classification model was built to evaluate the quality of texts written in native Polish. It is based on 93 features, such as the ratio of out-of-vocabulary words (OOVs) to all words, the number of nouns and verbs, and the average sentence length. The model assigns each document a quality category (HIGH, MEDIUM, or LOW) along with a probability. This enabled a dedicated document-selection pipeline: we kept only documents rated HIGH with a probability exceeding 90%. This filtering and careful selection of texts provide a condensed, high-quality corpus of Polish texts for training.

### Quickstart

This model can be easily loaded using the AutoModelForCausalLM functionality.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "speakleash/Bielik-11B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

To reduce memory usage, you can load the model in a lower precision (`bfloat16`).
```python
import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```

Then you can use Hugging Face pipelines to generate text:

```python
import transformers

text = "Najważniejszym celem człowieka na ziemi jest"

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(
    text_inputs=text,
    max_new_tokens=100,
    do_sample=True,
    top_k=50,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Generated output:

> Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami.

## Evaluation

Models have been evaluated on two leaderboards: [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) and [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The Open PL LLM Leaderboard uses a 5-shot evaluation and focuses on NLP tasks in Polish, while the Open LLM Leaderboard evaluates models on various English-language tasks.

### Open PL LLM Leaderboard

The benchmark evaluates models on NLP tasks such as sentiment analysis, categorization, and text classification, but does not test chatting skills. The Average column is the mean score across all tasks, normalized by baseline scores.
| Model                  | Parameters (B) | Average   |
|------------------------|----------------|-----------|
| Qwen2-72B              | 72             | 65.76     |
| Meta-Llama-3-70B       | 70             | 60.87     |
| Meta-Llama-3.1-70B     | 70             | 60.39     |
| Mixtral-8x22B-v0.1     | 141            | 59.95     |
| Qwen1.5-72B            | 72             | 59.94     |
| Qwen1.5-32B            | 32             | 57.34     |
| **Bielik-11B-v2**      | **11**         | **56.61** |
| Qwen2-7B               | 7              | 48.75     |
| Mistral-Nemo-Base-2407 | 12             | 46.15     |
| SOLAR-10.7B-v1.0       | 10.7           | 46.04     |
| internlm2-20b          | 20             | 45.98     |
| Meta-Llama-3.1-8B      | 8              | 42.79     |
| Meta-Llama-3-8B        | 8              | 42.40     |
| Mistral-7B-v0.2        | 7              | 37.20     |
| Bielik-7B-v0.1         | 7              | 33.78     |
| Qra-13b                | 13             | 33.71     |
| Qra-7b                 | 7              | 16.09     |

The results from the Open PL LLM Leaderboard show that Bielik-11B-v2, with 11 billion parameters, achieved an average score of 56.61. This makes it the best performing model under 20B parameters, outperforming the second-best model in that category by an impressive 8 percentage points. This significant lead not only places it ahead of its predecessor, Bielik-7B-v0.1 (which scored 33.78), but also demonstrates its superiority over other, larger models.

Other Polish models listed include Qra-13b and Qra-7b, scoring 33.71 and 16.09 respectively, which Bielik-11B-v2 outperforms by a considerable margin. Additionally, Bielik-11B-v2 was initialized from the weights of Mistral-7B-v0.2, which itself scored 37.20, further demonstrating the effective enhancements incorporated into Bielik-11B-v2.

### Open LLM Leaderboard

The Open LLM Leaderboard evaluates models on various English-language tasks, providing insights into the model's performance across different linguistic challenges.
| Model             | AVG       | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu  | winogrande | gsm8k |
|-------------------|-----------|---------------|-----------|----------------|-------|------------|-------|
| **Bielik-11B-v2** | **65.87** | 60.58         | 79.84     | 46.13          | 63.06 | 77.82      | 67.78 |
| Mistral-7B-v0.2   | 60.37     | 60.84         | 83.08     | 63.62          | 41.76 | 78.22      | 34.72 |
| Bielik-7B-v0.1    | 49.98     | 45.22         | 67.92     | 47.16          | 43.20 | 66.85      | 29.49 |

The results from the Open LLM Leaderboard demonstrate the impressive performance of Bielik-11B-v2 across various NLP tasks. With an average score of 65.87, it significantly outperforms its predecessor, Bielik-7B-v0.1, and even surpasses Mistral-7B-v0.2, which served as its initial weight basis.

Key observations:

1. Bielik-11B-v2 shows substantial improvements in most categories compared to Bielik-7B-v0.1, highlighting the effectiveness of the model's enhancements.
2. It performs well in tasks like hellaswag and winogrande (commonsense reasoning) and gsm8k (mathematical problem-solving), indicating its versatility across different types of language understanding and generation tasks.
3. The model shows particular strength in mmlu (massive multitask language understanding), scoring 63.06 compared to Mistral-7B-v0.2's 41.76, demonstrating its broad knowledge base and understanding capabilities.
4. While Mistral-7B-v0.2 outperforms in truthfulqa_mc2, Bielik-11B-v2 maintains competitive performance in this truth-discernment task.

Although Bielik-11B-v2 was primarily trained on Polish data, it has retained and even improved its ability to understand and operate in English, as evidenced by its strong performance across these English-language benchmarks. This suggests that the model has effectively leveraged cross-lingual transfer learning, maintaining its Polish language expertise while enhancing its English language capabilities.
## Limitations and Biases

Bielik-11B-v2 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

Bielik-11B-v2 can produce factually incorrect output and should not be relied on to produce factually accurate information. It was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased, or otherwise offensive outputs.

## License

The model is licensed under Apache 2.0, which allows for commercial use.

## Citation

Please cite this model using the following format:

```
@misc{Bielik11Bv2b,
    title     = {Bielik-11B-v2 model card},
    author    = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}},
    year      = {2024},
    url       = {https://huggingface.co/speakleash/Bielik-11B-v2},
    note      = {Accessed: 2024-08-28},
    urldate   = {2024-08-28}
}

@unpublished{Bielik11Bv2a,
    author    = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof},
    title     = {Bielik: A Family of Large Language Models for the Polish Language – Development, Insights, and Evaluation},
    year      = {2024},
}
```

## Responsible for training the model

* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization, and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks

The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable.
Thanks to the hard work of many individuals, it was possible to gather a large amount of Polish-language content and establish collaboration between the open-science SpeakLeash project and the HPC center ACK Cyfronet AGH.

Individuals who contributed to the creation of the model: [Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/), [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/), [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/), [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/), [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/), [Aleksander Smywiński-Pohl](https://www.linkedin.com/in/apohllo/).

Members of the ACK Cyfronet AGH team providing valuable support and expertise: [Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/), [Marek Magryś](https://www.linkedin.com/in/magrys/).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.com/invite/TunEeCTw).