---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v2.2-Instruct
---
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-11B-v2.2-Instruct-MLX-8bit
This model was converted to MLX format (8-bit quantization) from [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).
**DISCLAIMER: Be aware that quantized models show reduced response quality and possible hallucinations!**
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 8-bit MLX model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("speakleash/Bielik-11B-v2.2-Instruct-MLX-8bit")

# Generate a completion for the given prompt
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
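Since this is an instruct-tuned model, prompts are best wrapped in the model's chat template before generation. A minimal sketch, assuming the tokenizer loaded by `mlx_lm` exposes the standard `apply_chat_template` method (the example question is illustrative):
```python
from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-11B-v2.2-Instruct-MLX-8bit")

# Wrap the user message in the model's chat template (assumes the repository
# bundles a chat template, as instruct models on the Hub usually do)
messages = [{"role": "user", "content": "Kim był Mikołaj Kopernik?"}]  # "Who was Nicolaus Copernicus?"
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```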
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quantized from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
### Responsible for model quantization
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualization, calibration data preparation, process design, and delivery of the quantized model.
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [SpeakLeash Discord](https://discord.gg/CPBxPce4).