---
base_model: bigscience/bloom-3b
inference: false
model_creator: bigscience
model_name: bloom-3b
model_type: bloom
pipeline_tag: text-generation
quantized_by: iproskurina
tags:
- pretrained
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
datasets:
- c4
---

# 🌸 BLOOM 3B - GPTQ

- Model creator: [BigScience](https://huggingface.co/bigscience)
- Original model: [BLOOM 3B](https://huggingface.co/bigscience/bloom-3b)

The model published in this repo was quantized to 4-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

**Quantization details**

**All quantization parameters were taken from the [GPTQ paper](https://arxiv.org/abs/2210.17323).**

GPTQ calibration data consisted of 128 random 2048-token segments from the [C4 dataset](https://huggingface.co/datasets/c4).

The group size used for quantization is 128.
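
The snippet below is a minimal sketch of how a quantization with these settings could be reproduced with AutoGPTQ. Only `bits=4` and `group_size=128` come from this card; the C4 `en` split, the segment-sampling loop, and the output directory name are illustrative assumptions, not necessarily the exact script used for this repo.

```python
import random
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "bigscience/bloom-3b"
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

# Assumption: sample 128 random 2048-token segments from the C4 "en" split.
data = load_dataset("allenai/c4", "en", split="train", streaming=True)
examples = []
for sample in data:
    ids = tokenizer(sample["text"], return_tensors="pt").input_ids
    if ids.shape[1] < 2048:
        continue  # skip documents too short to yield a full segment
    start = random.randint(0, ids.shape[1] - 2048)
    segment = ids[:, start:start + 2048]
    examples.append({"input_ids": segment, "attention_mask": torch.ones_like(segment)})
    if len(examples) == 128:
        break

# bits and group_size are the values stated above; other fields are AutoGPTQ defaults.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("bloom-3b-gptq-4bit")
```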

## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise, if you have problems with the pre-built wheels, try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM

pretrained_model_dir = "iproskurina/bloom-3b-gptq-4bit"

# Load the tokenizer and the 4-bit quantized model onto the first GPU.
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(pretrained_model_dir, device="cuda:0", model_basename="model")

# Generate text with a standard Transformers pipeline.
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("auto-gptq is")[0]["generated_text"])
```
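
Since Transformers 4.33 includes native GPTQ support via Optimum, the checkpoint can also be loaded directly with `AutoModelForCausalLM`, with no AutoGPTQ-specific code at inference time. A minimal sketch (the `max_new_tokens` value is an arbitrary choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/bloom-3b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Transformers detects the GPTQ quantization config stored in the repo
# and loads the 4-bit weights automatically (auto-gptq must be installed).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("auto-gptq is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```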