---
library_name: transformers
license: apache-2.0
base_model:
  - ibm-granite/granite-34b-code-base-8k
---

4-bit Bitsandbytes (NF4) quantization of https://huggingface.co/ibm-granite/granite-34b-code-base-8k.

See https://huggingface.co/blog/4bit-transformers-bitsandbytes for background on 4-bit quantization with bitsandbytes. The quantized checkpoint was produced and uploaded with the script below:

```python
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Define the 4-bit NF4 configuration with nested (double) quantization
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load the pre-trained model with the 4-bit quantization configuration
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-34b-code-base-8k",
    quantization_config=nf4_config
)

# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-34b-code-base-8k")

# Push the quantized model and tokenizer to the Hugging Face Hub
# (token=True reads the token saved by `huggingface-cli login`)
model.push_to_hub("onekq-ai/granite-34b-code-base-8k-bnb-4bit", token=True)
tokenizer.push_to_hub("onekq-ai/granite-34b-code-base-8k-bnb-4bit", token=True)
```
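
Once pushed, the quantized checkpoint can be loaded back directly, since the saved config carries the `BitsAndBytesConfig`. Below is a minimal inference sketch, assuming `bitsandbytes` and `accelerate` are installed; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "onekq-ai/granite-34b-code-base-8k-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint's stored quantization config restores the 4-bit weights;
# device_map="auto" (requires accelerate) places layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Granite Code is a base code model, so a code prompt works best
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```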