meta-llama/Llama-3.2-1B (Quantized)

Description

This model is a quantized version of the original model meta-llama/Llama-3.2-1B. It was quantized using bitsandbytes.

Quantization Details

  • Quantization Parameters: BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="fp4", bnb_4bit_compute_dtype="bfloat16")
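
For reference, here is a minimal sketch of how such a checkpoint can be produced from the base model with this configuration. This is an assumption about the quantization procedure, not the exact script used; note that access to meta-llama/Llama-3.2-1B is gated on the Hub.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Configuration from above: 4-bit FP4 weight quantization, bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype="bfloat16",
)

# Loading the base model with this config quantizes the weights on the fly
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    quantization_config=bnb_config,
)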

Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub.

To run inference with Llama-3.2-1B-BNB-FP4-BF16, torch and bitsandbytes need to be installed:

pip install torch bitsandbytes --upgrade

Then, preferably the latest version of transformers needs to be installed:

pip install transformers[accelerate] --upgrade

The model can then be instantiated like any other causal language model via AutoModelForCausalLM, and inference runs as usual:

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("HF-Quantization/Llama-3.2-1B-BNB-FP4-BF16")
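
For example, a short generation run might look like the following sketch (the prompt is illustrative; device_map="auto" requires the accelerate package installed above):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HF-Quantization/Llama-3.2-1B-BNB-FP4-BF16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))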