---
library_name: transformers
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

# This model has been xMADified!

This repository contains [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) quantized from 16-bit floats to 4-bit integers, using xMAD.ai proprietary technology.

# Why should I use this model?

1. **Accuracy**: This xMADified model is the best quantized version of the `meta-llama/Llama-3.2-3B-Instruct` model: it is on par with the original (fp16) model (see _Table 1_ below).
2. **Memory-efficiency**: This xMADified model (3 GB) needs less than half the memory of the full-precision model (6.5 GB), so you can run it on any laptop GPU.
3. **Fine-tuning**: These models can be fine-tuned on the same reduced (3 GB) hardware in just three clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA).

## Table 1: xMAD vs. Meta

| Model | MMLU | ARC Challenge | ARC Easy | LAMBADA Standard | LAMBADA OpenAI | PIQA | Winogrande | HellaSwag |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [xmadai/Llama-3.2-3B-Instruct-xMADai-INT4](https://huggingface.co/xmadai/Llama-3.2-3B-Instruct-xMADai-INT4) | **58.60** | **39.93** | **72.10** | **53.77** | **62.49** | **74.27** | **63.69** | **51.28** |
| [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) | 60.48 | 43.69 | 74.24 | 57.75 | 66.54 | 75.73 | 67.40 | 52.20 |

# How to Run the Model

Loading the checkpoint of this xMADified model requires less than 3 GiB of VRAM, so it runs efficiently on most laptop GPUs.

**Package prerequisites**: Run the following commands to install the required packages.

```bash
pip install torch==2.4.0
# If you have CUDA version 11.8, install the matching wheel instead:
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate optimum
pip install -vvv --no-build-isolation "git+https://github.com/PanQiWei/AutoGPTQ.git@v0.7.1"
```

**Sample Inference Code**

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/Llama-3.2-3B-Instruct-xMADai-INT4"

# Chat messages in the Llama 3.2 instruct format.
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Render the chat template, tokenize, and move the tensors to the GPU.
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit GPTQ-quantized checkpoint.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

For additional xMADified models, access to fine-tuning, and general questions, please contact us at support@xmad.ai and join our waiting list.
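
**Optional: verifying the memory footprint**

To sanity-check the memory figures above on your own machine, the minimal sketch below reports the peak VRAM consumed while loading the checkpoint. It uses only standard PyTorch memory-stats calls; the exact number will vary slightly with your CUDA and driver versions.

```python
import torch
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/Llama-3.2-3B-Instruct-xMADai-INT4"

# Clear the peak-memory counter before loading anything.
torch.cuda.reset_peak_memory_stats()

# Load the 4-bit checkpoint, then read back the peak allocation.
model = AutoGPTQForCausalLM.from_quantized(model_id, device_map="auto", trust_remote_code=True)
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak VRAM during load: {peak_gib:.2f} GiB")  # expected to stay under ~3 GiB
```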
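
**Optional: streaming output**

For interactive use, you can print tokens as they are generated rather than waiting for the full completion. This is a small variation on the sample inference code above, assuming `tokenizer`, `model`, and `inputs` are already set up as shown there; it uses the standard `transformers.TextStreamer`, which `generate` accepts via its `streamer` argument.

```python
from transformers import TextStreamer

# Stream tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256, streamer=streamer)
```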