VPTQ-community


Disclaimer:

VPTQ-community is an open-source community that reproduces models from the paper VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models (GitHub).

These models are intended only for experimental purposes.

Users are responsible for any consequences arising from their use.

VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models

TL;DR

Vector Post-Training Quantization (VPTQ) is a novel post-training quantization method that leverages vector quantization to achieve high accuracy on LLMs at extremely low bit-widths (<2 bits). VPTQ can compress 70B and even 405B models to 1-2 bits without retraining while maintaining high accuracy.

  • Better accuracy at 1-2 bits
  • Lightweight quantization algorithm: only ~17 hours to quantize the 405B Llama-3.1 model
  • Agile quantization inference: low decode overhead, high throughput, and low time to first token (TTFT)

Example: running Llama 3.1 70B on an RTX 4090 (24 GB) at ~2 bits in real time.

Tech Report

Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low bit-widths (even down to 2 bits). This reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to reach such extremely low bit-widths. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables.
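To make the lookup-table idea concrete, here is a minimal, illustrative NumPy sketch (not the VPTQ algorithm itself; all shapes and names are assumptions for illustration): each group of 8 weights is stored as a single 16-bit index into a shared codebook, so dequantization is just a table lookup at roughly 2 bits per weight.

```python
import numpy as np

# Illustrative vector-quantization sketch (not the actual VPTQ algorithm).
v, k = 8, 65536                                       # vector length, number of centroids
codebook = np.random.randn(k, v).astype(np.float32)   # lookup table of centroids

# A 4096x4096 weight matrix stored as one uint16 index per length-8 vector:
# 16 bits per index / 8 weights per vector = 2 bits per weight (plus codebook).
indices = np.random.randint(0, k, size=4096 * 4096 // v, dtype=np.uint16)

# Dequantization is a table lookup followed by a reshape.
weights = codebook[indices].reshape(4096, 4096)
```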

Read the tech report at Tech Report and the arXiv Paper.

Models from Open Source Community

āš ļø The repository only provides a method of model quantization algorithm.

āš ļø The open-source community VPTQ-community provides models based on the technical report and quantization algorithm.

āš ļø The repository cannot guarantee the performance of those models.

Quick Estimation of Model Bitwidth (Excluding Codebook Overhead):

  • Model Naming Convention: The model's name encodes the vector length $v$, the codebook (lookup table) size, and the residual codebook size. For example, "Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft" is the quantized Meta-Llama-3.1-70B-Instruct model, where:

    • Vector Length: 8
    • Number of Centroids: 65536 (2^16)
    • Number of Residual Centroids: 256 (2^8)
  • Equivalent Bitwidth Calculation:

    • Index: log2(65536) = 16 bits per index, divided by the vector length 8 = 2 bits per weight
    • Residual Index: log2(256) = 8 bits per index, divided by the vector length 8 = 1 bit per weight
    • Total Bitwidth: 2 + 1 = 3 bits per weight
  • Model Size Estimation: 70B parameters * 3 bits / 8 bits per byte = 26.25 GB

  • Note: This estimate does not include the size of the codebook (lookup table), other parameter overhead, or the padding overhead for storing indices. For the detailed calculation method, please refer to Tech Report Appendix C.2. A minimal sketch of this quick estimate follows after this list.
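As a cross-check of the numbers above, here is a minimal sketch of the quick estimate (the function name is ours; codebook and padding overhead are deliberately ignored, as in the estimate):

```python
import math

def estimate_bits_per_weight(vector_length: int, num_centroids: int,
                             num_residual_centroids: int = 0) -> float:
    """Equivalent bits per weight, excluding codebook and padding overhead."""
    bits = math.log2(num_centroids) / vector_length
    if num_residual_centroids:
        bits += math.log2(num_residual_centroids) / vector_length
    return bits

# "v8-k65536-256": vector length 8, 65536 centroids, 256 residual centroids.
bpw = estimate_bits_per_weight(8, 65536, 256)   # (16 + 8) / 8 = 3.0 bits per weight
size_gb = 70e9 * bpw / 8 / 1e9                  # ~26.25 GB for a 70B model
print(f"{bpw:.2f} bits/weight, ~{size_gb:.2f} GB")
```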

| Model Series | Collections | (Estimated) Bits per weight |
| --- | --- | --- |
| Llama 3.3 70B Instruct | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 1.875 bits, 1.625 bits |
| Llama 3.1 Nemotron 70B Instruct HF | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 1.875 bits, 1.625 bits, 1.5 bits |
| Llama 3.1 8B Instruct | HF 🤗 | 4 bits, 3.5 bits, 3 bits, 2.3 bits |
| Llama 3.1 70B Instruct | HF 🤗 | 4 bits, 3 bits, 2.25 bits, 2 bits (1), 2 bits (2), 1.93 bits, 1.875 bits, 1.75 bits |
| Llama 3.1 405B Instruct | HF 🤗 | 4 bits, 3 bits, 2 bits, 1.875 bits, 1.625 bits, 1.5 bits (1), 1.5 bits (2), 1.43 bits, 1.375 bits |
| Mistral Large Instruct 2407 (123B) | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 1.875 bits, 1.75 bits, 1.625 bits, 1.5 bits |
| Qwen 2.5 7B Instruct | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 2 bits (3) |
| Qwen 2.5 14B Instruct | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 2 bits (3) |
| Qwen 2.5 32B Instruct | HF 🤗 | 4 bits, 3 bits, 2 bits (1), 2 bits (2), 2 bits (3) |
| Qwen 2.5 72B Instruct | HF 🤗 | 4 bits, 3 bits, 2.38 bits, 2.25 bits (1), 2.25 bits (2), 2 bits (1), 2 bits (2), 1.94 bits |
| Reproduced from the tech report | HF 🤗 | Results from the open-source community, for reference only; please use them responsibly |
| Hessian and Inverse Hessian Matrices | HF 🤗 | Collected from RedPajama-Data-1T-Sample, following QuIP# |
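For reference, a minimal inference sketch, assuming the vptq Python package from the GitHub repository is installed and exposes the AutoModelForCausalLM wrapper shown in its examples; the model id simply reuses the naming example above, so please verify the exact repository name and API on the Hub before use.

```python
# Sketch only: assumes `pip install vptq` and a CUDA GPU; verify the exact API
# against the VPTQ GitHub README, as it may differ from this illustration.
import transformers
import vptq

# Hypothetical repo id based on the naming example above; confirm on the Hub.
model_id = "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-256-woft"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain vector quantization in one sentence.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```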

A Space Demo

A live chatbot demo built on VPTQ is available at VPTQ-LLM-2bit demo.
