Qwen2.5-7B-Instruct-GPTQ-Int4

This version of Qwen2.5-7B-Instruct-GPTQ-Int4 has been converted to run on the Axera NPU using w4a16 quantization.
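
To make "w4a16" concrete: weights are stored as 4-bit integers with group-wise scales and zero points (as in GPTQ), while activations stay in 16-bit floating point. The NumPy sketch below illustrates the dequantization arithmetic only; the function name, shapes, and group size are illustrative assumptions, not the NPU kernel:

```python
import numpy as np

def dequant_w4a16(q, scale, zero, group_size=128):
    """Illustrative w4a16 dequantization (hypothetical helper, not the NPU kernel).

    q:     [out, in]                 int4 codes stored as uint8 (0..15)
    scale: [out, in // group_size]   fp16 per-group scales
    zero:  [out, in // group_size]   per-group zero points
    """
    out_f, in_f = q.shape
    q = q.reshape(out_f, in_f // group_size, group_size).astype(np.float16)
    s = scale[:, :, None].astype(np.float16)
    z = zero[:, :, None].astype(np.float16)
    return ((q - z) * s).reshape(out_f, in_f)

# Activations remain fp16; the matmul runs in fp16 after (typically fused) dequant:
# y = x @ dequant_w4a16(q, scale, zero).T
```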

Compatible with Pulsar2 version: 3.4 (not yet released)

Conversion tool links:

If you are interested in model conversion, you can try exporting the axmodel yourself, starting from the original repo: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4

Pulsar2 documentation: How to Convert LLM from Huggingface to axmodel

AXera NPU LLM Runtime

Support Platform

| Chips | w8a16          | w4a16          |
|-------|----------------|----------------|
| AX650 | 2.6 tokens/sec | 4.8 tokens/sec |

How to use

Download all files from this repository to the device; a scripted alternative using huggingface_hub is sketched after the listing below.

root@ax650:/mnt/qtang/llm-test/qwen2.5-7b# tree -L 1
.
├── qwen2.5-7b-gptq-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer.py
├── main_axcl_aarch64
├── main_axcl_x86
├── main_prefill
├── post_config.json
├── run_qwen2.5_7b_gptq_int4_ax650.sh
├── run_qwen2.5_7b_gptq_int4_axcl_aarch64.sh
└── run_qwen2.5_7b_gptq_int4_axcl_x86.sh
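
If the device (or a host machine) has network access, the same files can be fetched programmatically; a minimal sketch, where the local_dir value is only an assumption matching the session paths above:

```python
from huggingface_hub import snapshot_download

# Download every file in this repository into ./qwen2.5-7b
snapshot_download(
    repo_id="AXERA-TECH/Qwen2.5-7B-Instruct-GPTQ-Int4",
    local_dir="qwen2.5-7b",
)
```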

Start the Tokenizer service

root@ax650:/mnt/qtang/llm-test/qwen2.5-7b# python qwen2.5_tokenizer.py --port 12345
None None 151645 <|im_end|>
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
hello world<|im_end|>
<|im_start|>assistant

[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 872, 198, 14990,
http://localhost:12345
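
The banner above shows the rendered Qwen2.5 chat template and its token ids; 151645 is <|im_end|>, matching the eos_id the runtime reports below. A minimal sketch to reproduce it with transformers, assuming the bundled qwen2.5_tokenizer directory is a standard Hugging Face tokenizer (the official Qwen/Qwen2.5-7B-Instruct tokenizer should give the same result):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("qwen2.5_tokenizer")  # or "Qwen/Qwen2.5-7B-Instruct"

messages = [
    {"role": "system",
     "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "hello world"},
]

# Render the chat template with the trailing assistant header, as in the log above.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
ids = tok.apply_chat_template(messages, add_generation_prompt=True)

print(prompt)
print(ids)               # starts with [151644, 8948, 198, ...]
print(tok.eos_token_id)  # 151645 == <|im_end|>
```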

Inference with AX650 Host, such as M4N-Dock (爱芯派Pro) or AX650N DEMO Board

Open another terminal and run run_qwen2.5_7b_gptq_int4_ax650.sh

root@ax650:/mnt/qtang/llm-test/qwen2.5-7b# ./run_qwen2.5_7b_gptq_int4_ax650.sh
[I][                            Init][ 125]: LLM init start
bos_id: -1, eos_id: 151645
  3% | ██                               |   1 /  31 [0.00s<0.09s, 333.33 count/s] tokenizer init ok
100% | ████████████████████████████████ |  31 /  31 [45.25s<45.25s, 0.69 count/s] init post axmodel ok, remain_cmm(7664 MB)
[I][                            Init][ 246]: kv_cache_size : 512, kv_cache_num: 1024
[I][                            Init][ 254]: prefill_token_num : 128
[I][                     load_config][ 281]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][                            Init][ 268]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
>> 1+1=?
[I][                             Run][ 466]: ttft: 1138.88 ms
1+1 equals 2.
[N][                             Run][ 605]: hit eos,avg 4.65 token/s

>> who are you
[I][                             Run][ 466]: ttft: 1137.90 ms
I'm Qwen, a large language model created by Alibaba Cloud. How can I assist you today?
[N][                             Run][ 605]: hit eos,avg 4.52 token/s
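
The post_config.json printed at init enables temperature and top-k sampling (temperature 0.9, top_k 10) while top-p sampling and the repetition penalty are disabled. The sketch below only illustrates what those two active fields mean; it is not the runtime's actual implementation:

```python
import numpy as np

def sample_next_token(logits, top_k=10, temperature=0.9, rng=None):
    """Temperature + top-k sampling as configured in post_config.json (illustrative)."""
    rng = rng or np.random.default_rng()
    # enable_top_k_sampling / top_k: keep only the top_k highest-scoring tokens.
    top_idx = np.argpartition(logits, -top_k)[-top_k:]
    # enable_temperature / temperature: rescale logits before the softmax.
    scaled = logits[top_idx] / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(top_idx, p=probs))
```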

Inference with M.2 Accelerator card

What is the M.2 Accelerator card? This demo is shown running on a Raspberry Pi 5.

(base) axera@raspberrypi:~/samples/qwen2.5-7b $ ./run_qwen2.5_7b_gptq_int4_axcl_aarch64.sh
build time: Feb 13 2025 15:15:07
[I][                            Init][ 111]: LLM init start
bos_id: -1, eos_id: 151645
100% | ████████████████████████████████ |  31 /  31 [67.43s<67.43s, 0.46 count/s] init post axmodel ok, remain_cmm(2739 MB)
[I][                            Init][ 226]: max_token_len : 1024
[I][                            Init][ 231]: kv_cache_size : 512, kv_cache_num: 1024
[I][                     load_config][ 282]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][                            Init][ 288]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
>> who are you
I am Qwen, a large language model created by Alibaba Cloud. I'm here to help you with any questions or tasks you might have!
[N][                             Run][ 610]: hit eos,avg 4.33 token/s

>> 1+1=?
1+1 equals 2.
[N][                             Run][ 610]: hit eos,avg 4.54 token/s

>> q

(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI  V2.26.0_20250206225448                                Driver  V2.26.0_20250206225448 |
+-----------------------------------------+--------------+---------------------------------------+
| Card  Name                     Firmware | Bus-Id       |                          Memory-Usage |
| Fan   Temp                Pwr:Usage/Cap | CPU      NPU |                             CMM-Usage |
|=========================================+==============+=======================================|
+-----------------------------------------+--------------+---------------------------------------+
|    0  AX650N                    V2.26.0 | 0000:05:00.0 |                175 MiB /      945 MiB |
|   --   61C                      -- / -- | 0%        0% |               4301 MiB /     7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+

+------------------------------------------------------------------------------------------------+
| Processes:                                                                                     |
| Card      PID  Process Name                                                   NPU Memory Usage |
|================================================================================================|
|    0    63118  /home/axera/samples/qwen2.5-7b-gptq-int4/main_axcl_aarch64          4316448 KiB |
+------------------------------------------------------------------------------------------------+