This version of Qwen2.5-3B-Instruct-GPTQ-Int4 has been converted to run on the Axera NPU using w4a16 quantization.
Compatible with Pulsar2 version 3.4 (not yet released).
If you are interested in model conversion, you can export the axmodel yourself from the original repo: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct-GPTQ-Int4
See the Pulsar2 documentation for how to convert an LLM from Hugging Face to axmodel.
| Chips | w8a16 | w4a16 |
|---|---|---|
| AX650 | 5 tokens/sec | 10 tokens/sec |
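The roughly 2x gap between w8a16 and w4a16 is what a memory-bandwidth-bound decode predicts: each generated token streams the full weight set from memory, so halving the bits per weight halves the bytes moved per token. A back-of-envelope sketch (the parameter count and the bandwidth-bound assumption are ours, not measured AX650 figures):

```python
# Rough decode-throughput model: token generation is dominated by reading
# all weights once per token, so tokens/sec ~ bandwidth / weight_bytes.
params = 3.1e9  # ~3B parameters (assumption)

def weight_bytes(bits_per_weight):
    # Bytes that must be streamed per generated token.
    return params * bits_per_weight / 8

# w8a16 stores weights in 8 bits, w4a16 in 4 bits.
ratio = weight_bytes(8) / weight_bytes(4)
print(f"expected speedup from w8a16 -> w4a16: ~{ratio:.0f}x")  # ~2x, matching 5 -> 10 tokens/sec
```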
Download all files from this repository to the device
root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# tree -L 1
.
├── qwen2.5-3b-gptq-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer.py
├── main_axcl_aarch64
├── main_axcl_x86
├── main_prefill
├── post_config.json
├── run_qwen2.5_3b_gptq_int4_ax650.sh
├── run_qwen2.5_3b_gptq_int4_axcl_aarch64.sh
└── run_qwen2.5_3b_gptq_int4_axcl_x86.sh
Start the tokenizer service first:
root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# python qwen2.5_tokenizer.py --port 12345
None None 151645 <|im_end|>
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
hello world<|im_end|>
<|im_start|>assistant
[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 872, 198, 14990, 1879, 151645, 198, 151644, 77091, 198]
http://localhost:12345
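The prompt printed above is the Qwen2.5 ChatML layout that qwen2.5_tokenizer.py applies before tokenizing. A minimal sketch of that layout (the helper name is ours, not part of the repo; the template is reproduced from the printed prompt):

```python
# Build a Qwen2.5 ChatML prompt: system turn, user turn, then an open
# assistant turn that the model completes.
def build_chatml_prompt(user_msg,
                        system_msg="You are Qwen, created by Alibaba Cloud. "
                                   "You are a helpful assistant."):
    return (f"<|im_start|>system\n{system_msg}<|im_end|>\n"
            f"<|im_start|>user\n{user_msg}<|im_end|>\n"
            f"<|im_start|>assistant\n")

prompt = build_chatml_prompt("hello world")
print(prompt)
```

Feeding this string to the Qwen2.5 tokenizer yields the token-id list shown above, with 151644/151645 as the `<|im_start|>`/`<|im_end|>` special tokens.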
Open another terminal and run run_qwen2.5_3b_gptq_int4_ax650.sh
root@ax650:/mnt/qtang/llm-test/qwen2.5-3b# ./run_qwen2.5_3b_gptq_int4_ax650.sh
[I][ Init][ 125]: LLM init start
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 39 / 39 [19.30s<19.30s, 2.02 count/s] init post axmodel ok,remain_cmm(1811 MB)
[I][ Init][ 241]: max_token_len : 1024
[I][ Init][ 246]: kv_cache_size : 256, kv_cache_num: 1024
[I][ Init][ 254]: prefill_token_num : 128
[I][ load_config][ 281]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 268]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
>> who are you
[I][ Run][ 466]: ttft: 545.11 ms
I am Qwen, an artificial intelligence from Alibaba Cloud. I am here to assist you with any information or tasks you might have. How can I assist you today?
[N][ Run][ 605]: hit eos,avg 9.90 token/s
>> 1+1=?
[I][ Run][ 466]: ttft: 545.63 ms
1+1 equals 2.
[N][ Run][ 605]: hit eos,avg 9.85 token/s
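The two numbers the runner reports, time-to-first-token (ttft) and the average decode rate, are enough to estimate end-to-end response time. A sketch using the AX650 figures above (the 33-token answer length is an illustrative assumption, not a count from the log):

```python
# End-to-end latency ~ prefill (ttft) + decode time for the new tokens.
def response_time_s(ttft_ms, n_new_tokens, tokens_per_s):
    return ttft_ms / 1000.0 + n_new_tokens / tokens_per_s

# e.g. an answer of, say, 33 generated tokens at the measured rates:
t = response_time_s(545.11, 33, 9.90)
print(f"~{t:.1f} s end to end")
```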
What about an M.2 accelerator card? The demo below runs the same model on a Raspberry Pi 5 host, offloading inference to an AX650N-based M.2 card through AXCL.
(base) axera@raspberrypi:~/samples/qwen2.5-3b $ ./run_qwen2.5_3b_gptq_int4_axcl_aarch64.sh
build time: Feb 13 2025 15:44:57
[I][ Init][ 111]: LLM init start
100% | ████████████████████████████████ | 39 / 39 [37.95s<37.95s, 1.03 count/s] init post axmodel ok remain_cmm(5391 MB)
[I][ Init][ 226]: max_token_len : 1024
[I][ Init][ 231]: kv_cache_size : 256, kv_cache_num: 1024
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 288]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
>> who are you
I am Qwen, an artificial intelligence from Alibaba Cloud. I am here to assist you with your questions and help in any way I can. How can I assist you today?
[N][ Run][ 610]: hit eos,avg 8.23 token/s
>> 1+1=?
1+1=2
[N][ Run][ 610]: hit eos,avg 8.72 token/s
>> q
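Both runs load the same post_config.json shown above: temperature scaling plus top-k sampling, with top-p and the repetition penalty disabled. A pure-Python sketch of that decode-time sampling path (illustrative only, not the runtime's actual code):

```python
import math
import random

def sample_top_k(logits, temperature=0.9, top_k=10, rng=random):
    # Keep the k highest logits, renormalize with a softmax at the given
    # temperature, then draw one token id from that distribution.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)  # subtract the max for numerical stability
    probs = [math.exp(s - m) for s in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    return rng.choices(top, weights=probs, k=1)[0]

logits = [0.1, 2.5, -1.0, 3.0, 0.0]
token_id = sample_top_k(logits, temperature=0.9, top_k=2)
print(token_id)  # always 1 or 3: only the two highest logits survive top-k
```

With `enable_top_p_sampling` false and `enable_repetition_penalty` false, those stages are simply skipped; a higher temperature flattens the distribution over the surviving k candidates.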
(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI V2.26.0_20250205130139 Driver V2.26.0_20250205130139 |
+-----------------------------------------+--------------+---------------------------------------+
| Card Name Firmware | Bus-Id | Memory-Usage |
| Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage |
|=========================================+==============+=======================================|
| 0 AX650N V2.26.0 | 0000:01:00.0 | 174 MiB / 945 MiB |
| -- 43C -- / -- | 0% 0% | 1973 MiB / 7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+
+------------------------------------------------------------------------------------------------+
| Processes: |
| Card PID Process Name NPU Memory Usage |
|================================================================================================|
| 0 470413 /home/axera/samples/qwen2.5-3b-gptq-int4/main_axcl_aarch64 1963704 KiB |
+------------------------------------------------------------------------------------------------+
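The per-process figure from axcl-smi is in KiB while the card summary is in MiB; a quick conversion confirms the two readings line up:

```python
# The main_axcl_aarch64 process reports 1963704 KiB of NPU memory;
# the card-level CMM usage reports 1973 MiB. Convert and compare.
proc_kib = 1963704
proc_mib = proc_kib / 1024
print(f"{proc_mib:.0f} MiB")  # 1918 MiB, i.e. most of the 1973 MiB CMM in use
```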
Base model: Qwen/Qwen2.5-3B