---
license: mit
pipeline_tag: image-text-to-text
---
<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>
# INT4 Weight-only Quantization and Deployment (W4A16)
LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. With its high-performance CUDA kernels, inference with the 4-bit quantized model is up to 2.4x faster than FP16.
LMDeploy supports the following NVIDIA GPU for W4A16 inference:
- Turing(sm75): 20 series, T4
- Ampere(sm80,sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace(sm89): 40 series
Before proceeding with the quantization and inference, please ensure that lmdeploy is installed.
```shell
pip install lmdeploy[all]
```
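If you want to confirm that the installation succeeded before moving on, a minimal sanity check is to import the package and print its version (the exact version string will depend on your environment):
```python
# Sanity check: confirm that lmdeploy is importable and report its version.
import lmdeploy
print(lmdeploy.__version__)
```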
This article comprises the following sections:
<!-- toc -->
- [Inference](#inference)
- [Evaluation](#evaluation)
- [Service](#service)
<!-- tocstop -->
## Inference
For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file, `chat_template.json` (the `meta_instruction` field holds the model's Chinese system prompt):
```json
{
"model_name":"internlm2",
"meta_instruction":"我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
"stop_words":["<|im_start|>", "<|im_end|>"]
}
```
With the following code, you can perform batched offline inference with the quantized model:
```python
from lmdeploy import pipeline
from lmdeploy.model import ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
# Load the chat template configured above
chat_template_config = ChatTemplateConfig.from_json('chat_template.json')
# load_image accepts either a local path or a URL
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, chat_template_config=chat_template_config, log_level='INFO')
# A (prompt, image) tuple runs single-image inference
response = pipe(('describe this image', image))
print(response)
```
For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
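For example, a minimal sketch of passing backend and generation options to the pipeline might look like the following (the `TurbomindEngineConfig` and `GenerationConfig` values here are illustrative placeholders, not recommended settings):
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
from lmdeploy.model import ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B-AWQ'
# Tell the TurboMind backend that the weights are AWQ-quantized and limit the
# KV cache to half of the free GPU memory (illustrative value).
backend_config = TurbomindEngineConfig(model_format='awq', cache_max_entry_count=0.5)
# Sampling parameters for generation (illustrative values).
gen_config = GenerationConfig(max_new_tokens=512, temperature=0.8, top_p=0.95)

pipe = pipeline(model,
                backend_config=backend_config,
                chat_template_config=ChatTemplateConfig.from_json('chat_template.json'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image), gen_config=gen_config)
print(response.text)
```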
## Evaluation
Please refer to [this guide](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_turbomind.html) for model evaluation with LMDeploy.
## Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:
```shell
lmdeploy serve api_server OpenGVLab/InternVL2-2B-AWQ --backend turbomind --model-format awq --chat-template chat_template.json
```
The default port of `api_server` is `23333`. After the server is launched, you can chat with the server in the terminal through `api_client`:
```shell
lmdeploy serve api_client http://0.0.0.0:23333
```
You can explore and try out the `api_server` APIs online through the Swagger UI at `http://0.0.0.0:23333`, or read the API specification from [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
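Since the RESTful APIs are OpenAI-compatible, you can also call the service with the official `openai` Python client. The sketch below is only an illustration under assumptions (the image URL and sampling values are placeholders, and the served model name is queried rather than hard-coded):
```python
from openai import OpenAI

# The api_key is required by the client but is not validated by the local server.
client = OpenAI(api_key='none', base_url='http://0.0.0.0:23333/v1')
# Ask the server which model it is serving instead of hard-coding the name.
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
    top_p=0.95)
print(response.choices[0].message.content)
```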