unsubscribe committed on
Commit 5ce4e49
1 parent: 9359cc2

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -1,6 +1,10 @@
 ---
 license: mit
 ---
+<div align="center">
+<img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
+</div>
+
 # INT4 Weight-only Quantization and Deployment (W4A16)
 
 LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Backed by a purpose-built high-performance CUDA kernel, inference with the 4-bit quantized model runs up to 2.4x faster than FP16.
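
For readers landing on this model card: W4A16 weights like the ones described in this README are served through LMDeploy's TurboMind engine. Below is a minimal sketch of loading such an AWQ-quantized model with LMDeploy's Python `pipeline` API; the local path `./model-w4a16` is a hypothetical placeholder for wherever this repository's weights are downloaded.

```python
# Minimal sketch: run inference on an AWQ 4-bit (W4A16) model with LMDeploy.
# Assumption: './model-w4a16' is a hypothetical placeholder for the local
# directory containing the AWQ-quantized weights.
from lmdeploy import pipeline, TurbomindEngineConfig

# model_format='awq' tells the TurboMind engine that the checkpoint stores
# 4-bit AWQ weights, so it dispatches to the W4A16 CUDA kernels.
backend_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline('./model-w4a16', backend_config=backend_config)

# A single prompt returns one Response object; .text holds the completion.
response = pipe('What does INT4 weight-only quantization trade off?')
print(response.text)
```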