bowenbaoamd and haoyang-amd committed on
Commit b8ef26e · verified · 1 Parent(s): 34d614e

Create README.md (#2)


- Create README.md (cc217011b4aebdd649b97d814c206759819f45a9)


Co-authored-by: haoyanli <[email protected]>

Files changed (1)
  1. README.md +88 -0
README.md ADDED
@@ -0,0 +1,88 @@
---
base_model:
- mistralai/Mistral-7B-v0.1
license: apache-2.0
---

# Mistral-7B-v0.1-FP8-KV
- ## Introduction
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
- ## Quantization Strategy
- ***Quantized Layers***: All linear layers excluding "lm_head"
- ***Weight***: FP8 symmetric per-tensor
- ***Activation***: FP8 symmetric per-tensor
- ***KV Cache***: FP8 symmetric per-tensor (see the sketch below)
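To make the scheme concrete, the snippet below is a minimal, illustrative sketch of per-tensor symmetric FP8 (E4M3) quantization in PyTorch. It is not Quark's implementation: the function names and the use of `torch.float8_e4m3fn` are assumptions for illustration only, and in the actual flow the activation and KV-cache scales come from calibration on the Pile samples rather than from the tensor being quantized.

```python
import torch

def quantize_fp8_per_tensor(x: torch.Tensor):
    # One scale for the whole tensor (per-tensor), zero-point fixed at 0 (symmetric).
    fp8_max = torch.finfo(torch.float8_e4m3fn).max            # 448.0 for E4M3
    scale = x.abs().max().clamp(min=1e-12) / fp8_max          # map the max magnitude onto the FP8 range
    x_fp8 = (x / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximation of the original values (as used in pseudo-quantization).
    return x_fp8.to(torch.float32) * scale

w = torch.randn(4096, 4096)
w_fp8, s = quantize_fp8_per_tensor(w)
print("max abs error:", (w - dequantize_fp8(w_fp8, s)).abs().max().item())
```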
- ## Quick Start
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
2. Run the quantization script in the example folder using the following command line:
```sh
export MODEL_DIR=[local model checkpoint folder]  # or mistralai/Mistral-7B-v0.1
# single GPU
python3 quantize_quark.py \
        --model_dir $MODEL_DIR \
        --output_dir Mistral-7B-v0.1-FP8-KV \
        --quant_scheme w_fp8_a_fp8 \
        --kv_cache_dtype fp8 \
        --num_calib_data 128 \
        --model_export quark_safetensors \
        --no_weight_matrix_merge \
        --custom_mode fp8

# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
        --model_dir $MODEL_DIR \
        --output_dir Mistral-7B-v0.1-FP8-KV \
        --quant_scheme w_fp8_a_fp8 \
        --kv_cache_dtype fp8 \
        --num_calib_data 128 \
        --model_export quark_safetensors \
        --no_weight_matrix_merge \
        --custom_mode fp8 \
        --multi_gpu
```
## Deployment
Quark has its own export format that is vLLM-compatible, allowing FP8-quantized models to be deployed efficiently with the vLLM backend.

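As a rough illustration only, serving the exported checkpoint could look like the sketch below. The model path and options are assumptions; the exact flags depend on your vLLM version and on how it consumes the quark_safetensors export, so check the Quark and vLLM documentation before relying on them.

```python
from vllm import LLM, SamplingParams

# Hypothetical local path to the quark_safetensors export produced above.
llm = LLM(
    model="Mistral-7B-v0.1-FP8-KV",
    kv_cache_dtype="fp8",   # run the KV cache in FP8 as well
)
out = llm.generate(["Explain FP8 quantization in one sentence."],
                   SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```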
## Evaluation
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
The quantization evaluation results are obtained in pseudo-quantization mode, which may differ slightly from the actual quantized inference accuracy. These results are provided for reference only.

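For orientation, the snippet below is a generic wikitext2 perplexity loop of the kind commonly used for such comparisons; the authoritative procedure is the one in quantize_quark.py, and the dataset name, window length, and model path here are assumptions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # or a local checkpoint folder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Concatenate the wikitext2 test split and score it in fixed-length windows.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids
seqlen, nlls = 2048, []
for i in range(0, ids.shape[1] - seqlen + 1, seqlen):
    chunk = ids[:, i : i + seqlen].to(model.device)
    with torch.no_grad():
        nlls.append(model(chunk, labels=chunk).loss.float())
print("Perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```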
#### Evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Mistral-7B-v0.1</strong></td>
    <td><strong>Mistral-7B-v0.1-FP8-KV (this model)</strong></td>
  </tr>
  <tr>
    <td>Perplexity-wikitext2</td>
    <td>5.2526</td>
    <td>5.2812</td>
  </tr>
</table>


#### License
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.