zhaode committed on
Commit
8b16d30
1 Parent(s): eca9526

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +38 -1
  2. llm.mnn.json +0 -0
README.md CHANGED
# Qwen2.5-Math-7B-Instruct-MNN

## Introduction

This model is a 4-bit quantized version of the MNN model exported from [Qwen2.5-Math-7B-Instruct](https://modelscope.cn/models/qwen/Qwen2.5-Math-7B-Instruct/summary) using [llmexport](https://github.com/alibaba/MNN/tree/master/transformers/llm/export).
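As a rough sense of what 4-bit quantization buys on a 7B-parameter model, here is a back-of-envelope weight-memory estimate (it ignores quantization scales and zero-points, which add a small extra overhead):

```python
# Rough weight-memory estimate for a 7B-parameter model at different precisions.
# Ignores quantization scales/zero-points, which add a small extra overhead.
params = 7_000_000_000

fp16_gib = params * 2 / 1024**3    # 2 bytes per weight
int4_gib = params * 0.5 / 1024**3  # 0.5 bytes (4 bits) per weight

print(f"fp16 weights: ~{fp16_gib:.1f} GiB")   # ~13.0 GiB
print(f"4-bit weights: ~{int4_gib:.1f} GiB")  # ~3.3 GiB
```

The 4x reduction in weight memory is what makes running a 7B model feasible on memory-constrained devices.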

## Download
```bash
# install huggingface_hub
pip install huggingface_hub
```
```bash
# CLI download
huggingface-cli download taobao-mnn/Qwen2.5-Math-7B-Instruct-MNN --local-dir 'path/to/dir'
```
```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/Qwen2.5-Math-7B-Instruct-MNN')
```

```bash
# git clone
git clone https://www.modelscope.cn/taobao-mnn/Qwen2.5-Math-7B-Instruct-MNN
```
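Whichever download path you use, a quick sanity check that the model directory is complete before running `llm_demo` can save a confusing error later. This is a minimal sketch; `'path/to/dir'` is a placeholder for wherever you downloaded or cloned the model:

```python
from pathlib import Path

def model_dir_ready(model_dir: str) -> bool:
    """Return True if the directory contains the config.json that llm_demo expects."""
    return (Path(model_dir) / "config.json").is_file()

# 'path/to/dir' is a placeholder for your actual download location.
print(model_dir_ready("path/to/dir"))
```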

## Usage
```bash
# clone MNN source
git clone https://github.com/alibaba/MNN.git

# compile
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j

# run
./llm_demo /path/to/Qwen2.5-Math-7B-Instruct-MNN/config.json prompt.txt
```
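`llm_demo` takes a prompt file as its second argument. A small sketch that writes one, assuming a one-prompt-per-line format; the math question is only an illustration:

```python
# Write a prompt.txt for llm_demo, assuming one prompt per line.
# The question text is only an illustration.
prompts = ["Find x such that 2x + 3 = 11."]

with open("prompt.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(prompts) + "\n")
```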

## Document
[MNN-LLM](https://mnn-docs.readthedocs.io/en/latest/transformers/llm.html#)
llm.mnn.json CHANGED