---
datasets:
- NeelNanda/pile-10k
base_model:
- deepseek-ai/DeepSeek-V3
---
## Model Details

This model is an int4 quantization of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) with group_size 128 and symmetric quantization, generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
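
For context, symmetric quantization with group_size 128 means each group of 128 consecutive weights shares a single scale and is rounded to signed 4-bit integers with no zero point. The snippet below is only an illustrative sketch of that scheme in plain PyTorch (the function name and shapes are made up for illustration; it is not the AutoRound implementation):

~~~python
import torch

def fake_quant_int4_sym(weight: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Illustrative symmetric int4 fake-quantization with one scale per group of weights."""
    out_features, in_features = weight.shape  # assumes in_features is divisible by group_size
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # One scale per group: map the largest magnitude in the group onto the int4 limit (7).
    scale = w.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7)  # signed int4 range [-8, 7]
    return (q * scale).reshape(out_features, in_features)
~~~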

Loading the model in Transformers can be quite slow.

Please follow the license of the original model.
## How to Use

### Requirements

```bash
pip install "auto-round>=0.4.4"
pip install intel-extension-for-transformers
```

**INT4 Inference on CPU**

~~~python
from auto_round import AutoRoundConfig  # must be imported for the auto-round format
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

quantized_model_dir = "OPEA/DeepSeek-V3-int4-sym-awq-inc-cpu"

quantization_config = AutoRoundConfig(
    backend="cpu"
)

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="cpu",
    revision="16eb0b2",##auto-round format, the only difference is config.json
    quantization_config=quantization_config,  ##cpu only machine does not set this

)

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompts = [
    "9.11和9.8哪个数字大",
    "strawberry中有几个r?",
    "How many r in strawberry.",
    "There is a girl who likes adventure,",
    "Please give a brief introduction of DeepSeek company.",
    "hello"

]

texts=[]
for prompt in prompts:
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

outputs = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"].to(model.device),
    max_length=512,
    num_return_sequences=1, 
    do_sample=False
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]

decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
    print("-" * 50)

"""
Prompt: 9.11和9.8哪个数字大
Generated: 要比较 **9.11** 和 **9.8** 的大小,可以按照以下步骤进行:

1. **比较整数部分**
   - 两个数的整数部分都是 **9**,所以整数部分相同。

2. **比较小数部分**
   - **9.11** 的小数部分是 **0.11**
   - **9.8** 的小数部分是 **0.8**

3. **统一小数位数**
   - 将 **0.8** 转换为 **0.80**,以便于比较。

4. **比较小数部分**
   - **0.80** 大于 **0.11**

因此,**9.8** 大于 **9.11**。最终答案:\boxed{9.8}
--------------------------------------------------
Prompt: strawberry中有几个r?
Generated: ### 第一步:理解问题

首先,我需要明确问题的含义。问题是:“strawberry中有几个r?”。这里的“strawberry”是一个英文单词,意思是“草莓”。问题是在问这个单词中有多少个字母“r”。

### 第二步:分解单词

为了找出“strawberry”中有多少个“r”,我需要将这个单词分解成单个字母。让我们逐个字母来看:

s - t - r - a - w - b - e - r - r - y
### 第三步:数“r”的数量

现在,我将逐个检查这些字母,找出“r”的数量。

1. 第一个字母是 **s**,不是“r”。
2. 第二个字母是 **t**,不是“r”。
3. 第三个字母是 **r**,这是一个“r”。
4. 第四个字母是 **a**,不是“r”。
5. 第五个字母是 **w**,不是“r”。
6. 第六个字母是 **b**,不是“r”。
7. 第七个字母是 **e**,不是“r”。
8. 第八个字母是 **r**,这是一个“r”。
9. 第九个字母是 **r**,这也是一个“r”。
10. 第十个字母是 **y**,不是“r”。

### 第四步:总结“r”的数量

通过上述步骤,我发现“strawberry”中有三个“r”。它们分别出现在第三、第八和第九个位置。

### 验证过程

为了确保我的计算正确,我可以再次检查一遍:

- 第三个字母:r
- 第八个字母:r
- 第九个字母:r

确实有三个“r”。
### 最终答案

“strawberry”这个单词中有 **3** 个字母“r”。
--------------------------------------------------
Prompt: How many r in strawberry.
Generated: The word "strawberry" contains **3** instances of the letter "r".
--------------------------------------------------
Prompt: There is a girl who likes adventure,
Generated: That’s wonderful! A girl who loves adventure is likely curious, brave, and eager to explore the world around her. Here are some ideas to fuel her adventurous spirit:

### **Outdoor Adventures**
- **Hiking:** Explore local trails, national parks, or mountains.
- **Camping:** Spend a night under the stars and connect with nature.
- **Rock Climbing:** Challenge herself with bouldering or climbing walls.
- **Kayaking or Canoeing:** Paddle through rivers, lakes, or even the ocean.
- **Zip-lining:** Soar through the treetops for an adrenaline rush.

### **Travel and Exploration**
- **Road Trips:** Plan a journey to new cities or scenic destinations.
- **Backpacking:** Travel light and explore different cultures or landscapes.
- **Volunteer Abroad:** Combine adventure with meaningful work in a new country.

### **Creative and Intellectual Adventures**
- **Geocaching:** A real-world treasure hunt using GPS coordinates.
- **Photography:** Capture the beauty of her adventures through a lens.
- **Learning New Skills:** Try something daring like surfing, scuba diving, or paragliding.
### **Immersive Experiences**
- **Theme Parks:** Enjoy thrilling rides and attractions.
- **Escape Rooms:** Solve puzzles and mysteries in a timed challenge.
- **Wildlife Safaris:** Observe animals in their natural habitats.

### **Books and Inspiration**
- **Adventure Novels:** Read stories about explorers, adventurers, and daring quests.
- **Documentaries:** Watch films about extreme sports, travel, or nature.

### **Personal Challenges**
- **Set Goals:** Create a bucket list of adventures she wants to experience.
- **Push Limits:** Try something outside her comfort zone, like skydiving or bungee jumping.

Encourage her to embrace the unknown, stay curious, and always seek new experiences. Adventure is not just about the destination but the journey and the stories she’ll create along the way! 🌟
--------------------------------------------------
Prompt: Please give a brief introduction of DeepSeek company.
Generated: DeepSeek Artificial Intelligence Co., Ltd. (referred to as "DeepSeek" or "深度求索") , founded in 2023, is a Chinese company dedicated to making AGI a reality.
--------------------------------------------------
Prompt: hello
Generated: Hello! How can I assist you today? 😊
"""
~~~

### Generate the model

**5×80 GB of GPU memory is needed (this could be optimized), plus about 1.4 TB of CPU memory.**

We discovered that the inputs and outputs of certain layers in this model are very large, even exceeding the FP16 range when tested with a few prompts. We recommend excluding these layers from quantization, particularly the `down_proj` modules in layer 60, and running them in BF16 precision instead. However, we have not done this for this int4 model, because on CPU the compute dtype for int4 is BF16 or FP32 anyway.

~~~text
model.layers.60.mlp.experts.150.down_proj tensor(1144.) tensor(2122.9451)
model.layers.60.mlp.experts.231.down_proj tensor(25856.) tensor(12827.9980)
model.layers.60.mlp.shared_experts.down_proj tensor(1880.) tensor(3156.7344)
model.layers.60.mlp.experts.81.down_proj tensor(4416.) tensor(6124.6846)
model.layers.60.mlp.experts.92.down_proj tensor(107520.) tensor(50486.0781)
model.layers.59.mlp.experts.138.down_proj tensor(1568.) tensor(190.8769)
model.layers.60.mlp.experts.81.down_proj tensor(7360.) tensor(10024.4531)
model.layers.60.mlp.experts.92.down_proj tensor(116224.) tensor(55192.4180)

~~~
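
If you do want to keep such layers in higher precision, auto-round exposes a per-layer `layer_config` override in its Python API. The following is only a hedged sketch (it assumes a loaded `model` and `tokenizer` and the `layer_config` behavior described in the auto-round README; the exact module names must match the ones printed above):

~~~python
from auto_round import AutoRound

# Sketch: keep the overflow-prone down_proj modules of layer 60 un-quantized (16-bit)
# while the rest of the model is tuned to int4 with group_size 128.
layer_config = {
    "model.layers.60.mlp.experts.150.down_proj": {"bits": 16},
    "model.layers.60.mlp.experts.231.down_proj": {"bits": 16},
    "model.layers.60.mlp.shared_experts.down_proj": {"bits": 16},
}

autoround = AutoRound(
    model, tokenizer,
    bits=4, group_size=128, sym=True,
    layer_config=layer_config,
)
~~~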

**1. Add metadata to the BF16 model** [opensourcerelease/DeepSeek-V3-bf16](https://huggingface.co/opensourcerelease/DeepSeek-V3-bf16), re-saving each safetensors shard with the `format: pt` metadata that Transformers expects.

~~~python
import safetensors
from safetensors.torch import save_file

# Re-save every shard of the BF16 checkpoint with the 'format: pt' metadata
# so that Transformers can load the safetensors files.
for i in range(1, 164):
    idx_str = str(i).zfill(5)  # e.g. 1 -> "00001"
    safetensors_path = f"model-{idx_str}-of-000163.safetensors"
    print(safetensors_path)
    tensors = {}
    with safetensors.safe_open(safetensors_path, framework="pt") as f:
        for key in f.keys():
            tensors[key] = f.get_tensor(key)
    save_file(tensors, safetensors_path, metadata={"format": "pt"})
~~~



**2. Replace `modeling_deepseek.py` with the following file**, which mainly aligns devices and removes `torch.no_grad`, since AutoRound needs to perform some tuning.

https://github.com/intel/auto-round/blob/deepseekv3/modeling_deepseek.py



**3. Tuning**

```bash
git clone https://github.com/intel/auto-round.git && cd auto-round && git checkout deepseekv3
```

```bash
python3 -m auto_round --model "/models/DeepSeek-V3-bf16/" --group_size 128 --format "auto_awq" --iters 200 --devices 0,1,2,3,4 --nsamples 512 --batch_size 4 --seqlen 2048 --low_gpu_mem_usage --output_dir "tmp_autoround" --disable_eval 2>&1 | tee -a seekv3.txt
```
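
For reference, roughly the same tuning run can be expressed through auto-round's Python API. This is a hedged sketch: it assumes the `AutoRound` class with `quantize()` and `save_quantized()` as described in the auto-round README, and that the keyword arguments mirror the CLI flags above (names may differ slightly between versions):

~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "/models/DeepSeek-V3-bf16/"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Mirror the CLI flags: int4, group_size 128, 200 tuning iterations, 512 calibration samples.
autoround = AutoRound(
    model, tokenizer,
    bits=4, group_size=128, sym=True,
    iters=200, nsamples=512, seqlen=2048, batch_size=4,
    low_gpu_mem_usage=True,
)
autoround.quantize()
autoround.save_quantized("tmp_autoround", format="auto_awq")
~~~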