Update README.md
README.md (CHANGED)
```diff
@@ -14,7 +14,7 @@ license_link: >-
 <div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
 
 <p align="center">
-🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💾 <a href="https://wisemodel.cn/organization/Skywork" target="_blank">Wisemodel</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a> • 📖 <a href="
+🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💾 <a href="https://wisemodel.cn/organization/Skywork" target="_blank">Wisemodel</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a> • 📖 <a href="https://github.com/SkyworkAI/Skywork-MoE/blob/main/skywork-moe-tech-report.pdf" target="_blank">Tech Report</a>
 </p>
 
 <div align="center">
```
```diff
@@ -52,8 +52,8 @@ Skywork-MoE demonstrates comparable or superior performance to models with more
 
 |  | HuggingFace Model | ModelScope Model | Wisemodel Model |
 |:-------:|:-----------:|:-----------------------------:|:-----------------------------:|
-| **Skywork-MoE-base** | 🤗 [Skywork-MoE-base](https://
-
+| **Skywork-MoE-base** | 🤗 [Skywork-MoE-base](https://github.com/SkyworkAI/Skywork-MoE) | 🤖 [Skywork-MoE-base](https://www.modelscope.cn/models/skywork/Skywork-MoE-base) | 💾 [Skywork-MoE-base](https://wisemodel.cn/models/Skywork/Skywork-MoE-base) |
+| **Skywork-MoE-Base-FP8** | 🤗 [Skywork-MoE-Base-FP8](https://github.com/SkyworkAI/Skywork-MoE) | 🤖 | 💾 |
 
 # Benchmark Results
 We evaluated Skywork-MoE-base model on various popular benchmarks, including C-Eval, MMLU, CMMLU, GSM8K, MATH and HumanEval.
```
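For context, loading the base checkpoint listed in the table above with Hugging Face transformers would look roughly like the sketch below. The repo id `Skywork/Skywork-MoE-Base`, the `trust_remote_code=True` flag, and the bf16/`device_map` settings are assumptions for illustration; they are not taken from this diff.

```python
# Minimal sketch (assumptions: repo id "Skywork/Skywork-MoE-Base" and custom modeling
# code needing trust_remote_code=True; neither is stated in this diff).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skywork/Skywork-MoE-Base"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the MoE checkpoint is large; bf16 halves memory vs fp32
    device_map="auto",           # shard the weights across all visible GPUs
    trust_remote_code=True,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
response = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
```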
````diff
@@ -90,6 +90,10 @@ print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
 
 ```
 
+## Chat Model Inference
+
+coming soon...
+
 
 # Demonstration of vLLM Model Inference
 
````
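The vLLM section itself is not changed by this commit and its body is not shown above. For orientation only, a generic offline-generation sketch with stock vLLM is given below; the repo id, `tensor_parallel_size`, and sampling settings are assumptions, and the repository's own instructions may require a specific vLLM build (for example for the FP8 checkpoint) rather than the stock package.

```python
# Generic vLLM sketch (assumptions: repo id "Skywork/Skywork-MoE-Base", 8 GPUs,
# stock vLLM; the repo may instead require a dedicated vLLM fork or version).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Skywork/Skywork-MoE-Base",  # assumed repo id, based on the table above
    trust_remote_code=True,
    tensor_parallel_size=8,            # shard the MoE experts across GPUs
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
prompts = ["The capital of France is"]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```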