francislabounty committed
Commit 66d34a3
Parent(s): 2847e85
Update README.md

README.md CHANGED
@@ -70,7 +70,9 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
 
 ## Other Information
 Paper reference: [Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731)
+
 [Original Paper repo](https://github.com/wuhy68/Parameter-Efficient-MoE)
+
 [Forked repo with mistral support (sparsetral)](https://github.com/serp-ai/Parameter-Efficient-MoE)
 
 If you are interested in faster inferencing, check out our [fork of vLLM](https://github.com/serp-ai/vllm) that adds sparsetral support