Text Generation · Transformers · Safetensors · English · doge · conversational · custom_code
JingzeShi committed (verified) · Commit 73d29c8 · 1 parent: d23c561

Update README.md

Files changed (1): README.md (+3, −3)
README.md CHANGED
@@ -19,18 +19,18 @@ pipeline_tag: text-generation
    <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
      <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
    </a>
-   <a href="https://github.com/SamllDoge/small-doge" target="_blank" style="margin: 2px;">
+   <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
      <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
    </a>
    <a href="https://huggingface.co/SmallDoge" target="_blank" style="margin: 2px;">
      <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-SmallDoge-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
    </a>
-   <a href="https://github.com/SamllDoge/small-doge/blob/main/LICENSE" style="margin: 2px;">
+   <a href="https://github.com/SmallDoges/small-doge/blob/main/LICENSE" style="margin: 2px;">
      <img alt="License" src="https://img.shields.io/badge/License-Apache--2.0-blue.svg" style="display: inline-block; vertical-align: middle;"/>
    </a>
  </div>
 
- Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by [SmallDoge](https://huggingface.co/SmallDoge) community, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), all training details and code are publicly available on the [small-doge](https://github.com/SamllDoge/small-doge) repository.
+ Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by [SmallDoge](https://huggingface.co/SmallDoge) community, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), all training details and code are publicly available on the [small-doge](https://github.com/SmallDoges/small-doge) repository.
 
 
  ## Uses
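
Since this commit only corrects the repository links, usage of the model itself is unchanged. As a minimal sketch of loading a Doge checkpoint with `transformers` (the checkpoint name `SmallDoge/Doge-20M-Instruct` is an assumed placeholder, not taken from this commit; `trust_remote_code=True` is needed because the model card carries the `custom_code` tag):

```python
# Minimal sketch: loading and sampling from a Doge checkpoint with transformers.
# The checkpoint name below is a hypothetical placeholder; substitute the
# repository this README actually belongs to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M-Instruct"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Plain text generation; the model runs its custom Dynamic Mask Attention code.
inputs = tokenizer("Hello, Doge!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the conversational variants, wrapping the prompt with `tokenizer.apply_chat_template` before calling `generate` would follow the same pattern.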