swordfaith committed on
Commit
e00ba39
1 Parent(s): f62f088

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -1,5 +1,7 @@
 # Introduction
 
+[OpenBMB Technical Blog Series](https://openbmb.vercel.app/)
+
 The MiniCPM-MoE-8x2B is a decoder-only transformer-based generative language model.
 
 The MiniCPM-MoE-8x2B adopts a Mixture-of-Experts (MoE) architecture with 8 experts per layer, 2 of which are activated for each token.
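The "8 experts per layer, 2 activated per token" description corresponds to standard top-2 gating: a router scores all 8 experts for each token, and only the 2 highest-scoring expert feed-forward networks run on that token. Below is a minimal PyTorch sketch of such a layer; the class name, hidden sizes, and SiLU feed-forward shape are illustrative placeholders, not the actual MiniCPM-MoE-8x2B implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Illustrative top-2 MoE feed-forward layer (placeholder dimensions)."""

    def __init__(self, d_model: int = 1024, d_ff: int = 4096,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router produces one logit per expert for every token.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        logits = self.router(x)                             # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # pick 2 of 8 experts per token
        weights = F.softmax(weights, dim=-1)                # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Usage: route a batch of 16 token embeddings through the layer.
layer = Top2MoELayer()
tokens = torch.randn(16, 1024)
print(layer(tokens).shape)  # torch.Size([16, 1024])
```

Because only the 2 selected experts execute per token, such a layer carries 8 experts' worth of parameters at roughly 2 experts' worth of per-token compute, which is the usual motivation for this design.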