language:
- zh
- en
pipeline_tag: conversational
---
# Zephyr-8x7b: Zephyr Models but Mixtral 8x7B

We present Zephyr-8x7b, a Mixtral 8x7B MoE model trained with SFT only, on a dataset of nearly four million conversations.

It has demonstrated strong contextual understanding, reasoning, and alignment with human values without preference-alignment techniques such as DPO, and we invite you to join our exploration!