---
license: mit
datasets:
- vicgalle/alpaca-gpt4
- BelleGroup/train_1M_CN
- stingning/ultrachat
- HuggingFaceH4/no_robots
- Open-Orca/OpenOrca
language:
- zh
- en
pipeline_tag: conversational
tags:
- Mistral
---
|
# Zephyr-8x7b: Zephyr Models, but with Mixtral 8x7B
|
|
|
We present Zephyr-8x7b, a Mixtral 8x7B MoE model trained with supervised fine-tuning (SFT) only, on a dataset of nearly four million conversations.
|
|
|
It demonstrates strong contextual understanding, reasoning, and alignment with human values without preference-alignment techniques such as DPO, and we invite you to join our exploration!
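
## Quick start

A minimal sketch for chatting with the model via 🤗 Transformers. The repo id below is a placeholder (substitute this repository's actual id), and it assumes a chat template is defined in the tokenizer config, as is typical for Zephyr-style SFT models:

```python
# Hypothetical quick-start sketch; "your-org/zephyr-8x7b" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/zephyr-8x7b"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mixtral 8x7B is large; bf16 + device_map helps it fit
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts in one paragraph."},
]
# Assumes the tokenizer ships a chat template for the model's conversation format.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```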