Commit 695c17e by munish0838 (parent: 2d8ad9c): Create README.md

README.md (added)
---
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-alpha
  results: []
license: mit
language:
- en
base_model: HuggingFaceH4/zephyr-7b-alpha
---

# Zephyr 7B Alpha-GGUF

- Model creator: [Hugging Face H4](https://huggingface.co/HuggingFaceH4)
- Original model: [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)

## Description

Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-α is the first model in the series: a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).

- **Model type:** A 7B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
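
## Prompt format

Zephyr models are fine-tuned with a chat template that wraps each turn in `<|system|>`, `<|user|>`, and `<|assistant|>` markers, terminated by the `</s>` end-of-sequence token. The sketch below builds such a prompt for use with a GGUF runtime; the helper name is ours, and the exact template for this quantization should be verified against the original model card.

```python
def build_zephyr_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in the Zephyr chat template.

    Each turn is opened by its role marker and closed by </s>; the
    trailing <|assistant|> marker cues the model to generate its reply.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_zephyr_prompt(
    "You are a friendly chatbot.",
    "How do I make a cup of tea?",
)
```

The resulting string can be passed as the prompt to any GGUF-compatible runtime such as llama.cpp or llama-cpp-python (the GGUF filename in this repo is not stated above, so substitute the actual file you download).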