---
license: cc-by-sa-4.0
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k

language:
- en
tags:
- causal-lm
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
---
# `Stable Zephyr 3B`

## Model Description

`Stable Zephyr 3B` is a 3 billion parameter instruction-tuned language model inspired by [HuggingFaceH4's Zephyr 7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) training pipeline. It was trained on a mix of publicly available and synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290), and evaluated on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and the [Alpaca Benchmark](https://tatsu-lab.github.io/alpaca_eval/).

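To give a sense of the DPO objective linked above, here is a minimal per-example sketch following the DPO paper, not the training code used for this model: given log-probabilities of the preferred and rejected completions under the policy and a frozen reference model, the loss pushes up the policy's implicit reward margin for the preferred completion.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid of the difference between the
    chosen and rejected implicit rewards, each scaled by beta."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference model, the margin is zero and the loss is `log 2`; raising the policy's log-probability of the chosen completion lowers the loss.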
## Usage

Get started generating text with `Stable Zephyr 3B` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable_zephyr_3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()

prompt = "<|user|>\nIn the field of quantum physics, what is superposition, and how does it relate to the phenomenon of quantum entanglement?<|endoftext|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.7,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

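The prompt string above follows a simple chat format: each turn is wrapped in `<|role|>` ... `<|endoftext|>` markers, and the prompt ends with an open `<|assistant|>` tag for the model to complete. A small helper can assemble it; note that the multi-turn layout here is an assumption inferred from the single-turn prompt above.

```python
def build_prompt(messages):
    """Assemble a chat prompt in the format shown above: each turn becomes
    <|role|>\n{content}<|endoftext|>\n, ending with an open assistant tag."""
    prompt = ""
    for message in messages:
        prompt += f"<|{message['role']}|>\n{message['content']}<|endoftext|>\n"
    return prompt + "<|assistant|>\n"
```

For example, `build_prompt([{"role": "user", "content": "Hi"}])` reproduces the single-turn layout used in the snippet.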
## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Stable Zephyr 3B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Library**: [Alignment Handbook](https://github.com/huggingface/alignment-handbook.git)
* **Finetuned from model**: [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `[email protected]`

### Training Dataset

The dataset comprises a mixture of open, large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets):
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- meta-math/MetaMathQA
- Capybara
- Instruct Code Dataset (Internal)
- Wizard Dataset

### Training Procedure

`Stable Zephyr 3B` was first supervised fine-tuned (SFT) on the datasets above and then aligned with DPO; see Training Infrastructure below for the code bases used in each stage.

## Performance

At the time of release, Zephyr-7B-β was the highest ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks. The table below compares `Stable Zephyr 3B` against it and other chat models:

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------|------|-----------|------------------|-------------------------|
| **Stable Zephyr 3B** 🪁 | 3B | DPO | 6.86 | 75.19 |
| Stable Zephyr (SFT only) | 3B | SFT | 7.12 | 71.15 |
| MPT-Chat | 7B | dSFT | 5.42 | - |
| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
| Zephyr-7b-α | 7B | dDPO | 6.88 | - |
| Zephyr-7b-β | 7B | dDPO | 7.34 | 90.60 |
| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
| Guanaco | 65B | SFT | 6.41 | 71.80 |
| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
| Claude 2 | - | RLHF | 8.06 | 91.36 |
| GPT-4 | - | RLHF | 8.99 | 95.28 |

### Training Infrastructure

* **Hardware**: `Stable Zephyr 3B` was trained on the Stability AI cluster across 8 nodes, each with 8 A100 80GB GPUs.
* **Code Base**: We used our internal scripts for the SFT steps and the [HuggingFace Alignment Handbook script](https://github.com/huggingface/alignment-handbook) for DPO training.

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or otherwise undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, and this can be reflected in model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.