jon-tow committed on
Commit
4544701
1 Parent(s): e872bb1

fix: remove stale OpenVino mention

Files changed (1)
  1. README.md +1 -6
README.md CHANGED
```diff
@@ -23,8 +23,7 @@ license: other
 
 ## Model Description
 
-`StableLM 2 Zephyr 1.6B` is a 1.6 billion parameter instruction tuned inspired by [Stablelm Zephyr 3B](https://huggingface.co/stabilityai/stablelm-zephyr-3b) training pipeline this model was trained on a mix of publicly available datasets, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290), evaluation for this model based on
-[MT Bench](https://huggingface.co/spaces/lmsys/mt-bench).
+`Stable LM 2 Zephyr 1.6B` is a 1.6 billion parameter instruction tuned language model inspired by [HuggingFaceH4's Zephyr 7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) training pipeline. The model is trained on a mix of publicly available datasets and synthetic datasets, utilizing [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We assess model performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and the [Alpaca Benchmark](https://tatsu-lab.github.io/alpaca_eval/).
 
 ## Usage
 
@@ -65,11 +64,8 @@ tokens = model.generate(
 print(tokenizer.decode(tokens[0], skip_special_tokens=False))
 ```
 
-You can also see how to run a performance optimized version of this model [here](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/273-stable-zephyr-3b-chatbot/273-stable-zephyr-3b-chatbot.ipynb) using [OpenVINO](https://docs.openvino.ai/2023.2/home.html) from Intel.
-
 ## Model Details
 
-
 * **Developed by**: [Stability AI](https://stability.ai/)
 * **Model type**: `StableLM 2 Zephyr 1.6B` model is an auto-regressive language model based on the transformer decoder architecture.
 * **Language(s)**: English
@@ -109,7 +105,6 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 | phi-2 | 2.7B | 4.29 |
 | TinyLlama-1.1B-Chat-v1.0| 1.1B | 3.46 |
 
-
 ### OpenLLM Leaderboard
 
 | Model | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | Gsm8k (acc) |
```
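The updated model description credits Direct Preference Optimization (DPO) for the preference-tuning stage. As a rough, self-contained illustration of the objective it cites (a sketch of the published DPO loss, not Stability AI's actual training code; all names here are hypothetical), the per-example loss can be written in plain Python:

```python
import math

def dpo_loss(logp_w_policy, logp_w_ref, logp_l_policy, logp_l_ref, beta=0.1):
    """Per-example DPO loss (Rafailov et al., 2023) from summed token
    log-probabilities of the chosen (w) and rejected (l) responses under
    the policy and a frozen reference model. beta scales the implicit
    KL-style regularization toward the reference.
    """
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # Loss is -log(sigmoid(margin)); log1p(exp(-margin)) is the stable
    # softplus form for the margins seen in practice.
    return math.log1p(math.exp(-margin))

# If the policy prefers the chosen response more than the reference does,
# the margin is positive and the loss drops below log(2) (the zero-margin value).
print(dpo_loss(-10.0, -12.0, -15.0, -13.0) < math.log(2.0))
```

Minimizing this pushes the policy to assign relatively more probability to preferred responses than the reference model does, without an explicit reward model.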