pharaouk committed
Commit 858d90f
Parent: a74210b

Update README.md

Files changed (1): README.md +2 -1
README.md CHANGED
@@ -3,13 +3,14 @@ license: other
 license_name: microsoft-research-license
 license_link: LICENSE
 ---
+# **Microsoft Phi-2** (FP32)
+
 
 (THIS IS MICROSOFT'S ORIGINAL MODEL, UPLOADED HERE ONLY FOR RESEARCH PURPOSES AND ACCESSIBILITY, AS AZURE AI STUDIO IS NOT CONVENIENT FOR RESEARCH. RESEARCH ONLY. RESEARCH. RESEARCH. PLEASE DON'T SUE US, MSFT; THIS IS 100% FOR RESEARCH.)
 
 
 **Here is Microsoft's official Phi-2 repo:** https://huggingface.co/microsoft/phi-2
 
-Microsoft Phi-2
 Phi-2 is a language model with 2.7 billion parameters. It was trained on the same data sources as phi-1, augmented with a new data source consisting of various synthetic NLP texts and websites filtered for safety and educational value. When assessed on benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 10 billion parameters.
 
 Our model has not been fine-tuned through reinforcement learning from human feedback. The intention behind this open-source release is to provide the research community with an unrestricted small model for exploring vital safety challenges, such as reducing toxicity, understanding societal biases, and enhancing controllability.
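
Since the point of this mirror is easy access to the full-precision weights, a minimal loading sketch may help. This is an illustrative sketch, not an official snippet: it assumes the standard Hugging Face `transformers` API with PyTorch, uses the official `microsoft/phi-2` repo id linked above (substitute this mirror's repo id if you are loading from here), and borrows the `Instruct:`/`Output:` prompt format from Microsoft's model card. `torch_dtype=torch.float32` keeps the weights in the FP32 precision this upload advertises.

```python
# Minimal sketch: load Phi-2 in full FP32 precision for research use.
# Assumes the standard Hugging Face `transformers` API; swap in this
# mirror's repo id if loading from here instead of the official repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # official repo linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # keep FP32 weights; no downcasting
    # Note: older transformers versions may also need trust_remote_code=True.
)

# Prompt format from the official Phi-2 model card.
prompt = "Instruct: Explain why the sky is blue.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```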