---
license: other
license_name: yi-license
license_link: LICENSE
---
<div align="center">

<img src="./Yi.svg" width="200px">

</div>

## This repo contains a SHARDED version of: https://huggingface.co/01-ai/Yi-6B

### Huge thanks to the publishers for their amazing work; all credit goes to them: https://huggingface.co/01-ai
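
Sharding changes only how the weights are stored (several smaller files plus an index), not how the model behaves: `transformers` reads the shard index and reassembles the weights automatically. A minimal loading sketch; `repo_id` below is a placeholder for this repository's actual id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the actual id of this sharded repository.
repo_id = "yanismiraoui/Yi-6B-sharded"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)

# from_pretrained detects the shard index file in the repo and loads every
# shard transparently; device_map="auto" (requires `accelerate`) places the
# weights across available devices instead of materializing them all at once.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    trust_remote_code=True,
)
```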

## Introduction

The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with parameter sizes of 6B
([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B
([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both are trained with a 4K
sequence length, which can be extended to 32K at inference time.
[`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and
[`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with
a 200K context length.
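
For the 4K-to-32K extension mentioned above, one common approach is to raise the context window at load time. A hypothetical sketch, assuming the checkpoint's `max_position_embeddings` config key can simply be overridden (check the official Yi documentation for the sanctioned method):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Assumption: raising max_position_embeddings (4K -> 32K) at load time is
# sufficient for this RoPE-based checkpoint; verify against the Yi docs.
config = AutoConfig.from_pretrained("01-ai/Yi-6B", trust_remote_code=True)
config.max_position_embeddings = 32768

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B",
    config=config,
    trust_remote_code=True,
)
```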

## News

- 🎯 **2023/11/06**: Released the [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) base models, each with a 200K context length.
- 🎯 **2023/11/02**: Released the [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B) base models.

## Model Performance

| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |

While benchmarking open-source models, we observed a disparity between the
results produced by our pipeline and those reported in public sources (e.g.
OpenCompass). A closer investigation of this difference revealed that models
may employ different prompts, post-processing strategies, and sampling
techniques, potentially leading to significant variations in the outcomes. Our
prompt and post-processing strategy remain consistent with the original
benchmarks, and greedy decoding is employed during evaluation, with no
post-processing of the generated content. For scores that the original authors
did not report (including scores reported under different settings), we tried
to obtain results with our own pipeline.
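
As a concrete illustration of that setup, here is a minimal greedy-decoding sketch with `transformers`; the prompt and names are hypothetical, and this is not the authors' actual evaluation pipeline:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical evaluation snippet: score one benchmark prompt with greedy
# decoding and no post-processing of the generated content.
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B", device_map="auto", trust_remote_code=True
)

prompt = "Question: What is the capital of France?\nAnswer:"  # few-shot examples would precede this
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)  # greedy

# Keep only the newly generated tokens, then compare against the reference.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```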

To evaluate the models' capabilities extensively, we adopted the methodology
outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common-sense reasoning. SQuAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was tested exclusively
with a 7-shot setup, while all other tests were conducted in a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.

## Usage

Please visit our [GitHub repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
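
For a quick smoke test of this sharded copy, the standard `transformers` pipeline API applies; the model id below is a placeholder for this repository's actual id:

```python
from transformers import pipeline

# Placeholder id for this sharded repository; generation works exactly as
# with the original 01-ai/Yi-6B checkpoint.
generator = pipeline(
    "text-generation",
    model="yanismiraoui/Yi-6B-sharded",  # hypothetical id
    trust_remote_code=True,
    device_map="auto",
)

result = generator("There's a place where time stands still.", max_new_tokens=50)
print(result[0]["generated_text"])
```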

## Disclaimer

Although we use data-compliance checking algorithms during training to ensure
the compliance of the trained model to the best of our ability, we cannot
guarantee, given the complexity of the data and the diversity of language-model
usage scenarios, that the model will generate correct and reasonable output in
all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks or
issues resulting from misuse, misguidance, illegal usage, or related
misinformation, nor for any associated data-security concerns.

## License

The Yi series models are fully open for academic research and free for
commercial use with permission obtained via application. All usage must adhere
to the [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE).
To apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).