ChloeAuYeung
committed on
Commit 2f5c3c2
Parent(s): 89c10b1
Update model architecture and additional pre-training data information
README.md CHANGED

# XVERSE-65B

## Update Information
**[2023/11/29]** Update model architecture and additional pre-training data information.
**[2023/11/24]** Update the related information of the pre-training data.
**[2023/11/06]** Released the XVERSE-65B base model.

- **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 was trained on hundreds of gigabytes of corpus data, supporting multiple languages without any additional vocabulary expansion.
- **Training Framework**: Training uses FlashAttention2 to accelerate computation, and virtual pipeline technology is applied on top of 3D parallelism to reduce the high bubble rate caused by long pipelines and the 16k context window, reaching peak compute utilization on a thousand-GPU cluster that ranks among the best in the industry. Through continuous, coordinated optimization of cluster infrastructure operations, resource scheduling, the training framework, and the scheduling platform, a highly stable, low-interruption, and strongly fault-tolerant training system was built, raising the effective weekly training rate to 98.6%.

The model size, architecture, and learning rate of **XVERSE-65B** are as follows:

| params | d_model | n_heads | n_layers | d_ff | learning rate |
|:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
| 65B | 8192 | 64 | 80 | 22016 | 1.5e−4 |

## Introduction of Pre-training Data

During the pre-training phase, **XVERSE-65B** primarily used 7 different types of data. The following table compares the pre-training datasets of XVERSE-65B with those of several other well-known models:

| Data Type | [GPT3](https://arxiv.org/abs/2005.14165) | [Llama](https://arxiv.org/abs/2302.13971) | [BLOOM](https://arxiv.org/abs/2211.05100) | [PaLM](https://arxiv.org/abs/2204.02311) | [Chinchilla](https://arxiv.org/abs/2203.15556) | [Gopher](https://arxiv.org/abs/2112.11446) | [MT-NLG](https://arxiv.org/abs/2201.11990) | XVERSE-65B |

|:-------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Proportion (%) | 72.91 | 7.09 | 4.81 | 5.62 | 6.55 | 1.15 | 1.87 |

During the pre-training phase, **XVERSE-65B** primarily used 41 natural languages. The following table shows the proportion of each language in the pre-training data:

| Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) |
|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|:----:|:-------:|
| en | 54.91 | pl | 0.48 | hu | 0.19 | ar | 0.12 | fa | 0.07 | sl | 0.05 |
| zh | 31.09 | it | 0.36 | ko | 0.18 | ro | 0.11 | hi | 0.07 | et | 0.04 |
| ja | 3.22 | pt | 0.34 | sv | 0.15 | bg | 0.10 | no | 0.07 | lv | 0.03 |
| ru | 3.15 | cs | 0.27 | el | 0.14 | th | 0.10 | ca | 0.06 | sr | 0.03 |
| de | 1.52 | uk | 0.24 | fi | 0.14 | da | 0.09 | iw | 0.06 | ta | 0.03 |
| es | 0.91 | tr | 0.23 | id | 0.13 | mr | 0.08 | lt | 0.05 | kk | 0.02 |
| fr | 0.73 | nl | 0.20 | vi | 0.13 | sk | 0.08 | ms | 0.05 | | |

> Note: For the abbreviations of the different languages, see [ISO_639-1](https://zh.wikipedia.org/wiki/ISO_639-1)

For the code data, the following table shows the proportion of different programming languages:

| Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) |
|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|
| PHP | 17.06 | Go | 3.38 | Shell | 0.74 | PowerShell | 0.23 | Arduino | 0.13 | R | 0.04 |
| JavaScript | 15.65 | Rust | 2.33 | Haskell | 0.46 | Groovy | 0.21 | Assembly | 0.13 | ABAP | 0.01 |
| Java | 15.18 | Ruby | 1.61 | Common Lisp | 0.43 | Pascal | 0.20 | Clojure | 0.12 | COBOL | 0.0022 |
| Python | 14.64 | Swift | 1.40 | Perl | 0.34 | FORTRAN | 0.19 | Cuda | 0.12 | Verilog | 0.0001 |
| TypeScript | 6.55 | Kotlin | 1.40 | CSS | 0.32 | Elixir | 0.17 | VHDL | 0.09 | | |
| C | 4.84 | Scala | 1.08 | Julia | 0.32 | Solidity | 0.16 | Emacs Lisp | 0.08 | | |
| C++ | 4.68 | Dart | 0.95 | Visual Basic | 0.25 | F# | 0.14 | Objective-C++ | 0.08 | | |
| C# | 3.44 | SQL | 0.76 | OCaml | 0.24 | Erlang | 0.14 | Crystal | 0.06 | | |

## Model Introduction

**XVERSE-65B** is a multilingual large language model, independently developed by Shenzhen Yuanxiang Technology. The model released this time is the base model **XVERSE-65B**. Its key features are as follows:

- **Tokenization**: Based on the BPE (Byte-Pair Encoding) algorithm, a tokenizer with a vocabulary size of 100,534 has been trained using hundreds of gigabytes of language data. This tokenizer supports multiple languages without the need for additional vocabulary expansion (see the loading sketch after this list).
- **Training Framework**: The training utilizes FlashAttention2 for accelerated computation, and on top of 3D parallelism, virtual pipeline technology is applied to reduce the excessive bubble rate caused by longer pipelines and the 16k context window, achieving peak compute utilization on a thousand-GPU cluster that ranks among the industry's best (a rough bubble-rate estimate is sketched after this list). Concurrently, through continuous optimization of cluster infrastructure operations, resource scheduling, training frameworks, and the scheduling platform, a highly stable, low-interruption, and robust fault-tolerant training system has been developed, raising the effective weekly training rate to 98.6%.

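As an illustration of how the tokenizer described above can be used, the following minimal sketch loads it through the Hugging Face transformers `AutoTokenizer` API. The repository id `xverse/XVERSE-65B` and the `trust_remote_code=True` flag are assumptions (this README does not show loading code), so adjust them to the actual published model id.

```python
from transformers import AutoTokenizer

# Hypothetical repo id and loading flags -- adjust to the actual published model.
MODEL_ID = "xverse/XVERSE-65B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# The README states a BPE vocabulary of 100,534 entries.
print("vocab size:", tokenizer.vocab_size)

# Mixed-language text tokenizes directly, since the vocabulary is multilingual.
sample = "XVERSE-65B 是一个多语言大模型。It supports many languages out of the box."
ids = tokenizer(sample)["input_ids"]
print("token count:", len(ids))
print(tokenizer.convert_ids_to_tokens(ids)[:20])
```
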
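To make the virtual-pipeline remark concrete, here is a rough, illustrative bubble-rate estimate using the standard interleaved-1F1B approximation from Megatron-style training (bubble ≈ (p − 1) / (v · m) for p pipeline stages, m micro-batches, and v virtual stages per device). The values of p, m, and v below are placeholders for illustration only, not XVERSE's actual configuration.

```python
def pipeline_bubble_fraction(p: int, m: int, v: int = 1) -> float:
    """Approximate bubble fraction of a (possibly interleaved) 1F1B pipeline schedule.

    p: pipeline-parallel stages, m: micro-batches per step,
    v: virtual pipeline stages (model chunks) per device; v=1 means no interleaving.
    """
    return (p - 1) / (v * m)

# Placeholder values chosen only to illustrate the trend -- not XVERSE's real setup.
p, m = 8, 64
print(f"plain 1F1B:           {pipeline_bubble_fraction(p, m, v=1):.3f}")  # ~0.109
print(f"virtual pipeline v=4: {pipeline_bubble_fraction(p, m, v=4):.3f}")  # ~0.027
```
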
The model size, architecture, and learning rate of **XVERSE-65B** are shown below:

| params | d_model | n_heads | n_layers | d_ff | learning rate |
|:------:|:-------:|:-------:|:--------:|:-----:|:-------------:|
| 65B | 8192 | 64 | 80 | 22016 | 1.5e−4 |

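As a sanity check on the table above, the short calculation below estimates the parameter count implied by these dimensions. It assumes a LLaMA-style decoder with a gated feed-forward block (three d_model × d_ff projections) and untied input/output embeddings over the 100,534-token vocabulary; these structural details are assumptions not stated in this README, and norms and biases are ignored.

```python
# Rough parameter-count estimate from the published dimensions (norms and biases ignored).
d_model, n_layers, d_ff, vocab = 8192, 80, 22016, 100_534

attn_per_layer = 4 * d_model * d_model   # Q, K, V and output projections
ffn_per_layer = 3 * d_model * d_ff       # gated FFN (gate/up/down projections) -- an assumption
embeddings = 2 * vocab * d_model         # input embedding plus untied output head -- an assumption

total = n_layers * (attn_per_layer + ffn_per_layer) + embeddings
print(f"approx. {total / 1e9:.1f}B parameters")  # ~66.4B, consistent with the 65B label
```
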
## Introduction of Pre-training Data

During the pre-training phase, **XVERSE-65B** primarily utilized 7 different types of data. The following table shows a comparison of the pre-training datasets of XVERSE-65B with some other well-known models:

| Data Type | [GPT3](https://arxiv.org/abs/2005.14165) | [Llama](https://arxiv.org/abs/2302.13971) | [BLOOM](https://arxiv.org/abs/2211.05100) | [PaLM](https://arxiv.org/abs/2204.02311) | [Chinchilla](https://arxiv.org/abs/2203.15556) | [Gopher](https://arxiv.org/abs/2112.11446) | [MT-NLG](https://arxiv.org/abs/2201.11990) | XVERSE-65B |

The sampling ratios of different data types during the pre-training phase are as follows:

|:--------------:|:---------:|:----:|:------------:|:-----:|:---------------:|:----:|:-----:|
| Proportion (%) | 72.91 | 7.09 | 4.81 | 5.62 | 6.55 | 1.15 | 1.87 |

During the pre-training phase, **XVERSE-65B** primarily used 41 natural languages. The following table shows the proportion of each language in the pre-training data:

| Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) | Language | Proportion (%) |
|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|:--------:|:--------------:|
| en | 54.91 | pl | 0.48 | hu | 0.19 | ar | 0.12 | fa | 0.07 | sl | 0.05 |
| zh | 31.09 | it | 0.36 | ko | 0.18 | ro | 0.11 | hi | 0.07 | et | 0.04 |
| ja | 3.22 | pt | 0.34 | sv | 0.15 | bg | 0.10 | no | 0.07 | lv | 0.03 |
| ru | 3.15 | cs | 0.27 | el | 0.14 | th | 0.10 | ca | 0.06 | sr | 0.03 |
| de | 1.52 | uk | 0.24 | fi | 0.14 | da | 0.09 | iw | 0.06 | ta | 0.03 |
| es | 0.91 | tr | 0.23 | id | 0.13 | mr | 0.08 | lt | 0.05 | kk | 0.02 |
| fr | 0.73 | nl | 0.20 | vi | 0.13 | sk | 0.08 | ms | 0.05 | | |

> Note: For the abbreviations of the different languages, see [ISO_639-1](https://zh.wikipedia.org/wiki/ISO_639-1)
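The language shares above are sampling proportions over the pre-training mixture. Purely as an illustration of how such proportions can drive data mixing, a per-document weighted sampler might look like the sketch below; the weights are copied from the table (a subset, for brevity), and everything else is assumed rather than taken from XVERSE's actual training code.

```python
import random

# Language shares (%) taken from the table above (subset shown for brevity).
language_share = {
    "en": 54.91, "zh": 31.09, "ja": 3.22, "ru": 3.15,
    "de": 1.52, "es": 0.91, "fr": 0.73,
}

def sample_language(rng: random.Random) -> str:
    """Pick the language of the next pre-training document according to its share."""
    langs, weights = zip(*language_share.items())
    return rng.choices(langs, weights=weights, k=1)[0]

rng = random.Random(0)
counts: dict[str, int] = {}
for _ in range(10_000):
    lang = sample_language(rng)
    counts[lang] = counts.get(lang, 0) + 1

# Empirical counts roughly track the published proportions.
print(sorted(counts.items(), key=lambda kv: -kv[1]))
```
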

For the Code data, the following table shows the proportion of different programming languages:

| Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) | Programming Language | Proportion (%) |
|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|:--------------------:|:--------------:|
| PHP | 17.06 | Go | 3.38 | Shell | 0.74 | PowerShell | 0.23 | Arduino | 0.13 | R | 0.04 |
| JavaScript | 15.65 | Rust | 2.33 | Haskell | 0.46 | Groovy | 0.21 | Assembly | 0.13 | ABAP | 0.01 |
| Java | 15.18 | Ruby | 1.61 | Common Lisp | 0.43 | Pascal | 0.20 | Clojure | 0.12 | COBOL | 0.0022 |
| Python | 14.64 | Swift | 1.40 | Perl | 0.34 | FORTRAN | 0.19 | Cuda | 0.12 | Verilog | 0.0001 |
| TypeScript | 6.55 | Kotlin | 1.40 | CSS | 0.32 | Elixir | 0.17 | VHDL | 0.09 | | |
| C | 4.84 | Scala | 1.08 | Julia | 0.32 | Solidity | 0.16 | Emacs Lisp | 0.08 | | |
| C++ | 4.68 | Dart | 0.95 | Visual Basic | 0.25 | F# | 0.14 | Objective-C++ | 0.08 | | |
| C# | 3.44 | SQL | 0.76 | OCaml | 0.24 | Erlang | 0.14 | Crystal | 0.06 | | |

## Evaluation Results

To comprehensively evaluate the model's performance, we conducted extensive testing on a series of standard datasets, including C-Eval, CMMLU, Gaokao-Bench, MMLU, GAOKAO-English, AGIEval, RACE-M, CommonSenseQA, PIQA, GSM8K, and HumanEval. These evaluations cover the model's capabilities in multiple areas, including Chinese question answering, English question answering, language understanding, commonsense question answering, logical reasoning, mathematical problem solving, and coding. The evaluation results are as follows: