uer committed on
Commit d182265 · Parent: 4e1e1bd

Update README.md

Files changed (1): README.md (+34 −16)

README.md

This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://www.aclweb.org/anthology/D19-3041.pdf).

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released 24 Chinese RoBERTa models. To make it easy for users to reproduce the results, we used a publicly available corpus and provided all of the training details.

You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:

| |H=128|H=256|H=512|H=768|
|---|---|---|---|---|
| **L=2** |[**2/128 (Tiny)**][2_128]|[2/256]|[2/512]|[2/768]|
| **L=4** |[4/128]|[**4/256 (Mini)**][4_256]|[**4/512 (Small)**][4_512]|[4/768]|
| **L=6** |[6/128]|[6/256]|[6/512]|[6/768]|
| **L=8** |[8/128]|[8/256]|[**8/512 (Medium)**][8_512]|[8/768]|
| **L=10** |[10/128]|[10/256]|[10/512]|[10/768]|
| **L=12** |[12/128]|[12/256]|[12/512]|[**12/768 (Base)**][12_768]|

Here are the scores on the development set of six Chinese tasks:

|Model|Score|douban|chnsenticorp|lcqmc|tnews(CLUE)|iflytek(CLUE)|ocnli(CLUE)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|0.0|0.0|83.2|0.0|0.0|0.0|70.2|
|BERT-Mini|0.0|0.0|85.9|0.0|75.4/73.3|0.0|70.2|
|BERT-Small|0.0|27.8|89.7|83.4|78.8|68.1|77.6|
|BERT-Medium|0.0|38.0|89.6|86.6|80.4|69.6|80.0|
|BERT-Base|0.0|38.0|89.6|86.6|80.4|69.6|80.0|

For each task, we selected the best fine-tuning hyperparameters from the lists below (a sketch of the selection loop follows the list):
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
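
A minimal sketch of that selection loop; `fine_tune` is a hypothetical placeholder for whichever fine-tuning entry point is used (for example, a UER-py classification script), assumed to train one configuration and return its development-set score:

```python
# Hypothetical sketch of the hyperparameter search described above: try every
# combination of the listed values and keep the one with the best dev score.
# fine_tune() is a placeholder, not part of UER-py or this repository.
import itertools

EPOCHS = [3, 5, 8]
BATCH_SIZES = [32, 64]
LEARNING_RATES = [3e-5, 1e-4, 3e-4]

def select_best(fine_tune, task_name):
    """Return (best_dev_score, best_config) over the 3 x 2 x 3 = 18 runs."""
    best_score, best_config = float("-inf"), None
    for epochs, batch_size, lr in itertools.product(EPOCHS, BATCH_SIZES, LEARNING_RATES):
        score = fine_tune(task=task_name, epochs=epochs,
                          batch_size=batch_size, learning_rate=lr)
        if score > best_score:
            best_score, best_config = score, (epochs, batch_size, lr)
    return best_score, best_config
```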
 
## How to use
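
A minimal usage sketch, assuming the Hugging Face `transformers` library; it loads the 8-layer / 512-hidden Medium checkpoint as an arbitrary example, and any other model name from the table above works the same way:

```python
# Load one of the miniatures by its Hugging Face model name and run fill-mask.
# The model name below is one of the links in the table; swap in any other
# chinese_roberta_L-x_H-y name to use a different size.
from transformers import BertTokenizer, BertForMaskedLM, pipeline

model_name = "uer/chinese_roberta_L-8_H-512"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("北京是[MASK]国的首都。"))
```

For plain feature extraction, `BertModel.from_pretrained(model_name)` can be used with the same tokenizer instead of the masked-language-model head.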

## Training data

CLUECorpusSmall is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.

## Training procedure

Stage 1: pre-train for 1,000,000 steps with a sequence length of 128. In the commands below, `xxx` is a placeholder for the model size (for example, the Small model uses `models/bert_small_config.json`).

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_xxx_config.json \
                    --output_model_path models/cluecorpussmall_roberta_xxx_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --tie_weights --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm
```

Stage 2: continue pre-training from the final Stage 1 checkpoint for 250,000 steps with a sequence length of 512.

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_roberta_xxx_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_xxx_config.json \
                    --output_model_path models/cluecorpussmall_roberta_xxx_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --tie_weights --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm
```
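
Both stages pass `--dynamic_masking`: the masked positions are re-sampled every time a sequence is served to the model, instead of being fixed once during preprocessing. A schematic sketch of the idea (a simplified illustration, not UER-py's actual implementation, and it skips the usual 80/10/10 replacement rule):

```python
# Schematic illustration of dynamic masking: a fresh random 15% of the tokens
# is masked on every draw, so the model sees different masks across epochs.
import random

def dynamic_mask(token_ids, mask_id=103, mask_prob=0.15):
    """Return (masked_ids, labels); labels are -100 where no prediction is needed."""
    masked, labels = [], []
    for tok in token_ids:
        if random.random() < mask_prob:
            masked.append(mask_id)   # replace the original token with [MASK]
            labels.append(tok)       # the MLM loss must recover this id
        else:
            masked.append(tok)
            labels.append(-100)      # ignored by the loss
    return masked, labels

ids = [101, 3946, 4669, 7305, 102]   # an already-tokenized example sequence
print(dynamic_mask(ids))
print(dynamic_mask(ids))             # different mask positions on a second draw
```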

### BibTeX entry and citation info

[2_128]: https://huggingface.co/uer/chinese_roberta_L-2_H-128
[4_256]: https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]: https://huggingface.co/uer/chinese_roberta_L-4_H-512
[8_512]: https://huggingface.co/uer/chinese_roberta_L-8_H-512
[12_768]: https://huggingface.co/uer/chinese_roberta_L-12_H-768