Update README.md #1
by jnishi - opened

README.md CHANGED

@@ -1,199 +1,148 @@

(Removed: the auto-generated 🤗 transformers model card template, with every field left as "[More Information Needed]". It is replaced by the RetrievaBERT model card below.)

---
library_name: transformers
license: apache-2.0
language:
- ja
- en
---

# RetrievaBERT Model

**RetrievaBERT** is a pre-trained Transformer encoder trained with Megatron-LM.
It is designed primarily for Japanese text.

## Model Details

### Model Description

**RetrievaBERT** is a pre-trained Transformer encoder trained with Megatron-LM.
It is designed primarily for Japanese text.

This model offers several advanced features compared to traditional BERT models:

- **PreNorm**: Improved stability during training.
- **SwiGLU**: Enhanced activation function for better performance.
- **Grouped-Query Attention (Multi-Query Attention)**: Efficient attention mechanism.
- **Max Sequence Length**: 2048 tokens, allowing for longer context.
- **Parameters**: 1.3 billion.
- **Pre-training Objective**: Masked Language Modeling (MLM) only, without Next Sentence Prediction (NSP).
- **Token Type IDs**: Not used in this model.

### Model Sources

- **Developed by:** Retrieva, Inc.
- **Model type:** Based on the MegatronBERT architecture.
- **Language(s) (NLP):** Primarily Japanese (optional support for English).
- **License:** Apache 2.0

## Uses

This model can be used as a Masked Language Model (MLM).
However, it is primarily intended to be fine-tuned on downstream tasks.
Depending on your use case, follow the appropriate section below.

### Direct Use

This model is pre-trained using Masked Language Modeling.
The mask token used is `<MASK|LLM-jp>`.
Note that you need to set `trust_remote_code` to `True` because RetrievaBERT uses a custom model implementation.

Example code for direct use:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_id = "retrieva-jp/bert-1.3b"

# trust_remote_code=True is required because the model uses a custom implementation.
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# The mask token is <MASK|LLM-jp>.
text = "こんにちは！私の名前は<MASK|LLM-jp>です！"
print(pipe(text))
```
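
For each mask, the `fill-mask` pipeline returns a list of candidate completions, each with the filled-in sequence, the predicted token, and its score.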

### Downstream Use

RetrievaBERT is compatible with Hugging Face's AutoModel classes.
To fine-tune RetrievaBERT on your specific task, load it with the corresponding AutoModel class, as sketched below.
For detailed configuration, refer to the config.json file.
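
As a minimal sketch, the following loads RetrievaBERT with a sequence-classification head for fine-tuning. It assumes that the custom code loaded via `trust_remote_code=True` provides a sequence-classification variant and that `num_labels=2` fits your task; check the repository and config.json for the task heads that are actually available.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "retrieva-jp/bert-1.3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels is task-specific; 2 is only an illustrative value for binary classification.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=2,
    trust_remote_code=True,
)

# From here, fine-tune with the Trainer API or a plain PyTorch training loop.
```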

## Training Details

### Training Data

The RetrievaBERT model was pre-trained on the union of five datasets:

- [Japanese CommonCrawl Dataset by LLM-jp](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v2)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- Chinese Wikipedia dumped on 20240120
- Korean Wikipedia dumped on 20240120
- [The Stack](https://huggingface.co/datasets/bigcode/the-stack)

The model was trained on 180 billion tokens drawn from these datasets.
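
For a quick look at one of the listed corpora, the publicly available RefinedWeb dataset can be streamed with 🤗 Datasets. This is only an illustrative sketch for inspecting the data; it is not the preprocessing pipeline used for pre-training.

```python
from datasets import load_dataset

# Stream RefinedWeb instead of downloading it in full.
refinedweb = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
print(next(iter(refinedweb)))
```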

### Training Procedure

The model was trained on 4 to 32 H100 GPUs with a batch size of 1,024.
We adopted a curriculum learning scheme similar to Sequence Length Warmup, training with the following sequence lengths and numbers of steps:

- Sequence length 128: 31,000 steps
- Sequence length 256: 219,000 steps
- Sequence length 512: 192,000 steps
- Sequence length 2048: 12,000 steps

#### Training Hyperparameters

The model was trained with the following hyperparameters:

- Learning rate: 1.5e-4
- Learning rate decay style: linear
- Learning rate warmup fraction: 0.01
- Minimum learning rate: 1e-6
- Floating-point format: BF16
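
For reference, a plain-PyTorch approximation of this schedule (linear decay with warmup) could look like the sketch below. The total step count is simply the sum of the curriculum stages listed above, the warmup fraction is assumed to apply to the total, and `get_linear_schedule_with_warmup` decays to zero rather than flooring at the reported minimum learning rate, so this mirrors the reported settings only approximately.

```python
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained("retrieva-jp/bert-1.3b", trust_remote_code=True)

total_steps = 31_000 + 219_000 + 192_000 + 12_000  # sum of the curriculum stages listed above
warmup_steps = int(0.01 * total_steps)              # warmup fraction of 0.01 (assumed relative to total steps)

optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)
```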

## Evaluation

We fine-tuned the following models and evaluated them on the [JGLUE](https://github.com/yahoojapan/JGLUE) development set.
We adjusted the learning rate and training epochs for each model and task in accordance with [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).

| Model | MARC-ja/acc | JSTS/pearson | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| tohoku-nlp/bert-base-japanese-v3 | 0.957 | 0.914 | 0.876 | 0.906 | 0.878 | 0.946 | 0.849 |
| tohoku-nlp/bert-large-japanese-v2 | 0.959 | 0.916 | 0.877 | 0.901 | 0.884 | 0.951 | 0.867 |
| ku-nlp/deberta-v3-base-japanese | 0.958 | 0.925 | 0.890 | 0.902 | 0.925 | 0.910 | 0.882 |
| retrieva-jp/bert-1.3b | 0.952 | 0.916 | 0.877 | 0.896 | 0.916 | 0.879 | 0.815 |

## Technical Specifications

### Model Architecture

The RetrievaBERT model is based on BERT with the following hyperparameters:

- Number of layers: 48
- Hidden layer size: 1536
- FFN hidden layer size: 4096
- Number of attention heads: 24
- Maximum length of position embeddings: 2048

As mentioned earlier, the main differences from the original BERT are:

- PreNorm: Improved stability during training.
- SwiGLU: Enhanced activation function for better performance.
- Grouped-Query Attention (Multi-Query Attention): Efficient attention mechanism.
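
These sizes can be checked programmatically from the repository's configuration. The sketch below assumes the configuration exposes the usual BERT-style attribute names; see config.json for the exact keys.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("retrieva-jp/bert-1.3b", trust_remote_code=True)

# Attribute names assumed to follow BERT-style conventions.
print(config.num_hidden_layers)        # expected: 48
print(config.hidden_size)              # expected: 1536
print(config.num_attention_heads)      # expected: 24
print(config.max_position_embeddings)  # expected: 2048
```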

### Compute Infrastructure

[TSUBAME 4](https://www.t4.gsic.titech.ac.jp/en/hardware)

This model is based on results obtained from the TSUBAME deep-learning mini-camp.

#### Software

The model was trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM).

## More Information

https://note.com/retrieva/n/n715bea2c2cd1 (in Japanese)

## Model Card Authors

Satoru Katsumata, Daisuke Kimura, Jiro Nishitoba

## Model Card Contact