---
language: en
tags:
- Classification
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# DRAFT MCTI Text Classification Task (cased/uncased)

#< Model Name >
DISCLAIMER:

## According to the abstract
• ....................................

## Model description

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam sed nibh non enim finibus malesuada. In vitae
metus orci. Vestibulum sodales volutpat lorem, eget consectetur nisi viverra vitae. Sed tincidunt accumsan
pellentesque. Curabitur urna massa, dapibus sit amet augue quis, aliquam tristique ipsum. In hac habitasse
platea dictumst. Fusce aliquet est id mi porttitor tincidunt. Ut imperdiet rutrum eros, ac mollis ipsum
auctor ut. Donec lacinia, orci et dignissim molestie, sem ex mollis urna, et blandit nisi leo sit amet mauris.

Nullam pretium condimentum imperdiet.

Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.

## Model description

Nullam congue hendrerit turpis et facilisis. Cras accumsan ante mi, eu hendrerit nulla finibus at. Donec imperdiet,
nisi nec pulvinar suscipit, dolor nulla sagittis massa, et vehicula ante felis quis nibh. Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Maecenas viverra tempus risus non ornare. Donec in vehicula est. Pellentesque vulputate
bibendum cursus. Nunc volutpat vitae neque ut bibendum:

- Nullam congue hendrerit turpis et facilisis. Cras accumsan ante mi, eu hendrerit nulla finibus at. Donec imperdiet,
nisi nec pulvinar suscipit, dolor nulla sagittis massa, et vehicula ante felis quis nibh. Lorem ipsum dolor sit amet,
consectetur adipiscing elit.
- Nullam congue hendrerit turpis et facilisis. Cras accumsan ante mi, eu hendrerit nulla finibus at. Donec imperdiet,
nisi nec pulvinar suscipit, dolor nulla sagittis massa, et vehicula ante felis quis nibh. Lorem ipsum dolor sit amet,
consectetur adipiscing elit.

Nullam congue hendrerit turpis et facilisis. Cras accumsan ante mi, eu hendrerit nulla finibus at. Donec imperdiet,
nisi nec pulvinar suscipit, dolor nulla sagittis massa, et vehicula ante felis quis nibh. Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Maecenas viverra tempus risus non ornare. Donec in vehicula est. Pellentesque vulputate
bibendum cursus. Nunc volutpat vitae neque ut bibendum.

## Model variations

BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking replaced subpiece masking in a follow-up work, with the release of two models.
Twenty-four smaller models were released afterward.

The detailed release history can be found on the [google-research/bert readme](https://www.google.com) on GitHub.

| Model | #params | Language |
|----------------------------------|---------|----------|
| [`mcti-base-uncased`] | 110M | English |
| [`mcti-large-uncased`] | 340M | English |
| [`mcti-base-cased`] | 110M | English |
| [`mcti-large-cased`] | 340M | English |
| [`mcti-base-multilingual-cased`] | 110M | Multiple |

## Intended uses

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://www.google.com) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like XXX.

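As an illustration of the sequence-classification use case, the sketch below loads a fine-tuned checkpoint with the `transformers` text-classification pipeline. The checkpoint name `mcti-base-uncased-finetuned` is a placeholder, not a published model; substitute whichever fine-tuned version you pick from the hub.

```python
# Minimal sketch, assuming a fine-tuned sequence-classification checkpoint
# exists on the hub; the model name below is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mcti-base-uncased-finetuned",  # hypothetical checkpoint name
)

print(classifier("Call for proposals: funding for applied AI research projects."))
# -> [{'label': '...', 'score': ...}]  (labels depend on the fine-tuning data)
```
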
### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.1073106899857521,
  'token': 4827,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]",
  'score': 0.08774490654468536,
  'token': 2535,
  'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.05338378623127937,
  'token': 2047,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]",
  'score': 0.04667217284440994,
  'token': 3565,
  'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]",
  'score': 0.027095865458250046,
  'token': 2986,
  'token_str': 'fine'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.09747550636529922,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]',
  'score': 0.0523831807076931,
  'token': 15610,
  'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]',
  'score': 0.04962705448269844,
  'token': 13362,
  'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
  'score': 0.03788609802722931,
  'token': 15893,
  'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]',
  'score': 0.037680890411138535,
  'token': 18968,
  'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
  'score': 0.21981462836265564,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
  'score': 0.1597415804862976,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]',
  'score': 0.1154729500412941,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
  'score': 0.037968918681144714,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]',
  'score': 0.03042375110089779,
  'token': 5660,
  'token_str': 'cook'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

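For reference, the two corpora named in the metadata above can be pulled with the 🤗 `datasets` library. This is only an illustrative sketch: the exact snapshots and cleaning steps used for pretraining are not specified here, and the Wikipedia config name below is an assumption.

```python
# Illustrative only: loads public versions of the corpora named in the card
# metadata; the actual pretraining dumps/preprocessing may differ.
from datasets import load_dataset

bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")  # config name is an assumption

print(bookcorpus[0]["text"][:80])
print(wikipedia[0]["title"])
```
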
## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.

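To make the input format concrete, here is a small sketch of how a WordPiece tokenizer assembles such a sentence pair; the public `bert-base-uncased` tokenizer is used as a stand-in, since this card does not ship its own tokenizer files.

```python
# Sketch of the [CLS] A [SEP] B [SEP] packing described above,
# using the public bert-base-uncased WordPiece vocabulary as a stand-in.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("How are public R&D calls classified?",
                    "They are labeled by a fine-tuned text classifier.")

print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'how', 'are', ..., '[SEP]', 'they', 'are', ..., '[SEP]']
print(encoded["token_type_ids"])  # 0s for sentence A, 1s for sentence B
```
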
The details of the masking procedure for each sentence are the following (a minimal sketch of this selection is shown after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

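The snippet below is a toy illustration of that 80/10/10 rule, not the actual pretraining code; the example tokens, vocabulary and seed are arbitrary, and real pretraining operates on WordPiece ids in batches.

```python
# Toy illustration of the 15% / 80-10-10 masking rule described above.
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15, seed=0):
    rng = random.Random(seed)
    masked = list(tokens)
    for i in range(len(tokens)):
        if rng.random() >= mask_prob:
            continue  # token not selected for masking (85% of positions)
        roll = rng.random()
        if roll < 0.8:
            masked[i] = mask_token  # 80%: replace with [MASK]
        elif roll < 0.9:
            masked[i] = rng.choice([t for t in vocab if t != tokens[i]])  # 10%: random different token
        # else: remaining 10%: keep the original token unchanged
    return masked

print(mask_tokens(["the", "model", "reads", "project", "calls"],
                  vocab=["the", "model", "reads", "project", "calls", "data"]))
```
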
### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.

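As a sketch only, the optimizer and schedule described above map roughly to the following PyTorch setup; the placeholder module stands in for whatever model is being pretrained, and the step counts are copied from the paragraph above.

```python
# Rough sketch of the optimizer/schedule described above (not the original
# TPU training code): Adam with weight decay, 10k warmup steps, linear decay.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 2)  # placeholder module standing in for the real model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=1_000_000,
)

# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```
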
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
|      | 84.6/83.4   | 71.2 | 90.5 | 93.5  | 52.1 | 85.8  | 88.9 | 66.4 | 79.6    |

#< Checkpoints >
#< Examples >
#< Implementation Notes >
#< Usage Example >
#>>>
#...

#< Config >
#< Tokenizer >
#< Training data >
#< Training procedure >
#< Preprocessing >
#< Pretraining >
#< Evaluation results >
#< BibTeX entry and citation info >
#< Benchmarks >

### BibTeX entry and citation info

```bibtex

```

<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>