---
license: apache-2.0
language:
- it
widget:
- text: "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"
  example_title: "Example 1"
- text: "il governo [MASK] dovrebbe fare politica, non soltanto propaganda! #vergogna"
  example_title: "Example 2"
- text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del [MASK] italiano #oscar #awards"
  example_title: "Example 3"
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT-TWEET</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, obtained using <b>TwHIN-BERT</b> <b>[2]</b> ([twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base)) as a starting point and focusing it on Italian by modifying the embedding layer (as in <b>[3]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).

The resulting model has 110M parameters, a vocabulary of 30,520 tokens, and a size of ~440 MB.
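
These figures can be checked directly against the published checkpoint. A minimal sketch, using the standard transformers auto classes:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osiria/bert-tweet-base-italian-uncased")
model = AutoModel.from_pretrained("osiria/bert-tweet-base-italian-uncased")

# expected: roughly 110M parameters and a 30,520-token vocabulary
print(f"parameters: {model.num_parameters():,}")
print(f"vocabulary size: {len(tokenizer):,}")
```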

<h3>Quick usage</h3>

```python
from transformers import BertTokenizerFast, BertModel

# load the tokenizer and the model from the Hugging Face Hub
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-tweet-base-italian-uncased")
```
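
The checkpoint can also be used for masked-word prediction. Below is a minimal sketch using the fill-mask pipeline, with one of the widget sentences above as input:

```python
from transformers import pipeline

# build a fill-mask pipeline around the checkpoint
fill_mask = pipeline("fill-mask", model="osiria/bert-tweet-base-italian-uncased")

# the model should propose plausible Italian fillers for the [MASK] token
for prediction in fill_mask("una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"):
    print(prediction["token_str"], round(prediction["score"], 3))
```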

Here you can find the model already fine-tuned for Sentiment Analysis: https://huggingface.co/osiria/bert-tweet-italian-uncased-sentiment

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2209.07562

[3] https://arxiv.org/abs/2010.05609

<h3>Limitations</h3>

This model was trained on tweets, so it is mainly suitable for general-purpose social media text processing, involving short texts written in a social-network style.
It might show limitations when it comes to longer and more structured text, or to domain-specific text.

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.