abhik1505040 committed
Commit: 7bed1e3
Parent(s): 5cde4ed

Update README.md

README.md CHANGED
@@ -46,7 +46,8 @@ print("\n" + "-" * 50)
 |----------------|-----------|-----------|-----------|-----------|-----------|-----------|
 |[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 27.05 | 62.22 | 39.27 | 59.01/64.18 | 50.35 |
 |[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 42.03 | 72.18 | 45.37 | 55.03/61.83 | 55.29 |
-|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M |
+|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 49.49 | 78.13 | 56.48 | 71.13/77.70 | 66.59 |
+|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 48.39 | 75.26 | 55.56 | 72.87/78.63 | 66.14 |
 
 * Supervised fine-tuning
 
@@ -56,11 +57,11 @@ print("\n" + "-" * 50)
 |[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 69.54 | 78.46 | 73.32 | 68.09/74.27 | 72.82 |
 |[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 70.97 | 82.40 | 78.39 | 73.15/79.06 | 76.79 |
 |[sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 18M | 71.12 | 76.92 | 70.94 | 65.48/70.69 | 71.03 |
+|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 70.61 | 80.95 | 76.28 | 72.43/78.40 | 75.73 |
 |[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) | 110M | 72.89 | 82.80 | 77.78 | 72.63/79.34 | **77.09** |
 
 
 
-
 The benchmarking datasets are as follows:
 * **SC:** **[Sentiment Classification](https://aclanthology.org/2021.findings-emnlp.278)**
 * **NER:** **[Named Entity Recognition](https://multiconer.github.io/competition)**
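
The rows added in this commit place BanglishBERT next to the existing baselines in both result tables. For orientation, the sketch below shows one way a checkpoint linked in those tables could be loaded for a sentence-level task such as SC, using the generic `transformers` Auto classes. It is not the repository's documented usage snippet (only the `print("\n" + "-" * 50)` context line of that snippet is visible in this diff); the label count, example sentence, and task framing are illustrative assumptions.

```python
# Minimal sketch, not the model card's documented recipe: load a checkpoint
# from the tables above with the generic transformers Auto classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "csebuetnlp/banglabert"  # any model URL from the tables can be substituted

tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 is a placeholder; the classification head is freshly
# initialized and must be fine-tuned before its outputs are meaningful.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("এটি একটি উদাহরণ বাক্য।", return_tensors="pt")  # "This is an example sentence."
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```

Swapping in `csebuetnlp/banglishbert` or any other linked checkpoint only changes `model_name`; the fine-tuning loop that produces the supervised scores in the second table is outside the scope of this sketch.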