nguyenvulebinh committed
Commit e89f4c3 · 1 Parent(s): 4f637e4
add checkpoint top1 vlsp mrc 2021
README.md CHANGED
@@ -35,9 +35,12 @@ This model is intended to be used for QA in the Vietnamese language so the valid
 
 | Model | EM | F1 |
 | ------------- | ------------- | ------------- |
-| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) |
-| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) |
+| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) public_test_set | 85.847 | 83.826 |
+| [large](https://huggingface.co/nguyenvulebinh/vi-mrc-large) private_test_set | 82.072 | 78.071 |
 
+Public leaderboard | Private leaderboard
+:-------------------------:|:-------------------------:
+![](https://i.ibb.co/tJX6V6T/public-leaderboard.jpg) | ![](https://i.ibb.co/nmsX2pG/private-leaderboard.jpg)
 
 [MRCQuestionAnswering](https://github.com/nguyenvulebinh/extractive-qa-mrc) uses [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html) as a pre-trained language model. By default, XLM-RoBERTa splits words into sub-words. But in my implementation, I re-combine the sub-word representations (after they are encoded by the BERT layers) into word representations using a sum strategy.
 
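Below is a minimal sketch of the sub-word-to-word sum strategy described in the paragraph above. It is an illustration only, not the code from the MRCQuestionAnswering repository: it assumes the Hugging Face fast tokenizer's `word_ids()` mapping and uses the base `xlm-roberta-base` checkpoint and a placeholder sentence.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative only: pool sub-word vectors back into word vectors by summing.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

words = ["Mô", "hình", "trả", "lời", "câu", "hỏi"]  # pre-tokenized words
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]      # (num_sub_words, hidden_size)

word_ids = enc.word_ids(0)                          # sub-word index -> word index (None for special tokens)
word_vecs = torch.zeros(len(words), hidden.size(-1))
for sub_idx, w_idx in enumerate(word_ids):
    if w_idx is not None:
        word_vecs[w_idx] += hidden[sub_idx]         # sum strategy: add each sub-word vector to its word

print(word_vecs.shape)                              # torch.Size([6, 768])
```

Other pooling choices (mean, first sub-word) would slot in at the marked line; the README describes summation.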