Update README.md

---
license: mit
language:
- en
---

# SciBERT Longformer

This is a Longformer version of the [SciBERT uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) model by Allen AI. The model is slower than SciBERT (~2.5x in my benchmarks) but allows for an 8x longer `max_seq_length` (4096 vs. 512), which is handy when working with long texts, e.g. scientific full texts.
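
A minimal usage sketch with `transformers` is shown below. It assumes the checkpoint loads with the standard Auto classes; the repository ID is a placeholder (the card does not state it), so replace it with this model's actual hub ID.

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "<this-model-repo-id>"  # placeholder: use the actual hub ID of this model
tokenizer = AutoTokenizer.from_pretrained(repo_id, model_max_length=4096)
model = AutoModel.from_pretrained(repo_id)

# Inputs of up to 4096 tokens fit into a single forward pass
# (plain SciBERT is limited to 512).
long_text = "Scientific full text goes here. " * 300
inputs = tokenizer(long_text, truncation=True, max_length=4096, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```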

The conversion to Longformer was performed with a [tutorial](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) by Allen AI: see a [Google Colab Notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) by [Yury](https://yorko.github.io/) which closely follows the tutorial.
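
For orientation, here is a heavily simplified sketch of the core idea in that conversion: tiling SciBERT's 512 learned position embeddings up to 4096 positions. It is not the full procedure; among other details, the notebook also replaces BERT's self-attention with Longformer's sliding-window attention, which is omitted here.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

max_pos = 4096
model = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased", model_max_length=max_pos)

embeddings = model.bert.embeddings
old_weight = embeddings.position_embeddings.weight.data   # (512, hidden_size)
n_copies = max_pos // old_weight.size(0)                   # 4096 // 512 = 8

# Initialize the longer position-embedding table by tiling the 512 learned
# positions, as done in the Allen AI conversion notebook.
new_weight = old_weight.repeat(n_copies, 1)                 # (4096, hidden_size)
embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(new_weight, freeze=False)
embeddings.register_buffer("position_ids", torch.arange(max_pos).unsqueeze(0))
model.config.max_position_embeddings = max_pos
```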

Note:

- no additional MLM pretraining of the Longformer was performed; the [Colab notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) stops at step 3, and step 4 is not done. The model can be improved with such additional MLM pretraining, preferably on scientific texts, e.g. [S2ORC](https://github.com/allenai/s2orc), again by Allen AI (a minimal sketch of continued pretraining follows this list)
- no extensive benchmarks of SciBERT Longformer vs. SciBERT were performed in terms of downstream task performance
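
A minimal sketch of such continued MLM pretraining, assuming a plain-text file of scientific paragraphs (e.g. extracted from S2ORC); the checkpoint ID, file path, and hyperparameters are illustrative, not from this card:

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)

checkpoint = "<this-model-repo-id>"  # placeholder: the SciBERT Longformer checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint, model_max_length=4096)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# One document (or paragraph) per line in a local text file; the path is illustrative.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="s2orc_paragraphs.txt", block_size=4096)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="scibert-longformer-mlm",
    per_device_train_batch_size=1,   # 4096-token sequences are memory-hungry
    gradient_accumulation_steps=32,
    max_steps=3000,
    learning_rate=3e-5,
    save_steps=500,
)

Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```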

Links:
- the original [SciBERT repo](https://github.com/allenai/scibert)

If using these models, please consider citing the following papers:

```
@article{Beltagy2020Longformer,
  title={Longformer: The Long-Document Transformer},
  author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
  journal={arXiv:2004.05150},
  year={2020},
}
```