---
language: pl
---

# HerBERT tokenizer

**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** tokenizer is a character-level byte-pair encoding tokenizer with a
vocabulary size of 50k tokens. The tokenizer was trained on [Wolne Lektury](https://wolnelektury.pl/) and a publicly available subset of the
[National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=0) with the [fastBPE](https://github.com/glample/fastBPE) library.
The tokenizer uses the `XLMTokenizer` implementation from [transformers](https://github.com/huggingface/transformers).
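
For a quick look at what this character-level BPE produces, here is a minimal sketch (not part of the original card; it assumes `transformers` and its `sacremoses` dependency are installed, and the sample word is arbitrary) that loads the tokenizer and inspects its vocabulary size and subword splits:

```python
from transformers import XLMTokenizer

# Load the HerBERT tokenizer from the Hugging Face Hub
tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")

# The vocabulary should contain roughly 50k tokens
print(tokenizer.vocab_size)

# Show how an arbitrary Polish word is split into BPE subword units
print(tokenizer.tokenize("językoznawstwo"))
```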
## Tokenizer usage
The HerBERT tokenizer should be used together with the [HerBERT model](https://huggingface.co/allegro/herbert-klej-cased-v1):
```python
from transformers import XLMTokenizer, RobertaModel

# Load the tokenizer and the matching HerBERT model from the Hugging Face Hub
tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")

# Encode a Polish sentence and run it through the model
encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors='pt')
outputs = model(encoded_input)
```
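
For several sentences at once, a padded batch can be passed through the model in a single call. The following is a minimal sketch, not part of the original card; it assumes a `transformers` version that supports calling the tokenizer directly for batch encoding, and the example sentences are arbitrary:

```python
import torch
from transformers import XLMTokenizer, RobertaModel

tokenizer = XLMTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")

# Arbitrary example sentences; padding aligns them to the same length
sentences = [
    "Kto ma lepszą sztukę, ma lepszy rząd – to jasne.",
    "Wolne Lektury to biblioteka cyfrowa z polską literaturą.",
]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

# Run the padded batch through the model without tracking gradients
with torch.no_grad():
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
    )

# The first output holds the last hidden states: (batch_size, seq_len, hidden_size)
print(outputs[0].shape)
```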

## License
CC BY-SA 4.0

## Citation
If you use this tokenizer, please cite the following paper:
```
@misc{rybak2020klej,
    title={KLEJ: Comprehensive Benchmark for Polish Language Understanding},
    author={Piotr Rybak and Robert Mroczkowski and Janusz Tracz and Ireneusz Gawlik},
    year={2020},
    eprint={2005.00630},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
The paper has been accepted at ACL 2020; we will update the BibTeX as soon as the proceedings appear.

## Authors
The tokenizer was created by the **Allegro Machine Learning Research** team.

You can contact us at: <a href="mailto:[email protected]">[email protected]</a>