Update README.md
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: mit
 pretty_name: Multilingual Tokenizer Wikipedia Benchmark
 dataset_info:
 - config_name: af
@@ -1243,7 +1243,7 @@ language:
 - lv
 - mr
 - nl
-- no
+- 'no'
 - pl
 - pt
 - ro
@@ -1263,10 +1263,10 @@ language:
 
 # Multilingual Tokenizer Benchmark
 
-This dataset includes pre-processed wikipedia data for tokenizer evaluation in 45 languages.
+This dataset includes pre-processed Wikipedia data for tokenizer evaluation in 45 languages. We provide more information on this evaluation task in [this blogpost](https://occiglot.github.io/occiglot/posts/eu_tokenizer_perfomance/).
 
 ## Usage
 
-The dataset allows us to easily calculate tokenizer fertility and the proportion of continued words on any of the supported languages. In the example below we take the Mistral tokenizer and evaluate its performance on Slovak.
+The dataset allows us to easily calculate *tokenizer fertility* and the *proportion of continued words* on any of the supported languages. In the example below, we take the Mistral tokenizer and evaluate its performance on Slovak.
 
 ```python
 from transformers import AutoTokenizer
@@ -1294,4 +1294,7 @@ print('Prop. continued words:', df.cont_prop.mean())
 
 ## Dataset Creation
 
 We loosely follow the approach of [Rust _et al.](https://arxiv.org/abs/2012.15613) using the fast [UDPipe](https://ufal.mff.cuni.cz/udpipe) to pre-split documents into words and subsequently run the tokenizer over isolated words. For all languages we use the respective November 2023 snapshot from [Wikipedia](wikimedia/wikipedia). Since Wikipedia, by nature, contains significantly more numbers and dates than other text and most tokenizers split those into single digits, we filtered all lone-standing numbers from the documents. Additionally, we removed any documents that still contained non-parsed HTML code (less than 1%).
+
+## Licensing
+We release our curated benchmark and any associated code under the [MIT](https://opensource.org/license/mit) license. However, depending on your use case, the licensing conditions of the original [Wikipedia data](https://huggingface.co/datasets/wikimedia/wikipedia#licensing-information) and [UDPipe](https://github.com/ufal/udpipe/tree/udpipe-2?tab=License-1-ov-file) may apply.
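For reference, here is a minimal sketch of how *tokenizer fertility* (average sub-tokens per word) and the *proportion of continued words* (share of words split into more than one token) can be computed on a single language split. The dataset repository id, the `sk` config name, the `words` column, and the exact Mistral checkpoint are assumptions for illustration and are not taken from the README; its own (elided) example aggregates the same quantities via a dataframe, ending in `print('Prop. continued words:', df.cont_prop.mean())`.

```python
# Hedged sketch: fertility = average sub-tokens per word; a "continued" word is one
# that the tokenizer splits into more than one token. The repository id, config name
# "sk", the "words" column, and the Mistral checkpoint are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
ds = load_dataset("occiglot/tokenizer-wiki-bench", "sk", split="train")  # hypothetical id/config

token_counts = []
for doc in ds.select(range(min(100, len(ds)))):  # small sample for speed
    for word in doc["words"]:                    # assumed column of pre-split words
        token_counts.append(len(tokenizer.tokenize(word)))

fertility = sum(token_counts) / len(token_counts)
cont_prop = sum(c > 1 for c in token_counts) / len(token_counts)
print("Fertility:", fertility)
print("Prop. continued words:", cont_prop)
```

Tokenizing each word in isolation matches the word-level setup described under Dataset Creation; whether to prepend a leading space is a design choice that can shift fertility for tokenizers with space-prefixed vocabularies.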
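The filtering step described under Dataset Creation can be pictured with a small sketch like the one below. The regular expressions and the document representation (a list of pre-split words) are assumptions for illustration, not the pipeline that was actually used.

```python
import re

# Hedged sketch of the described filtering: drop lone-standing numbers and skip
# documents that still contain HTML. The regexes are illustrative, not the original pipeline.
NUMBER_RE = re.compile(r"^\d+([.,]\d+)*$")   # standalone numbers such as "1990" or "41,000"
HTML_RE = re.compile(r"</?[a-zA-Z][^>]*>")   # crude detector for non-parsed HTML tags

def clean_document(words):
    """Return the filtered word list, or None if the document should be dropped."""
    if any(HTML_RE.search(word) for word in words):
        return None
    return [word for word in words if not NUMBER_RE.match(word)]

print(clean_document(["In", "1990", "the", "population", "was", "41,000", "."]))
# ['In', 'the', 'population', 'was', '.']
```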