mbrack committed
Commit 958418d
1 Parent(s): 3c24ce7

Update README.md

Files changed (1):
  1. README.md +81 -0
README.md CHANGED
@@ -1213,4 +1213,85 @@ configs:
  data_files:
  - split: train
    path: vi/train-*
language:
- af
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- ga
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ko
- lt
- lv
- mr
- nl
- no
- pl
- pt
- ro
- ru
- sa
- sk
- sl
- sr
- sv
- ta
- te
- tr
- uk
- ur
- vi
---

# Multilingual Tokenizer Benchmark

This dataset includes pre-processed Wikipedia data for tokenizer evaluation in 45 languages.

## Usage
The dataset makes it easy to calculate tokenizer fertility (the average number of tokens produced per word) and the proportion of continued words (the share of words split into more than one token) for any of the supported languages. In the example below we take the Mistral tokenizer and evaluate its performance on Slovak.

```python
from transformers import AutoTokenizer
from datasets import load_dataset
import numpy as np

def calculate_metrics(tokens):
    # one entry per word: how many tokens the tokenizer needed for that word
    tmp = np.array([len(y) for y in tokens])
    return {'fertility': np.mean(tmp), 'cont_prop': np.count_nonzero(tmp > 1) / tmp.shape[0]}

tokenizer_name = 'mistralai/Mistral-7B-v0.1'
language = 'sk'  # Slovak
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
ds = load_dataset('occiglot/tokenizer-wiki-bench', name=language, split='clean')

# tokenize each pre-split word of a document in isolation
remove_columns = list(set(ds.column_names) - set(["text"]))
ds = ds.map(lambda x: {'tokens': tokenizer(x['split_text'], add_special_tokens=False)['input_ids']}, num_proc=256, remove_columns=remove_columns, batched=False)

# compute per-document metrics and average them over the whole split
ds = ds.map(lambda x: calculate_metrics(x['tokens']), num_proc=256, batched=False)
df = ds.to_pandas()

print('Fertility: ', df.fertility.mean())
print('Prop. continued words:', df.cont_prop.mean())
```
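
The same metrics can also be computed for several tokenizers in a loop. Below is a minimal sketch of such a comparison, reusing the imports and the `calculate_metrics` helper from the example above; the `evaluate` helper and the tokenizer list are illustrative placeholders, and `num_proc` should be adapted to your machine.

```python
# Minimal comparison sketch; assumes AutoTokenizer, load_dataset and
# calculate_metrics from the example above are already defined.
def evaluate(tokenizer_name, language):
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
    ds = load_dataset('occiglot/tokenizer-wiki-bench', name=language, split='clean')
    ds = ds.map(
        lambda x: calculate_metrics(tokenizer(x['split_text'], add_special_tokens=False)['input_ids']),
        remove_columns=ds.column_names, num_proc=8, batched=False)
    df = ds.to_pandas()
    return df.fertility.mean(), df.cont_prop.mean()

# tokenizer names are examples only
for name in ['mistralai/Mistral-7B-v0.1', 'meta-llama/Llama-2-7b-hf']:
    fertility, cont_prop = evaluate(name, 'sk')
    print(f'{name}: fertility={fertility:.3f}, continued words={cont_prop:.2%}')
```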

## Dataset Creation

We loosely follow the approach of [Rust _et al._](https://arxiv.org/abs/2012.15613), using the fast [UDPipe](https://ufal.mff.cuni.cz/udpipe) to pre-split documents into words and subsequently running the tokenizer over the isolated words. For all languages we use the respective November 2023 snapshot from [Wikipedia](wikimedia/wikipedia). Since Wikipedia by nature contains significantly more numbers and dates than other text, and most tokenizers split these into single digits, we filtered all lone-standing numbers from the documents. Additionally, we removed any documents that still contained non-parsed HTML code (less than 1% of documents).
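
As a rough illustration of this filtering step (not the actual pipeline, which uses UDPipe for word segmentation), the sketch below stands in a simple whitespace split for UDPipe and uses naive regular expressions to drop lone-standing numbers and documents containing leftover HTML.

```python
import re

# Rough approximations; the real pipeline uses UDPipe for word segmentation.
NUMBER_RE = re.compile(r'^\d+([.,]\d+)*$')   # lone-standing numbers and simple numeric dates
HTML_RE = re.compile(r'</?[a-zA-Z][^>]*>')   # crude test for non-parsed HTML tags

def preprocess_document(text):
    """Return the pre-split word list for a document, or None if the document is dropped."""
    if HTML_RE.search(text):                  # discard documents still containing HTML
        return None
    words = text.split()                      # stand-in for UDPipe word segmentation
    return [w for w in words if not NUMBER_RE.match(w)]  # drop lone-standing numbers

docs = ['The city had 80000 inhabitants in 2023 .', '<div>unparsed article</div>']
split_docs = [w for w in (preprocess_document(d) for d in docs) if w is not None]
print(split_docs)  # [['The', 'city', 'had', 'inhabitants', 'in', '.']]
```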