---
language:
- af
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- ga
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ko
- lt
- lv
- mr
- nl
- 'no'
- pl
- pt
- ro
- ru
- sa
- sk
- sl
- sr
- sv
- ta
- te
- tr
- uk
- ur
- vi
- zh
license: mit
pretty_name: Multilingual Tokenizer Wikipedia Benchmark
dataset_info:
- config_name: af
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 541481060
num_examples: 112518
- name: clean
num_bytes: 539551289.6071739
num_examples: 112117
download_size: 441191361
dataset_size: 1081032349.607174
- config_name: ar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 7007645793
num_examples: 1219201
- name: clean
num_bytes: 6980694657.688122
num_examples: 1214512
download_size: 4415559180
dataset_size: 13988340450.688122
- config_name: bg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2437923560
num_examples: 294275
- name: clean
num_bytes: 2433855866.6248918
num_examples: 293784
download_size: 1805069655
dataset_size: 4871779426.624891
- config_name: ca
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 4801022979
num_examples: 737409
- name: clean
num_bytes: 4766991732.959834
num_examples: 732182
download_size: 3884482903
dataset_size: 9568014711.959835
- config_name: cs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3740905267
num_examples: 534044
- name: clean
num_bytes: 3730243864.91258
num_examples: 532522
download_size: 3671037924
dataset_size: 7471149131.9125805
- config_name: da
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1298277678
num_examples: 295347
- name: clean
num_bytes: 1292602738.074089
num_examples: 294056
download_size: 1782396281
dataset_size: 2590880416.074089
- config_name: de
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 23086869184
num_examples: 2845308
- name: clean
num_bytes: 23073148386.18474
num_examples: 2843617
download_size: 21942020975
dataset_size: 46160017570.18474
- config_name: el
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3002968703
num_examples: 226834
- name: clean
num_bytes: 2973684879.714972
num_examples: 224622
download_size: 2295250961
dataset_size: 5976653582.714972
- config_name: en
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 49746869820
num_examples: 6407814
- name: clean
num_bytes: 49560903666.851944
num_examples: 6383860
download_size: 40592018321
dataset_size: 99307773486.85194
- config_name: es
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 14759846818
num_examples: 1841155
- name: clean
num_bytes: 14536992695.618353
num_examples: 1813356
download_size: 12175892555
dataset_size: 29296839513.618355
- config_name: et
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1019050491
num_examples: 240397
- name: clean
num_bytes: 1016723262.6254404
num_examples: 239848
download_size: 1019164563
dataset_size: 2035773753.6254404
- config_name: eu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1291195010
num_examples: 416347
- name: clean
num_bytes: 1265327506.262949
num_examples: 408006
download_size: 968840915
dataset_size: 2556522516.262949
- config_name: fa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 4224898253
num_examples: 979869
- name: clean
num_bytes: 4213433450.6083264
num_examples: 977210
download_size: 2499698548
dataset_size: 8438331703.608326
- config_name: fi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2600737260
num_examples: 561598
- name: clean
num_bytes: 2595874753.1481237
num_examples: 560548
download_size: 2642007766
dataset_size: 5196612013.148124
- config_name: fr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 20069732840
num_examples: 2564646
- name: clean
num_bytes: 19942544382.860683
num_examples: 2548393
download_size: 16151551755
dataset_size: 40012277222.86069
- config_name: ga
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 142209710
num_examples: 59156
- name: clean
num_bytes: 141702470.68682805
num_examples: 58945
download_size: 121745838
dataset_size: 283912180.686828
- config_name: he
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 4776226234
num_examples: 333874
- name: clean
num_bytes: 4760232712.702708
num_examples: 332756
download_size: 3499530576
dataset_size: 9536458946.70271
- config_name: hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1452853579
num_examples: 163093
- name: clean
num_bytes: 1443152625.8779714
num_examples: 162004
download_size: 785363639
dataset_size: 2896006204.8779716
- config_name: hr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1064630680
num_examples: 202848
- name: clean
num_bytes: 1053026432.3195693
num_examples: 200637
download_size: 1028743775
dataset_size: 2117657112.3195693
- config_name: hu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3533169653
num_examples: 532427
- name: clean
num_bytes: 3510335279.8822336
num_examples: 528986
download_size: 3558613373
dataset_size: 7043504932.882234
- config_name: hy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2568868378
num_examples: 303036
- name: clean
num_bytes: 2555898405.394963
num_examples: 301506
download_size: 1781142597
dataset_size: 5124766783.394962
- config_name: id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2650288629
num_examples: 665622
- name: clean
num_bytes: 2630666948.280745
num_examples: 660694
download_size: 2040186206
dataset_size: 5280955577.280745
- config_name: it
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 12188918391
num_examples: 1833639
- name: clean
num_bytes: 12163279397.591763
num_examples: 1829782
download_size: 10368836428
dataset_size: 24352197788.591763
- config_name: ja
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 18752888787
num_examples: 1389467
- name: clean
num_bytes: 18684866617.717476
num_examples: 1384427
download_size: 15232900753
dataset_size: 37437755404.717476
- config_name: ko
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3160932689
num_examples: 647897
- name: clean
num_bytes: 3151741108.878351
num_examples: 646013
download_size: 3074385022
dataset_size: 6312673797.878351
- config_name: lt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 781319902
num_examples: 211292
- name: clean
num_bytes: 777474168.616436
num_examples: 210252
download_size: 722780874
dataset_size: 1558794070.616436
- config_name: lv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 531956241
num_examples: 123413
- name: clean
num_bytes: 530943303.00615007
num_examples: 123178
download_size: 700342420
dataset_size: 1062899544.00615
- config_name: mr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 547060763
num_examples: 94133
- name: clean
num_bytes: 545450957.3914355
num_examples: 93856
download_size: 278141890
dataset_size: 1092511720.3914356
- config_name: nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 6191062892
num_examples: 2135977
- name: clean
num_bytes: 6177393712.697661
num_examples: 2131261
download_size: 5179824678
dataset_size: 12368456604.697662
- config_name: 'no'
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2437191515
num_examples: 617937
- name: clean
num_bytes: 2428893175.610127
num_examples: 615833
download_size: 2175299531
dataset_size: 4866084690.6101265
- config_name: pl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 6864626419
num_examples: 1587721
- name: clean
num_bytes: 6861024883.335341
num_examples: 1586888
download_size: 6565864124
dataset_size: 13725651302.335342
- config_name: pt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 6844185526
num_examples: 1112246
- name: clean
num_bytes: 6755821527.2502985
num_examples: 1097886
download_size: 5516209748
dataset_size: 13600007053.250298
- config_name: ro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2023493174
num_examples: 442389
- name: clean
num_bytes: 2006866635.6197736
num_examples: 438754
download_size: 1652633599
dataset_size: 4030359809.619774
- config_name: ru
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 22550679128
num_examples: 1945063
- name: clean
num_bytes: 22439204702.844765
num_examples: 1935448
download_size: 18884603758
dataset_size: 44989883830.844765
- config_name: sa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 146460109
num_examples: 12156
- name: clean
num_bytes: 145435996.68797302
num_examples: 12071
download_size: 95836795
dataset_size: 291896105.687973
- config_name: sk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 977962245
num_examples: 242235
- name: clean
num_bytes: 976048590.4738994
num_examples: 241761
download_size: 1346611201
dataset_size: 1954010835.4738994
- config_name: sl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1106532891
num_examples: 183006
- name: clean
num_bytes: 1097995332.4385757
num_examples: 181594
download_size: 1006028852
dataset_size: 2204528223.4385757
- config_name: sr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3755288114
num_examples: 676605
- name: clean
num_bytes: 3735557179.0449376
num_examples: 673050
download_size: 2558022832
dataset_size: 7490845293.044937
- config_name: sv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 4861956987
num_examples: 2574513
- name: clean
num_bytes: 4857071448.365948
num_examples: 2571926
download_size: 3512612936
dataset_size: 9719028435.365948
- config_name: ta
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1693909025
num_examples: 160651
- name: clean
num_bytes: 1682405487.85255
num_examples: 159560
download_size: 985318775
dataset_size: 3376314512.85255
- config_name: te
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1556095028
num_examples: 87854
- name: clean
num_bytes: 1550320823.3066678
num_examples: 87528
download_size: 746686495
dataset_size: 3106415851.306668
- config_name: tr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 2317236022
num_examples: 534988
- name: clean
num_bytes: 2301578085.336879
num_examples: 531373
download_size: 2055444454
dataset_size: 4618814107.336879
- config_name: uk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 10935662610
num_examples: 1294720
- name: clean
num_bytes: 10860532296.947023
num_examples: 1285825
download_size: 8344390939
dataset_size: 21796194906.94702
- config_name: ur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 918249794
num_examples: 200154
- name: clean
num_bytes: 912616078.225986
num_examples: 198926
download_size: 534834968
dataset_size: 1830865872.225986
- config_name: vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3685585608
num_examples: 1288680
- name: clean
num_bytes: 3669872935.086358
num_examples: 1283186
download_size: 2646807342
dataset_size: 7355458543.086358
- config_name: zh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 7820979602
num_examples: 1384748
- name: clean
num_bytes: 7781957954.689285
num_examples: 1377839
download_size: 6540517932
dataset_size: 15602937556.689285
configs:
- config_name: af
data_files:
- split: train
path: af/train-*
- split: clean
path: af/clean-*
- config_name: ar
data_files:
- split: train
path: ar/train-*
- split: clean
path: ar/clean-*
- config_name: bg
data_files:
- split: train
path: bg/train-*
- split: clean
path: bg/clean-*
- config_name: ca
data_files:
- split: train
path: ca/train-*
- split: clean
path: ca/clean-*
- config_name: cs
data_files:
- split: train
path: cs/train-*
- split: clean
path: cs/clean-*
- config_name: da
data_files:
- split: train
path: da/train-*
- split: clean
path: da/clean-*
- config_name: de
data_files:
- split: train
path: de/train-*
- split: clean
path: de/clean-*
- config_name: el
data_files:
- split: train
path: el/train-*
- split: clean
path: el/clean-*
- config_name: en
data_files:
- split: train
path: en/train-*
- split: clean
path: en/clean-*
- config_name: es
data_files:
- split: train
path: es/train-*
- split: clean
path: es/clean-*
- config_name: et
data_files:
- split: train
path: et/train-*
- split: clean
path: et/clean-*
- config_name: eu
data_files:
- split: train
path: eu/train-*
- split: clean
path: eu/clean-*
- config_name: fa
data_files:
- split: train
path: fa/train-*
- split: clean
path: fa/clean-*
- config_name: fi
data_files:
- split: train
path: fi/train-*
- split: clean
path: fi/clean-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- split: clean
path: fr/clean-*
- config_name: ga
data_files:
- split: train
path: ga/train-*
- split: clean
path: ga/clean-*
- config_name: he
data_files:
- split: train
path: he/train-*
- split: clean
path: he/clean-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- split: clean
path: hi/clean-*
- config_name: hr
data_files:
- split: train
path: hr/train-*
- split: clean
path: hr/clean-*
- config_name: hu
data_files:
- split: train
path: hu/train-*
- split: clean
path: hu/clean-*
- config_name: hy
data_files:
- split: train
path: hy/train-*
- split: clean
path: hy/clean-*
- config_name: id
data_files:
- split: train
path: id/train-*
- split: clean
path: id/clean-*
- config_name: it
data_files:
- split: train
path: it/train-*
- split: clean
path: it/clean-*
- config_name: ja
data_files:
- split: train
path: ja/train-*
- split: clean
path: ja/clean-*
- config_name: ko
data_files:
- split: train
path: ko/train-*
- split: clean
path: ko/clean-*
- config_name: lt
data_files:
- split: train
path: lt/train-*
- split: clean
path: lt/clean-*
- config_name: lv
data_files:
- split: train
path: lv/train-*
- split: clean
path: lv/clean-*
- config_name: mr
data_files:
- split: train
path: mr/train-*
- split: clean
path: mr/clean-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- split: clean
path: nl/clean-*
- config_name: 'no'
data_files:
- split: train
path: no/train-*
- split: clean
path: no/clean-*
- config_name: pl
data_files:
- split: train
path: pl/train-*
- split: clean
path: pl/clean-*
- config_name: pt
data_files:
- split: train
path: pt/train-*
- split: clean
path: pt/clean-*
- config_name: ro
data_files:
- split: train
path: ro/train-*
- split: clean
path: ro/clean-*
- config_name: ru
data_files:
- split: train
path: ru/train-*
- split: clean
path: ru/clean-*
- config_name: sa
data_files:
- split: train
path: sa/train-*
- split: clean
path: sa/clean-*
- config_name: sk
data_files:
- split: train
path: sk/train-*
- split: clean
path: sk/clean-*
- config_name: sl
data_files:
- split: train
path: sl/train-*
- split: clean
path: sl/clean-*
- config_name: sr
data_files:
- split: train
path: sr/train-*
- split: clean
path: sr/clean-*
- config_name: sv
data_files:
- split: train
path: sv/train-*
- split: clean
path: sv/clean-*
- config_name: ta
data_files:
- split: train
path: ta/train-*
- split: clean
path: ta/clean-*
- config_name: te
data_files:
- split: train
path: te/train-*
- split: clean
path: te/clean-*
- config_name: tr
data_files:
- split: train
path: tr/train-*
- split: clean
path: tr/clean-*
- config_name: uk
data_files:
- split: train
path: uk/train-*
- split: clean
path: uk/clean-*
- config_name: ur
data_files:
- split: train
path: ur/train-*
- split: clean
path: ur/clean-*
- config_name: vi
data_files:
- split: train
path: vi/train-*
- split: clean
path: vi/clean-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
- split: clean
path: zh/clean-*
---

# Multilingual Tokenizer Benchmark
This dataset contains pre-processed Wikipedia data for tokenizer evaluation in 46 languages. We provide more information on the evaluation task in general in this blog post.
## Usage

The dataset makes it easy to calculate tokenizer fertility and the proportion of continued words for any of the supported languages. In the example below we take the Mistral tokenizer and evaluate its performance on Slovak.

```python
from transformers import AutoTokenizer
from datasets import load_dataset
import numpy as np

def calculate_metrics(tokens):
    lengths = np.array([len(t) for t in tokens])
    return {
        'fertility': np.mean(lengths),
        'cont_prop': np.count_nonzero(lengths > 1) / lengths.shape[0],
    }

tokenizer_name = 'mistralai/Mistral-7B-v0.1'
language = 'sk'  # Slovak

tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
ds = load_dataset('occiglot/tokenizer-wiki-bench', name=language, split='clean')

# Tokenize each pre-split word in isolation; keep only the text column.
remove_columns = list(set(ds.column_names) - {'text'})
ds = ds.map(
    lambda x: {'tokens': tokenizer(x['split_text'], add_special_tokens=False)['input_ids']},
    num_proc=256, remove_columns=remove_columns, batched=False,
)
ds = ds.map(lambda x: calculate_metrics(x['tokens']), num_proc=256, batched=False)

df = ds.to_pandas()
print('Fertility:', df.fertility.mean())
print('Prop. continued words:', df.cont_prop.mean())
```
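To make the two metrics concrete: fertility is the average number of tokens a tokenizer produces per word, and the proportion of continued words is the share of words split into more than one token. A minimal sketch with hypothetical per-word token ids:

```python
import numpy as np

# Hypothetical per-word tokenizations: each inner list holds the token ids
# produced for one pre-split word. Three of the five words are split into
# more than one token ("continued" words).
tokens = [[101], [102, 103], [104], [105, 106, 107], [108, 109]]

lengths = np.array([len(t) for t in tokens])

# Fertility: average number of tokens per word.
fertility = np.mean(lengths)  # (1 + 2 + 1 + 3 + 2) / 5 = 1.8

# Proportion of continued words: share of words split into more than one token.
cont_prop = np.count_nonzero(lengths > 1) / lengths.shape[0]  # 3 / 5 = 0.6

print(fertility, cont_prop)
```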
## Dataset Creation

We loosely follow the approach of Rust et al., using the fast UDPipe to pre-split documents into words and subsequently running the tokenizer over the isolated words. For all languages we use the respective November 2023 snapshot from Wikipedia. Since Wikipedia, by its nature, contains significantly more numbers and dates than other text, and most tokenizers split those into single digits, we filtered all lone-standing numbers from the documents. Additionally, we removed any documents that still contained non-parsed HTML code (less than 1%).
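The number-filtering step can be sketched as follows. This is a simplified illustration rather than the actual pipeline: the whitespace split stands in for UDPipe's word segmentation, and `is_lone_number` is a hypothetical helper approximating the filter described above.

```python
import re

def is_lone_number(word: str) -> bool:
    # Treat a token as a lone-standing number if only digits remain after
    # stripping common numeric separators (e.g. '52,351', '2023', '1.5').
    core = re.sub(r'[.,:/-]', '', word)
    return core.isdigit()

def preprocess(document: str) -> list[str]:
    # Stand-in for UDPipe: split on whitespace, then drop lone-standing numbers.
    words = re.findall(r'\S+', document)
    return [w for w in words if not is_lone_number(w)]

print(preprocess('The city had 52,351 inhabitants in 2023 .'))
# → ['The', 'city', 'had', 'inhabitants', 'in', '.']
```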
## Licensing

We release our curated benchmark and any associated code under the MIT license. However, depending on your use case, the licensing conditions of the original Wikipedia data and UDPipe may apply.
## Supported Languages
This dataset currently contains pre-processed data for the following languages:
| Language | Code |
| --- | --- |
Afrikaans | af |
Arabic | ar |
Armenian | hy |
Basque | eu |
Bulgarian | bg |
Catalan | ca |
Chinese | zh |
Croatian | hr |
Czech | cs |
Danish | da |
Dutch | nl |
English | en |
Estonian | et |
Finnish | fi |
French | fr |
German | de |
Greek | el |
Hebrew | he |
Hindi | hi |
Hungarian | hu |
Indonesian | id |
Irish | ga |
Italian | it |
Japanese | ja |
Korean | ko |
Latvian | lv |
Lithuanian | lt |
Marathi | mr |
Norwegian | no |
Persian | fa |
Polish | pl |
Portuguese | pt |
Romanian | ro |
Russian | ru |
Sanskrit | sa |
Serbian | sr |
Slovak | sk |
Slovenian | sl |
Spanish | es |
Swedish | sv |
Tamil | ta |
Telugu | te |
Turkish | tr |
Ukrainian | uk |
Urdu | ur |
Vietnamese | vi |