This is a Polish fast tokenizer.

Number of documents used to train the tokenizer:

  • 25 088 398

Sample usage with transformers:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('radlab/polish-fast-tokenizer')
tokenizer.decode(tokenizer("Ala ma kota i psa").input_ids)
```
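For context, a "fast" tokenizer is one backed by the Rust `tokenizers` library rather than pure Python. The sketch below shows how such a tokenizer can be trained from scratch on a tiny in-memory corpus; the corpus, vocabulary size, and special tokens here are illustrative assumptions, not the actual training configuration of radlab/polish-fast-tokenizer.

```python
# Minimal sketch of training a BPE fast tokenizer with the `tokenizers`
# library. The corpus and hyperparameters are hypothetical examples.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Byte-pair-encoding model with an explicit unknown token
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Tiny illustrative Polish corpus (the real tokenizer was trained on
# ~25 million documents)
corpus = [
    "Ala ma kota i psa",
    "Kot i pies to zwierzęta domowe",
]

trainer = BpeTrainer(special_tokens=["[UNK]", "[PAD]"], vocab_size=1000)
tokenizer.train_from_iterator(corpus, trainer=trainer)

# Encode a sample sentence and inspect the produced tokens
enc = tokenizer.encode("Ala ma kota")
print(enc.tokens)
```

A tokenizer trained this way can be saved with `tokenizer.save("tokenizer.json")` and later loaded through `transformers` via `PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")`.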
