roberta-large-japanese-char-luw-upos / tokenizer_config.json
{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": false, "never_split": ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"], "model_max_length": 512, "do_basic_tokenize": true, "tokenizer_class": "BertTokenizerFast"}