Usage: you need to pass use_fast=False and trust_remote_code=True as arguments to AutoTokenizer.from_pretrained().

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/hf-slow-tokenizer-v21b1",
    legacy=True,
    use_fast=False,
    trust_remote_code=True,
)

When you create a tokenizer instance from a locally downloaded model with trust_remote_code=True, the model directory must be placed directly under the current directory, and pretrained_model_name_or_path must not start with "./" but must end with "/".

tokenizer = AutoTokenizer.from_pretrained(
    "hf-slow-tokenizer-v21b1/",
    legacy=True,
    use_fast=False,
    trust_remote_code=True,
)
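Once loaded, the tokenizer follows the standard slow-tokenizer API from transformers. The sketch below is illustrative only: the sample sentence is an assumption, and it requires the model files to be available from the Hub (or locally, per the note above).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/hf-slow-tokenizer-v21b1",
    legacy=True,
    use_fast=False,
    trust_remote_code=True,
)

# Illustrative input; any text works.
text = "自然言語処理"

# Standard PreTrainedTokenizer calls: encode to IDs, inspect tokens, decode back.
ids = tokenizer.encode(text)
tokens = tokenizer.convert_ids_to_tokens(ids)
print(tokens)
print(tokenizer.decode(ids))
```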