Usage: specify use_fast=False and trust_remote_code=True as arguments to AutoTokenizer.from_pretrained():
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/hf-slow-tokenizer-v21b1",
    legacy=True,
    use_fast=False,
    trust_remote_code=True,
)
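Once loaded, the tokenizer behaves like any other slow tokenizer. A minimal round-trip check (the sample sentence below is illustrative, not part of the model card):

text = "こんにちは、世界"
ids = tokenizer.encode(text)   # encode text to token IDs
print(ids)
print(tokenizer.decode(ids))   # decode back to text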
When you create the tokenizer instance from a locally downloaded copy of the model with trust_remote_code=True, the model directory must be placed directly under the current directory, and pretrained_model_name_or_path must not start with "./" but must end with "/":
tokenizer = AutoTokenizer.from_pretrained(
    "hf-slow-tokenizer-v21b1/",
    legacy=True,
    use_fast=False,
    trust_remote_code=True,
)
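One way to place the repository directly under the current directory is with huggingface_hub's snapshot_download; this is a sketch, and the local_dir name is an assumption that simply mirrors the repository name:

from huggingface_hub import snapshot_download

# Download the tokenizer repository into ./hf-slow-tokenizer-v21b1,
# directly under the current directory as required above.
snapshot_download(
    repo_id="llm-jp/hf-slow-tokenizer-v21b1",
    local_dir="hf-slow-tokenizer-v21b1",
)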