# Probe how the GPT-3.5-turbo tokenizer handles token id 100256, an id that is
# not assigned in the cl100k_base vocabulary (the gap just before <|endoftext|>).
from vocab.gpt_35_turbo import tokenizer

print(tokenizer.decode([100256]))
print(tokenizer.convert_ids_to_tokens([100256]))
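
# --- Optional cross-check (not part of the original script) ---
# A minimal sketch assuming the wrapper above is backed by tiktoken's cl100k_base
# encoding; it probes the same id directly with tiktoken for comparison.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode("<|endoftext|>", allowed_special="all"))  # expected: [100257]
try:
    # Depending on the tiktoken version, decoding the unassigned id may raise.
    print(enc.decode([100256]))
except Exception as exc:
    print("id 100256 is not decodable:", exc)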