Fix eos_token in tokenizer_config.json

#3
opened by BM-TNG

eos_token should likely be "<|endoftext|>" for the base model, to be consistent with config.json and generation_config.json.
Without this fix, text generation does not stop when using fill-in-the-middle prompts.
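
For anyone wanting to verify the mismatch or work around it before the fix lands, here is a minimal sketch using `transformers` (the repo id is a placeholder, not the actual model name):

```python
from transformers import AutoConfig, AutoTokenizer

repo = "org/base-model"  # placeholder; substitute the actual repo id

tok = AutoTokenizer.from_pretrained(repo)
cfg = AutoConfig.from_pretrained(repo)

# config.json / generation_config.json store the eos token as an id;
# tokenizer_config.json stores it as a string. The two should agree.
print("tokenizer eos_token: ", tok.eos_token)
print("config eos_token_id: ", cfg.eos_token_id)
print("id of <|endoftext|>: ", tok.convert_tokens_to_ids("<|endoftext|>"))

# Workaround until this PR is merged: override the eos token at load time.
tok = AutoTokenizer.from_pretrained(repo, eos_token="<|endoftext|>")
```

Alternatively, `eos_token_id` can be passed directly to `model.generate()` so that fill-in-the-middle completions stop at `<|endoftext|>`.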

Great fix. But did Ali even run a single test case before publishing? They wrote thousands of words claiming this model is top of the world, yet shipped it with such an obvious bug.

