fix eot_id token?
Hi!
Are you planning to fix the tokenizer config (see https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct-GGUF/discussions/2 )? I don't know what the actual Llama-3 repo does.
I read there are a few PRs open in llama.cpp to adapt to the way Llama-3 uses BPE tokens.
I just did a quick hacky fix I found; I don't expect it to be a real solution, though.
Problem: Llama-3 uses two different stop tokens (<|end_of_text|> and <|eot_id|>), but llama.cpp only supports one.
The reason <|end_of_text|> did not work is a llama.cpp limitation; I see people working on adding support for it now.
So the template in the config ends each turn with <|eot_id|>, which is why changing the two eos_token entries below stops the endless generation (until llama.cpp itself is fixed).
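If you want to see for yourself what the template emits, here is a quick sketch using transformers; the model id is just an assumption, point it at whichever copy of the Llama-3 Instruct tokenizer you are actually using:

```python
from transformers import AutoTokenizer

# Model id is an assumption; use whatever copy of the Instruct tokenizer you have.
tok = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# Each rendered turn ends with <|eot_id|>, not <|end_of_text|>, so a runtime
# that only watches the single configured eos token never sees a stop token.
```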
In special_tokens_map.json, set the "eos_token" value to "<|eot_id|>"
In tokenizer_config.json, set the "eos_token" value to "<|eot_id|>"
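This is a minimal sketch of that edit; the paths are assumptions for wherever the model's tokenizer files live, and some exports store eos_token as a dict with a "content" field rather than a plain string, so it handles both:

```python
import json

# Adjust these paths to wherever the model's tokenizer files live.
for path in ("special_tokens_map.json", "tokenizer_config.json"):
    with open(path) as f:
        cfg = json.load(f)

    # eos_token is a plain string in some exports and a dict with a
    # "content" field in others; handle both cases.
    if isinstance(cfg.get("eos_token"), dict):
        cfg["eos_token"]["content"] = "<|eot_id|>"
    else:
        cfg["eos_token"] = "<|eot_id|>"

    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
```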
Maybe this is irrelevant now... I did this before they updated the template today.
I now think the fix isn't really a fix; this is just a new model feature that llama.cpp doesn't handle yet.