# InternLM2-7B in LLaMA format with tokenizer fixed for llama.cpp

This is InternLM2-7B converted to the LLaMA checkpoint layout, with the tokenizer fixed so the model converts and runs correctly under llama.cpp. See [InternLM/InternLM#612](https://github.com/InternLM/InternLM/issues/612) for background on the tokenizer issue.
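
As an illustrative sketch only: once this LLaMA-format checkpoint has been converted to GGUF with llama.cpp's own conversion tooling, it can be run through the `llama-cpp-python` bindings. The model path, context size, and prompt below are placeholders, not part of this repository.

```python
# Minimal sketch: run a GGUF conversion of this model via llama-cpp-python.
# Assumes the checkpoint was already converted to GGUF with llama.cpp's
# conversion script; the file path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./internlm2-7b-llama.gguf",  # placeholder GGUF path
    n_ctx=4096,                              # context window size
)

output = llm(
    "Summarize what InternLM2 is in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```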