../llama.cpp/convert_hf_to_gguf.py . --outfile minerva.gguff --outtype f16 : The BPE pre-tokenizer was not recognized!
#2 opened by Raphy10-Collab
I downloaded Minerva-7B-instruct-v1.0:
(.venv) raphy@raohy:~/whisper.cpp/models$ cat download-minerva.py
from huggingface_hub import snapshot_download
model_id="sapienzanlp/Minerva-7B-instruct-v1.0"
snapshot_download(repo_id=model_id, local_dir="minerva",
                  local_dir_use_symlinks=False, revision="main")
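(As an aside, recent versions of huggingface_hub deprecate the local_dir_use_symlinks argument and ignore it when local_dir is set, so a minimal version of the same download, assuming a reasonably recent huggingface_hub, would look like this:)

from huggingface_hub import snapshot_download

# Download the full repository snapshot into ./minerva as plain files;
# revision pinned to "main" as in the original script.
snapshot_download(
    repo_id="sapienzanlp/Minerva-7B-instruct-v1.0",
    local_dir="minerva",
    revision="main",
)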
Then I tried to convert it to GGUF:
(.venv) raphy@raohy:~/whisper.cpp/models/minerva$ ../llama.cpp/convert_hf_to_gguf.py . --outfile minerva.gguff --outtype f16
But I got this error:
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
WARNING:hf-to-gguf:
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:** There are 2 possible reasons for this:
WARNING:hf-to-gguf:** - the model has not been added to convert_hf_to_gguf_update.py yet
WARNING:hf-to-gguf:** - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:** Check your model files and convert_hf_to_gguf_update.py and update them accordingly.
WARNING:hf-to-gguf:** ref: https://github.com/ggerganov/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh: 68fa7e0a33050885cc10a2acfa4df354042188f0afa03b809f7a71c4cde6e373
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:
Traceback (most recent call last):
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 1566, in set_vocab
self._set_vocab_sentencepiece()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 789, in _set_vocab_sentencepiece
tokens, scores, toktypes = self._create_vocab_sentencepiece()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 806, in _create_vocab_sentencepiece
raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: tokenizer.model
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 1569, in set_vocab
self._set_vocab_llama_hf()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 881, in _set_vocab_llama_hf
vocab = gguf.LlamaHfVocab(self.dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/gguf-py/gguf/vocab.py", line 390, in __init__
raise FileNotFoundError('Cannot find Llama BPE tokenizer')
FileNotFoundError: Cannot find Llama BPE tokenizer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 5077, in <module>
main()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 5071, in main
model_instance.write()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 440, in write
self.prepare_metadata(vocab_only=False)
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 433, in prepare_metadata
self.set_vocab()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 1572, in set_vocab
self._set_vocab_gpt2()
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 725, in _set_vocab_gpt2
tokens, toktypes, tokpre = self.get_vocab_base()
^^^^^^^^^^^^^^^^^^^^^
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 526, in get_vocab_base
tokpre = self.get_vocab_base_pre(tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raphy/whisper.cpp/models/minerva/../llama.cpp/convert_hf_to_gguf.py", line 713, in get_vocab_base_pre
raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()")
NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()
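For context, the chkhsh in the warning is a fingerprint of the tokenizer's behaviour: convert_hf_to_gguf.py encodes a long, fixed probe string, hashes the resulting token IDs, and looks the digest up in a hard-coded table to choose the right pre-tokenizer. A rough sketch of that check (with the probe string heavily abbreviated here):

from hashlib import sha256
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # the model directory

# Encode a fixed probe string and hash the resulting token IDs; the real
# probe in get_vocab_base_pre() is much longer than this abbreviation.
chktxt = "\n \n\n \n\n\n \t"
chkhsh = sha256(str(tokenizer.encode(chktxt)).encode()).hexdigest()
print(chkhsh)  # an unknown digest triggers the warning above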
How can I make it work?
Hi! As a first step, try using the latest version of llama.cpp, in case you are on an old one. Let us know if that solves the problem.
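If the latest version still does not recognize the digest, the warning points at the fix itself: get_vocab_base_pre() in convert_hf_to_gguf.py maps known chkhsh values to pre-tokenizer names, and an unsupported model needs a new entry there. A sketch of such an entry, using the digest from the log above (the name "minerva-7b" is only a guess; the supported route is to add the model to the list in convert_hf_to_gguf_update.py and rerun that script, which regenerates these checks automatically):

# inside get_vocab_base_pre() in convert_hf_to_gguf.py, next to the other
# digest checks; "minerva-7b" is a guessed name, and llama.cpp also needs
# a matching pre-tokenizer definition on the C++ side
if chkhsh == "68fa7e0a33050885cc10a2acfa4df354042188f0afa03b809f7a71c4cde6e373":
    # ref: https://huggingface.co/sapienzanlp/Minerva-7B-instruct-v1.0
    res = "minerva-7b"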
Hi!
I git-cloned, compiled, and built the latest version of llama.cpp and used the freshly built binaries, but the problem persists:
raphy@raohy:~$ git clone --recurse-submodules https://github.com/ggerganov/llama.cpp
Cloning into 'llama.cpp'..
raphy@raohy:~/llama.cpp$ cmake -B builddir
-- The C compiler identification is GNU 13.3.0
-- The CXX compiler identification is GNU 14.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- Configuring done (0.7s)
-- Generating done (0.1s)
-- Build files have been written to: /home/raphy/llama.cpp/builddir
raphy@raohy:~/llama.cpp$ cmake --build builddir/ --config Release
[ 1%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
[ 1%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
[ 99%] Building CXX object pocs/vdot/CMakeFiles/llama-q8dot.dir/q8dot.cpp.o
[100%] Linking CXX executable ../../bin/llama-q8dot
[100%] Built target llama-q8dot
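(A side note: the C++ build itself should have no effect here. convert_hf_to_gguf.py is plain Python, so what matters is that the script comes from an up-to-date checkout and that its dependencies are installed, e.g. with pip install -r llama.cpp/requirements.txt inside the virtualenv.)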
(.venv) raphy@raohy:~/whisper.cpp/models/models--sapienzanlp--Minerva-7B-instruct-v1.0$ ../../../llama.cpp/convert_hf_to_gguf.py . --outfile minerva.gguff --outtype f16
INFO:hf-to-gguf:Loading model:
Traceback (most recent call last):
File "/home/raphy/whisper.cpp/models/models--sapienzanlp--Minerva-7B-instruct-v1.0/../../../llama.cpp/convert_hf_to_gguf.py", line 5140, in <module>
main()
File "/home/raphy/whisper.cpp/models/models--sapienzanlp--Minerva-7B-instruct-v1.0/../../../llama.cpp/convert_hf_to_gguf.py", line 5108, in main
hparams = Model.load_hparams(dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/raphy/whisper.cpp/models/models--sapienzanlp--Minerva-7B-instruct-v1.0/../../../llama.cpp/convert_hf_to_gguf.py", line 468, in load_hparams
with open(dir_model / "config.json", "r", encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'
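This last traceback looks like a path problem rather than a tokenizer problem: a directory named models--sapienzanlp--Minerva-7B-instruct-v1.0 follows the Hugging Face cache layout, which keeps the actual files under snapshots/<commit-hash>/ (next to blobs/ and refs/), so there is no config.json at the top level. A small sketch for locating the real snapshot directory, assuming a single cached revision:

from pathlib import Path

cache_dir = Path("models--sapienzanlp--Minerva-7B-instruct-v1.0")
# each cached revision lives in its own folder under snapshots/
snapshot = next((cache_dir / "snapshots").iterdir())
print(snapshot)  # point convert_hf_to_gguf.py at this directory instead

Running the converter against that snapshot path (or against the plain minerva/ directory created by the original download script) should get past the config.json error.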