Having Issues Loading in Runpod
Hey, I'm having issues loading this model in Runpod. I'm using your template and settings that were mentioned in the instructions. Not sure what the issue is. Here is the error it gives me:
Traceback (most recent call last):
  File "/root/text-generation-webui/server.py", line 62, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/root/text-generation-webui/modules/models.py", line 73, in load_model
    tokenizer = load_tokenizer(model_name, model)
  File "/root/text-generation-webui/modules/models.py", line 98, in load_tokenizer
    tokenizer = LlamaTokenizer.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}/"), clean_up_tokenization_spaces=True)
  File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 1796, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'models/TheBloke_guanaco-65B-GPTQ'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models/TheBloke_guanaco-65B-GPTQ' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.
It looks like you're missing files from the download. Downloads sometimes hit an error and abort partway through, and text-generation-webui's downloader doesn't implement any retrying.
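If you want to confirm which files are missing before re-downloading, you can list what's in the model folder against what a LlamaTokenizer needs. This is just a sketch: the exact file set can vary by model, and the file names below are an assumption based on the usual slow-tokenizer layout.

```python
from pathlib import Path

# Path from the error message; adjust if your model dir differs.
model_dir = Path("models/TheBloke_guanaco-65B-GPTQ")

# Files a LlamaTokenizer usually needs (assumed typical layout).
expected = ["tokenizer.model", "tokenizer_config.json", "special_tokens_map.json"]

missing = [name for name in expected if not (model_dir / name).exists()]
print("missing:", missing)
```

If `missing` is non-empty (or the directory doesn't exist at all), that matches the tokenizer load failure in the traceback.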
Try downloading the model again. It won't re-download anything you already have; it will just finish any incomplete files and add any missing ones.
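The retry logic the downloader lacks is simple to wrap around any download call yourself. Here's a minimal sketch; `fetch` is a placeholder for whatever zero-argument download callable you actually use, and the flaky stand-in below only exists to demonstrate the behavior.

```python
import time


def download_with_retry(fetch, attempts=3, delay=1.0):
    """Call `fetch()` up to `attempts` times, pausing `delay` seconds
    between failures, and re-raise only if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except OSError as exc:
            if attempt == attempts:
                raise
            print(f"attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)


# Demo with a flaky stand-in that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return "done"

print(download_with_retry(flaky, attempts=3, delay=0))  # prints "done" after two retries
```

On a shaky connection this kind of wrapper saves you from manually re-running the download script each time.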
Thanks for the tip, running the download again solved the issue. For some reason I have to keep doing that with the models I download on Runpod, but at least it works.
This is my issue too. I suspect the internet connection may be causing the download errors.