How could I run inference with the gguf models?

#6
by davideuler - opened

How can I run inference with the GGUF models? When I run them with the llama.cpp executable, it shows:

gguf_init_from_file: invalid magic characters 'vers'
llama_model_load: error loading model: llama_model_loader: failed to load model from stable-code-3b-Q6_K.gguf

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'stable-code-3b-Q6_K.gguf'
main: error: unable to load model

You can use LangChain for this. Here is sample code in Python:

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

n_gpu_layers = 14  # Change this value based on your model and your GPU VRAM pool.
n_batch = 50  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="models/stable-code-3b.gguf",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    max_tokens=512,
    callback_manager=callback_manager,
    verbose=True,  # Verbose is required to pass to the callback manager
)

# Prompt Template
template = """
    Write a python code for:
    {_prompt_} <|endoftext|>
    """

prompt = PromptTemplate(input_variables=["tweet"], template=template)

res = llm(prompt.format(tweet="How to connect to a sql database"))
print(res)

How can I run inference with the GGUF models? When I run them with the llama.cpp executable, it shows:

gguf_init_from_file: invalid magic characters 'vers'
llama_model_load: error loading model: llama_model_loader: failed to load model from stable-code-3b-Q6_K.gguf

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'stable-code-3b-Q6_K.gguf'
main: error: unable to load model

Hugging Face has made its own C++ inference engine; it abuses the GGUF format instead of changing the name. No one I know has ever used that engine, but it's there, causing incompatible files to be uploaded.
I've seen that type of GGUF file a few times; it's extremely confusing to people.

If you want a GGUF file, download it from "TheBloke"; he converts and hosts them all here on Hugging Face, they all come with an explanation, and they should work.
Or download the model repository and use llama.cpp's convert.py and quantize to create your own GGUF.
If you intend to work a lot with a particular model, that's recommended, as you can quickly regenerate any quantization or improvement you need.
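A minimal sketch of that workflow (the paths and the Q6_K level here are just examples; depending on the model architecture and your llama.cpp version, the converter may be convert.py or convert-hf-to-gguf.py, and the quantize binary may be named llama-quantize):

# clone the original model repository (needs git-lfs for the weight files)
git clone https://huggingface.co/stabilityai/stable-code-3b
# convert the Hugging Face checkpoint to a full-precision GGUF
python convert.py ./stable-code-3b --outfile stable-code-3b-f16.gguf
# quantize to the level you need, e.g. Q6_K
./quantize stable-code-3b-f16.gguf stable-code-3b-Q6_K.gguf Q6_K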

Thanks for the detailed explanation. I downloaded the GGUF from TheBloke, and it works.

davideuler changed discussion status to closed

@davideuler If you got,

gguf_init_from_file: invalid magic characters 'vers'

Then, you probably have a situation like this:

(.venv) [email protected]:~/LanguageLearning-Models/models$ curl https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/raw/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
version https://git-lfs.github.com/spec/v1
oid sha256:9193684683657e90707087bd1ed19fd0b277ab66358d19edeadc26d6fdec4f53
size 26441533376

"vers" is "version", meaning you downloaded a github LFS file rather than the underlying file itself. You should still be able to download the actual model with the correct link.

Hi, thanks for the detailed explanation. Got it to work.
