
A converted version of the gpt4all weights that uses the ggjt file magic, so the model can be loaded by llama.cpp or pyllamacpp. Full credit goes to the GPT4All project.
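
The ggjt magic is the four-byte identifier at the start of the converted file. As a quick sanity check after downloading (see the usage section below for the download step), you can read that header yourself. The snippet is a minimal sketch that assumes the file begins with the magic stored as a little-endian uint32 (`0x67676a74`, the ASCII bytes "ggjt", as defined in llama.cpp) and that the file has been saved locally as `ggjt-model.bin`.

```python
import struct

# Read the first 4 bytes of the converted model and compare them to the
# ggjt magic value used by llama.cpp (0x67676a74, i.e. "ggjt").
with open("ggjt-model.bin", "rb") as f:
    magic = struct.unpack("<I", f.read(4))[0]

if magic == 0x67676A74:
    print("File carries the ggjt magic")
else:
    print(f"Unexpected magic: {magic:#010x}")
```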

## Usage via pyllamacpp

Installation:

```bash
pip install pyllamacpp
```

Download and inference:

```python
from huggingface_hub import hf_hub_download
from pyllamacpp.model import Model

# Download the model from the Hugging Face Hub into the current directory
hf_hub_download(repo_id="LLukas22/gpt4all-lora-quantized-ggjt", filename="ggjt-model.bin", local_dir=".")

# Load the model with a 2000-token context window
model = Model(ggml_model="ggjt-model.bin", n_ctx=2000)

# Generate a completion for a simple chat-style prompt
prompt = "User: How are you doing?\nBot:"

result = model.generate(prompt, n_predict=50)
print(result)
```
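
pyllamacpp can also stream tokens as they are produced instead of returning the whole string at once. The sketch below assumes the installed pyllamacpp release exposes a `new_text_callback` keyword on `Model.generate` (as in the early 1.x API); check the package's documentation for the exact signature of your version.

```python
from pyllamacpp.model import Model

# Callback invoked for each newly generated piece of text
def on_token(text: str):
    print(text, end="", flush=True)

model = Model(ggml_model="ggjt-model.bin", n_ctx=2000)

# Stream the completion token by token via the callback
# (assumes new_text_callback is supported by the installed version)
model.generate("User: How are you doing?\nBot:", n_predict=50,
               new_text_callback=on_token)
```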