Model claims to load successfully but produces no text output.
The model loads and uses CPU and RAM, but refuses to communicate. The console output looks like this:
14:58:32-964973 INFO LOADER: "llama.cpp"
14:58:32-965973 INFO TRUNCATION LENGTH: 4096
14:58:32-967973 INFO INSTRUCTION TEMPLATE: "Alpaca"
14:58:32-968974 INFO Loaded the model in 27.87 seconds.
14:59:19-246873 INFO Saved "C:\text-generation-webui-main\text-generation-webui-main\characters\Sophie9.yaml".
Output generated in 126.43 seconds (0.03 tokens/s, 4 tokens, context 609, seed 142011784)
Llama.generate: prefix-match hit
Output generated in 402.52 seconds (0.03 tokens/s, 14 tokens, context 363, seed 46178664)
Llama.generate: prefix-match hit
Output generated in 615.84 seconds (0.03 tokens/s, 20 tokens, context 364, seed 402322849)
Llama.generate: prefix-match hit
I just tried the IQ3_XS version, which seems to work fine. Even if the model were broken, it could not really result in no output at all, so this is a problem with your setup or usage. Maybe you downloaded a quant that your text-generation-webui version does not support? I don't know anything about that tool, so I can't help with it.
It came down to user error, mea culpa.
Typos when merging the part 1/part 2 files will indeed do that.
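For anyone hitting the same thing: older multi-part quants are plain byte splits, so the parts just need to be concatenated in the right order and the result checked against the published checksum. A minimal sketch (the filenames here are dummy stand-ins, not the real part names from the model repo):

```shell
# Dummy stand-ins for the two downloaded parts; in practice use the actual
# part filenames from the model page (e.g. model-q4.gguf.part1of2).
printf 'GGUF-part-1:' > model.gguf.part1of2
printf 'GGUF-part-2'  > model.gguf.part2of2

# Plain byte splits are reconstructed by concatenating the parts in order.
# A typo'd filename or reversed order yields a corrupt file that may still
# appear to "load" but generate nothing useful.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# Compare this hash against the checksum published on the model page.
sha256sum model.gguf
```

Note that newer split GGUFs are a different format and should be merged with the llama.cpp split tooling rather than `cat`.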
Good to hear :)