Testing model
#1 opened by dinchu
I have been working with the original WizardLM-13B-V1.2 and tried to switch to this one for performance, but the responses it produces are very poor quality (even using the exact same prompts). The percentage of useless responses is too high with this version; I guess the quantization went wrong.
I am using FastChat and vLLM to load two models in parallel, so it might also be related to either of those. I will keep testing.
dinchu changed discussion title from "This model is a bit useless" to "Testing model"
Hmm, the problem might be that you are using the wrong prompt template?
It needs the WizardLM prompt template; without it, it performs pretty badly.
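For reference, a minimal sketch of what that template looks like, assuming the Vicuna-style format that the WizardLM-V1.x model cards describe (the exact system-message wording should be checked against the model card for this release):

```python
# Assumed WizardLM/Vicuna-style system preamble (verify against the model card).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(user_message: str) -> str:
    """Wrap a single user turn in the WizardLM/Vicuna prompt format."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is the capital of France?"))
```

If FastChat is picking a generic conversation template instead of this one, the model will see malformed prompts, which would explain the low-quality output.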