BenevolenceMessiah/Qwen2.5-Coder-7B-Chat-8x-MoE

#320
by BenevolenceMessiah - opened

Thanks in advance!

It's queued, but please provide a URL next time, thanks :)

Sorry, I'll be sure to do that next time.
Thanks again!

Unfortunately, llama.cpp crashes:

ggml.c:22425: GGML_ASSERT(info->ne[i] > 0) failed

This is most commonly caused by faulty weights, but it could also be a bug or missing support in llama.cpp (e.g. for the Qwen2.5 MoE architecture or this specific MoE configuration).
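For anyone who wants to check whether the converted file itself carries a bad tensor, here is a minimal sketch that scans a GGUF for tensors with a zero-sized dimension, which is exactly what the GGML_ASSERT(info->ne[i] > 0) check rejects at load time. It assumes the `gguf` Python package that ships with the llama.cpp repo (pip install gguf); the file name used in the example is a placeholder, not an actual artifact from this quant.

```python
# Sketch: list tensors in a GGUF file whose recorded shape contains a zero
# dimension (GGUF stores dimensions as unsigned integers, so zero is the
# failure case the assert catches). Assumes the `gguf` package from llama.cpp.
from gguf import GGUFReader


def find_bad_tensors(path: str) -> list[str]:
    reader = GGUFReader(path)
    bad = []
    for tensor in reader.tensors:
        # tensor.shape holds the per-dimension sizes read from the file header
        if any(int(d) == 0 for d in tensor.shape):
            bad.append(f"{tensor.name}: shape={list(tensor.shape)}")
    return bad


if __name__ == "__main__":
    # Placeholder file name; point this at the GGUF that fails to load.
    for entry in find_bad_tensors("Qwen2.5-Coder-7B-Chat-8x-MoE.Q8_0.gguf"):
        print(entry)
```

If this prints nothing, the shapes in the GGUF header are fine and the crash more likely points at missing architecture support in llama.cpp rather than at the weights themselves.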

mradermacher changed discussion status to closed
