Magic in here

#12
by BlueNipples - opened

There are some occasional coherency issues, but there's sparkle within: signs of improved intelligence.

I wonder if you could do something similar with Solar 11B, Mistral 7B, or Nemo, since I think these punch a little above Llama-3's weight.

Arcee AI org

maybe the qwen2.5-14b?

That would be fantastic. Qwen2.5 14B is great for 24GB cards, since it can run at Q8 with a large context length. Maybe you could distill Qwen2.5 72B into Qwen2.5 14B?
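For reference, here's a rough back-of-envelope VRAM sketch for that 24GB claim. The architecture numbers are assumptions taken from the published Qwen2.5-14B config (48 layers, 8 KV heads via GQA, head dim 128), and real runtimes add overhead on top, so treat this as an estimate, not a guarantee:

```python
# Rough VRAM estimate for Qwen2.5-14B at Q8 on a 24 GB card.
# Assumed architecture (from the published Qwen2.5-14B config):
# 48 layers, 8 KV heads (GQA), head dim 128. Runtime overhead not included.

def kv_cache_gb(tokens, layers=48, kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Size of the K and V caches: one K and one V entry per layer per token."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1e9

weights_gb = 14e9 * 1 / 1e9  # Q8 is ~1 byte per weight -> ~14 GB for 14B params

for ctx in (8_192, 16_384, 32_768):
    fp16_kv = kv_cache_gb(ctx)                  # fp16 KV cache (2 bytes/elem)
    q8_kv = kv_cache_gb(ctx, bytes_per_elem=1)  # 8-bit quantized KV cache
    print(f"{ctx:>6} tokens: weights {weights_gb:.1f} GB, "
          f"KV fp16 {fp16_kv:.1f} GB, KV q8 {q8_kv:.1f} GB")
```

Under these assumptions, 32k context with an fp16 KV cache (~12.9 GB) would overflow 24 GB on top of the ~14 GB of Q8 weights, but quantizing the KV cache to 8-bit (~6.4 GB) leaves headroom, which is roughly why large contexts are workable on a 24GB card.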
