Question on target-language quantization
Hey @bartowski I see you quantized this model :) I was considering building a custom imatrix (for the target languages) and quantizing this with my https://github.com/robbiemu/llama-gguf-optimize tool. I don't know how long it would take me to handle such a large model, though; I don't have the hardware. Wondering if you might be interested in collaborating?
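For context, a rough sketch of the kind of pipeline being proposed, using llama.cpp's standard imatrix tools. All file names and model paths below are placeholders, and the calibration file is assumed to have been assembled from target-language text (e.g. via llama-gguf-optimize):

```shell
# 1. Compute an importance matrix from target-language calibration text.
#    model-f16.gguf and calibration.txt are placeholder names.
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize using that imatrix so weights important to the target
#    languages are preserved more accurately.
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

The expensive step for a large model is the imatrix computation itself, since it requires running inference over the full calibration set, which is why the hardware question matters here.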
@robbiemu hey, missed this one at the time. Yeah, I'm interested! Always looking for ways to improve the quants :)
I'm not sure there's as much room for gain as one might hope. I did some mild testing a few months back (which I want to expand on) showing that performance is relatively universal even with none of the target language in the imatrix dataset. But like I said, improvements are improvements, and as long as the dataset size doesn't go up, I'm definitely interested in refining it.