Please Quantize MiniMaxAI/MiniMax-VL-01
#1 opened by chilegazelle
Hey everyone working on quantization!
Big thanks for all your work—your contributions to AI optimization are seriously appreciated.
Right now, MiniMaxAI/MiniMax-VL-01 is one of the strongest vision-language (VL) models available, and a quantized version could take it even further. It would cut memory and compute costs and make the model accessible to far more people. If anyone is up for it, that would be amazing!
Huge thanks in advance!
If possible, it would be great to have a diverse range of quantized versions—optimized for different hardware, precision levels, and use cases. This way, more people can benefit from it, whether they're running it on consumer GPUs, cloud servers, or edge devices.
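For anyone who wants to experiment before an official quantized release exists, here is a minimal sketch of on-the-fly 4-bit loading via the Transformers `BitsAndBytesConfig` path. This is an assumption on my part that the standard bitsandbytes route works with this repo's custom modeling code (hence `trust_remote_code=True`), and the full-precision weights are enormous, so treat this as a starting point rather than a tested recipe:

```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

# 4-bit NF4 quantization config (weights stored in 4 bits,
# compute done in bfloat16) -- a common low-VRAM setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# trust_remote_code is needed because the repo ships custom model code;
# device_map="auto" shards layers across available GPUs/CPU.
model = AutoModelForCausalLM.from_pretrained(
    "MiniMaxAI/MiniMax-VL-01",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(
    "MiniMaxAI/MiniMax-VL-01", trust_remote_code=True
)
```

For the "diverse range" of formats requested above (GGUF, AWQ, GPTQ, etc.), dedicated conversion tooling would be needed per format; the snippet only covers runtime bitsandbytes quantization.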
chilegazelle changed discussion status to closed