strange message from using the model
2 comments · #18 opened about 2 months ago by lucas202

Request: Add vLLM Support for This Model
5 comments · #12 opened about 2 months ago by kira

Can you provide an FP8 version?
3 comments · #11 opened about 2 months ago by xjpang85

Requesting Support for GGUF Quantization of MiniMax-Text-01 through llama.cpp
4 comments · #1 opened 2 months ago by Doctor-Chad-PhD
