"Continue from where you left off" is not working
#21 opened 4 days ago by SpaceStar314
base model
#20 opened 7 days ago by ehartford
Can you upload your model to Ollama? I would like to use it to run RAGflow, but running it locally requires Ollama compatibility.
#19 opened 8 days ago by shaddock
Strange message when using the model
#18 opened 11 days ago by lucas202
Is it possible to build a RAG using this model?
1 comment · #17 opened 11 days ago by lucas202
model weight
1 comment · #16 opened 15 days ago by kdaeho27
Output length?
1 comment · #15 opened 17 days ago by Brabuslevrai
Are there plans to release the lightning attention kernel?
2 comments · #14 opened 17 days ago by bongchoi
In modeling_minimax_text_01.py, the attention mask is not passed correctly to the MiniMaxText01FlashAttention2::forward() method
1 comment · #13 opened 18 days ago by sszymczyk
Request: Add vLLM Support for This Model
3 comments · #12 opened 20 days ago by kira
Can you provide an FP8 version?
2 comments · #11 opened 20 days ago by xjpang85
Smaller versions (like 20B and 14B)
1 comment · #10 opened 20 days ago by win10
Please fire your human evaluators
8 comments · #6 opened 22 days ago by ChuckMcSneed
Consider making Minimax Text free software, as the license is proprietary
4 comments · #2 opened 23 days ago by JLouisBiz
Requesting Support for GGUF Quantization of MiniMax-Text-01 through llama.cpp
4 comments · #1 opened 23 days ago by Doctor-Chad-PhD