Q4_K_S GGUF for https://huggingface.co/MarsupialAI/Dumbstral-169B

No imatrix, no other quant types. This is all I'm willing to do for a model that nobody can reasonably run. FSM help Bartowski and Mradermacher if they choose to run full quant sets for this bastard.

GGUF metadata: 170B params, llama architecture.