Active filters: 3-bit
0xSero/DeepSeek-V3.2-REAP-345B-W3A16 • Text Generation • 2B • 275 downloads • 7 likes
MaziyarPanahi/gemma-7b-GGUF • Text Generation • 9B • 1.33k downloads • 15 likes
MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF • Text Generation • 7B • 137k downloads • 131 likes
MaziyarPanahi/gemma-3-4b-it-GGUF • Text Generation • 4B • 166k downloads • 16 likes
MaziyarPanahi/Nemotron-Orchestrator-8B-GGUF • Text Generation • 8B • 58.3k downloads • 4 likes
mlx-community/GLM-4.7-REAP-50-mixed-3-4-bits • Text Generation • 185B • 487 downloads • 2 likes
MaziyarPanahi/BASH-Coder-Mistral-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF • Text Generation • 7B • 274 downloads • 6 likes
MaziyarPanahi/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF • Text Generation • 7B • 704 downloads • 5 likes
MaziyarPanahi/Yi-Coder-1.5B-Chat-GGUF • Text Generation • 1B • 121k downloads • 17 likes
MaziyarPanahi/Qwen2.5-Coder-7B-Instruct-abliterated-GGUF • Text Generation • 8B • 44 downloads • 2 likes
MaziyarPanahi/gemma-3-12b-it-GGUF • Text Generation • 12B • 126k downloads • 14 likes
mlx-community/GLM-4.5-Air-3bit • Text Generation • 107B • 449 downloads • 31 likes
MaziyarPanahi/Ministral-3-14B-Reasoning-2512-GGUF • Text Generation • 14B • 63.2k downloads • 1 like
alexgusevski/Youtu-LLM-2B-q3-mlx • Text Generation • 0.2B • 11 downloads • 1 like
kaitchup/Llama-2-7b-gptq-3bit • Text Generation • 14
clibrain/Llama-2-7b-ft-instruct-es-gptq-3bit • Text Generation • 12 downloads • 3 likes
clibrain/Llama-2-13b-ft-instruct-es-gptq-3bit • Text Generation • 8 downloads • 3 likes
MiNeves-tops/opt-125m-gptq-3bit • Text Generation • 10
LoneStriker/Yi-6B-200K-3.0bpw-h6-exl2 • Text Generation • 6
danny0122/Llama-2-7b-hf-gptq-3bits • Text Generation • 6B • 6
danny0122/Llama-2-7b-hf-gptq-3bitssafe • Text Generation • 6B • 5
danny0122/stablelm-base-alpha-3b-gptq-3bits • Text Generation • 3B • 5
danny0122/stablelm-base-alpha-3b-gptq-3bitssafe • Text Generation • 3B • 3
SicariusSicariiStuff/Tenebra_PreAlpha_128g_3BIT • Text Generation • 31B • 5
mahihossain666/llama-2-70b-hf-quantized-3bits-GPTQ • Text Generation • 65B • 5
SicariusSicariiStuff/Tenebra_PreAlpha_No_Group_g_3BIT • Text Generation • 31B • 4
kaitchup/Mistral-7B-v0.1-gptq-3bit • Text Generation • 7B • 6
kaitchup/Llama-2-13b-hf-gptq-3bit • Text Generation • 12B • 5
kaitchup/Llama-2-7b-hf-gptq-3bit • Text Generation • 6B • 817 downloads • 1 like
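
A listing like the one above can also be pulled programmatically instead of through the web UI. The sketch below is a minimal example using the huggingface_hub client's list_models call; the search string "3bit" and the "text-generation" filter tag are assumptions standing in for the page's 3-bit filter, and the counts returned will differ from the snapshot above.

```python
# Minimal sketch: query the Hugging Face Hub for 3-bit text-generation models.
# Assumptions: search="3bit" approximates the page's "3-bit" filter, and
# "text-generation" is passed as a plain filter tag.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    search="3bit",             # assumed stand-in for the "3-bit" filter
    filter="text-generation",  # restrict to the Text Generation task
    sort="downloads",          # most-downloaded first
    direction=-1,
    limit=20,
)
for m in models:
    # Each ModelInfo carries the fields shown on a card: repo id, downloads, likes.
    print(f"{m.id} • {m.downloads} downloads • {m.likes} likes")
```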