Active filters: codeqwen
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| Qwen/Qwen2.5-Coder-32B-Instruct | Text Generation | 33B | 154k | 1.95k |
| Qwen/Qwen2.5-Coder-7B-Instruct | Text Generation | 8B | 283k | 561 |
| bartowski/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 11.1k | 31 |
| Qwen/Qwen2.5-Coder-1.5B | Text Generation | 2B | 398k | 72 |
| Qwen/Qwen2.5-Coder-1.5B-Instruct | Text Generation | 2B | 171k | 88 |
| Qwen/Qwen2.5-Coder-14B-Instruct | Text Generation | 15B | 138k | 132 |
| bartowski/Qwen2.5-Coder-32B-Instruct-GGUF | Text Generation | 33B | 13.3k | 93 |
| Qwen/Qwen2.5-Coder-0.5B | Text Generation | 0.5B | 30.2k | 35 |
| Qwen/Qwen2.5-Coder-3B-Instruct-GGUF | Text Generation | 3B | 23.9k | 45 |
| unsloth/Qwen2.5-Coder-14B-Instruct-GGUF | | 15B | 1.61k | 6 |
| huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated | Text Generation | 15B | 65 | 7 |
| DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF | Text Generation | 0.8B | 5.78k | 11 |
| DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER | Text Generation | 42B | 316 | 21 |
| Solaren/Qwen3-MOE-6x0.6B-3.6B-Writing-On-Fire-Uncensored-Q8_0-GGUF | Text Generation | 2B | 352 | 4 |
| DavidAU/Qwen3-VLTO-TNG-12B-256k-NEO-imatrix-GGUF | Text Generation | 12B | 340 | 1 |
| DavidAU/Qwen3-MOE-6x0.6B-3.6B-Writing-On-Fire-Uncensored | Text Generation | 2B | 27 | 6 |
| mradermacher/Qwen3-MOE-6x0.6B-3.6B-Writing-On-Fire-Uncensored-GGUF | | 2B | 154 | 1 |
| mradermacher/Qwen3-MOE-6x0.6B-3.6B-Writing-On-Fire-Uncensored-i1-GGUF | | 2B | 394 | 1 |
| study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 | Text Generation | 2B | | |
| study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int8 | Text Generation | 2B | 1 | 1 |
| Qwen/Qwen2.5-Coder-7B | Text Generation | 8B | 29.4k | 126 |
| lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 2.45k | 20 |
| Qwen/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 25.1k | 140 |
| Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 5.38k | 26 |
| bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 2.29k | 9 |
| lmstudio-community/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 657 | 2 |
| mlx-community/Qwen2.5-Coder-7B-Instruct-bf16 | Text Generation | 8B | 200 | 2 |
| mlx-community/Qwen2.5-Coder-7B-Instruct-8bit | Text Generation | 2B | 20 | |
| mlx-community/Qwen2.5-Coder-1.5B-Instruct-bf16 | Text Generation | 2B | 8 | |
| mlx-community/Qwen2.5-Coder-1.5B-Instruct-8bit | Text Generation | 0.4B | 21 | |
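The download counts in this listing use the Hub's shorthand notation (398k, 1.95k). As a minimal sketch of working with these figures offline, the snippet below normalizes them to integers and ranks a few entries taken from the listing above by downloads; `parse_count` is a hypothetical helper written here, not a Hugging Face API.

```python
def parse_count(s: str) -> int:
    """Convert a shorthand count like '398k' or '657' to an integer."""
    # Assumption: the listing only uses a 'k' (thousands) suffix or plain digits.
    if s.endswith("k"):
        return int(float(s[:-1]) * 1_000)
    return int(s)

# (model, downloads) pairs copied from the listing above.
entries = [
    ("Qwen/Qwen2.5-Coder-32B-Instruct", "154k"),
    ("Qwen/Qwen2.5-Coder-7B-Instruct", "283k"),
    ("Qwen/Qwen2.5-Coder-1.5B", "398k"),
    ("Qwen/Qwen2.5-Coder-1.5B-Instruct", "171k"),
]

# Rank by normalized download count, highest first.
ranked = sorted(entries, key=lambda e: parse_count(e[1]), reverse=True)
print(ranked[0][0])  # → Qwen/Qwen2.5-Coder-1.5B
```

Note that the rendered counts are rounded, so the recovered integers (e.g. 398,000) are approximations of the true download figures.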