- This LLM seems to be trolling me?? · 3 replies · #9 opened 9 months ago by skynet24
- Reducing Latency in Locally Hosted model · 1 reply · #8 opened 10 months ago by anshulchandel
- Not working on M1 Max using llama-cpp-python · #7 opened over 1 year ago by shroominic
- Missing tokenizer.model file · 3 replies · #6 opened over 1 year ago by whatever1983
- not working · 5 replies · #3 opened over 1 year ago by imhsouna
- Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint · #2 opened over 1 year ago by limcheekin
- This model cannot be used normally · 19 replies · #1 opened over 1 year ago by hyunfzen