DAN™
dranger003
AI & ML interests: None yet
Recent Activity
updated a model 4 days ago: dranger003/c4ai-command-r7b-12-2024-GGUF
new activity 4 days ago: dranger003/starcoder2-15b-GGUF: Chat template for GPT4All
Organizations: None yet
dranger003's activity
Chat template for GPT4All (1) · #3 opened 4 days ago by IceLegendWolf
Multiple GPUs for inference error (8) · #2 opened 6 months ago by Mostudy
Update README.md with license information · #1 opened 6 months ago by Chen-01AI
Update README.md with license information · #2 opened 6 months ago by Chen-01AI
How to enable streaming for phi 3 vision model? (6) · #15 opened 7 months ago by bhimrazy
I'm generating an imatrix using `groups_merged.txt` if you want me to run any tests? (19) · #15 opened 8 months ago by jukofyork
Is the KV cache of these models unusually high? (1) · #6 opened 7 months ago by Hugsanir
How about a quantized version that fits in 16 GB of memory like wizardlm? (3) · #19 opened 7 months ago by Zibri
Update chat templates (2) · #5 opened 8 months ago by CISCai
Will you redo quants after your BPE PR gets merged? (2) · #18 opened 8 months ago by ggnoy
Can't use llama to load gguf model (2) · #6 opened 8 months ago by Tianyi000
35B-beta is released (4) · #3 opened 8 months ago by tastypear
Update chat templates (6) · #17 opened 8 months ago by CISCai
Can't merge files with gguf (7) · #16 opened 8 months ago by zedmango
Is it possible to use this model with LM Studio? (2) · #1 opened 8 months ago by michabbb
Can we get a Q4 without the IMat? (2) · #14 opened 8 months ago by yehiaserag
Reuse your `ggml-dbrx-instruct-16x12b-q8_0-imatrix.dat` file? (20) · #1 opened 9 months ago by jukofyork
Prompt eval too slow (2) · #4 opened 8 months ago by lfjmgs