Michael Han (shimmyshimmer)
AI & ML interests: None yet
Recent Activity
upvoted a collection 24 minutes ago: Llama 4
new activity about 2 hours ago in unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit: Model issue with 64GB ram
liked a model about 2 hours ago: unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit
shimmyshimmer's activity
Model issue with 64GB ram (2) · #4 opened about 16 hours ago by llama-anon
Does not support multimodal input (1) · #5 opened 2 days ago by RamboRogers
This looks like the base and not bnb-4bit? (4) · #2 opened 1 day ago by bjj
What's up with the minimal decrease in file size from fp16? (1) · #1 opened 1 day ago by jth01
Does not work at all (3) · #1 opened 13 days ago by zokica
VRAM requirements? (2) · #3 opened 5 days ago by hamaadtahiir
Running model "unsloth/DeepSeek-V3-0324-GGUF" with vLLM does not work (2) · #11 opened 6 days ago by puppadas
Update README.md (1) · #2 opened 7 days ago by seniichev
671B params or 685B params? (6) · #8 opened 8 days ago by createthis
The UD-IQ2_XXS is surprisingly good, but note that it degrades gradually yet significantly after about 1000 tokens (1) · #9 opened 7 days ago by mmbela
quantize.* missing (2) · #10 opened 7 days ago by phymbert
Can't see the image in LM Studio (4) · #1 opened 14 days ago by Technobiotik
Any plan for IQS_XS or IQS_XXS? (1) · #6 opened 9 days ago by bobchenyx
Added IQ1_S version to Ollama (3) · #4 opened 12 days ago by Muhammadreza
Is the 2.51bit model using imatrix? (7) · #3 opened 12 days ago by daweiba12
Will you release the imatrix.dat used for the quants? (2) · #2 opened 12 days ago by tdh111
Will there be dynamic quantized versions like 2.51bit? (8) · #1 opened 13 days ago by MotorBottle
Error loading model with vllm (1) · #1 opened 18 days ago by ggeo
Cannot input image in ollama for gemma-3-27b-it-GGUF:Q4_K_M (9) · #3 opened 20 days ago by kitc