John Leimgruber III (ubergarm)
AI & ML interests: Open LLMs and astrophotography image processing.
Recent Activity
- Liked a model 1 day ago: bartowski/Sky-T1-32B-Preview-GGUF
- Liked a model 1 day ago: mradermacher/Sky-T1-32B-Preview-GGUF
- Liked a model 1 day ago: NovaSky-AI/Sky-T1-32B-Preview
Organizations
None yet
ubergarm's activity
- Nice ~90x real-time generation on a 3090 Ti. Quickstart provided. (3 replies) #20 opened 7 days ago by ubergarm
- Observation: 4-bit quantization can't answer the Strawberry prompt (12 replies) #2 opened 3 months ago by ThePabli
- 63.17 MMLU-Pro Computer Science with `Q8_0` #2 opened 3 months ago by ubergarm
- Benchmarks worse than Qwen2.5-7B-Instruct on MMLU-Pro Computer Science in limited testing. #1 opened 3 months ago by ubergarm
- Promising-looking results for 24GB VRAM folks! (9 replies) #3 opened 4 months ago by ubergarm
- Awesome model (6 replies) #5 opened 4 months ago by dillfrescott
- VRAM usage of each? (3 replies) #1 opened 4 months ago by jasonden
- Works well generating Python on my dev box with 64GB RAM and a 3090 Ti (24GB VRAM) (3 replies) #2 opened 6 months ago by ubergarm
- Chat template (3 replies) #3 opened 6 months ago by sydneyfong
- Can you please provide the command to change the context size? (5 replies) #1 opened 6 months ago by yehiaserag
- The first GGUF that works with long context on llama.cpp! (3 replies) #1 opened 6 months ago by ubergarm
- And where is the GGUF file itself? (12 replies) #1 opened 6 months ago by Anonimus12345678902
- Got it working in llama.cpp! Thanks! (1 reply) #1 opened 6 months ago by ubergarm
- Error loading model in llama.cpp? (8 replies) #1 opened 6 months ago by ubergarm
- Prompt Format (4 replies) #6 opened 8 months ago by JamesConley
- Quantized model coming? (8 replies) #3 opened 9 months ago by dnhkng
- Output is empty (2 replies) #3 opened 8 months ago by bingw5
- The f16 with 32k ctx fits nicely in 24GB VRAM (5 replies) #3 opened 9 months ago by ubergarm
- AttributeError: 'generator' object has no attribute 'image_embeddings' (1 reply) #26 opened 11 months ago by MohamedRashad