OK, but actually it's pretty usable. I get acceptable speed in Stable Diffusion with DirectML. There is also "Amuse", which is made for AMD and works great. And if I need to run language models, Vulkan brings the speed.
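For anyone wondering what the DirectML route looks like in practice, here is a minimal sketch assuming the torch-directml and diffusers packages are installed; the model ID and prompt are only placeholders, not a tested recipe:

```python
# Minimal sketch, assuming torch-directml and diffusers are installed
# (pip install torch-directml diffusers transformers accelerate safetensors).
# The checkpoint below is just an example, not a specific recommendation.
import torch
import torch_directml
from diffusers import StableDiffusionPipeline

dml = torch_directml.device()  # default DirectML adapter, i.e. the AMD GPU/iGPU

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit a small shared-VRAM budget
)
pipe = pipe.to(dml)

image = pipe("a watercolor painting of a lighthouse", num_inference_steps=25).images[0]
image.save("lighthouse.png")
```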
urtuuuu
AI & ML interests: None yet
Recent Activity
New activity 3 days ago in RekaAI/reka-flash-3: Crazy good for its size
New activity 3 days ago in lmstudio-community/reka-flash-3-GGUF: Settings in LM Studio?
Organizations: None yet
urtuuuu's activity
Replied to MonsterMMORPG's post 1 day ago
Crazy good for its size (6) · #11 opened 3 days ago by rombodawg

Settings in LM Studio? · #1 opened 3 days ago by urtuuuu
Prompt template (14) · #1 opened 7 days ago by YearZero
Simple questions too hard? (6) · #22 opened 5 days ago by urtuuuu
Replied to MonsterMMORPG's post 4 days ago
I wonder, is there a tutorial on how to run it on AMD in DirectML mode? I tried once and it didn't work ... 1.3B was the maximum I could run, and even that gave me some errors in ComfyUI.
CPU mode worked, but it was too slow.
I have a Ryzen 7735HS, which can use up to 8 GB of system RAM as VRAM.
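Not a full tutorial, but before touching ComfyUI it can help to confirm that torch-directml actually sees the iGPU and can execute work. A small hedged probe, with function names assumed from the torch-directml package:

```python
# Diagnostic sketch, assuming the torch-directml package; device_count(),
# device_name() and device() are taken from its public API as I understand it.
import torch
import torch_directml

n = torch_directml.device_count()
print("DirectML adapters found:", n)
for i in range(n):
    print(f"  [{i}] {torch_directml.device_name(i)}")

dml = torch_directml.device()          # default adapter (the 7735HS iGPU here)
x = torch.randn(1024, 1024, device=dml)
y = x @ x                              # trivial matmul to confirm the adapter executes kernels
print("matmul ran on", y.device, "with shape", tuple(y.shape))
```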
Omitted <think> at the start and almost 10k tokens to debug 2 JS functions (3) · #2 opened 5 days ago by operationdarkside
[System prompt inside] Poor man's R1 based on Gemma 3 (11) · #7 opened 6 days ago by MrDevolver

LM Studio vs llama.cpp different results? (6) · #5 opened 10 days ago by urtuuuu
LM Studio problems for Jinja template of QwQ-32B (4) · #2 opened 12 days ago by ISK-VAGR
Wowowowow (27) · #1 opened 13 days ago by owao
Something wrong (12) · #3 opened 13 days ago by wcde
Output repeating (29) · #1 opened 20 days ago by getfit

Repeated Thinking Tags in Output Generation (10) · #2 opened 20 days ago by xldistance
Missing Node Types (3) · #7 opened 18 days ago by ACCA225