Lance (clevnumb)
AI & ML interests: None yet
Organizations: None yet
clevnumb's activity
Which quant of this model will fit entirely in VRAM on a single 24GB video card (4090)?
3
#1 opened about 2 months ago
by
clevnumb
My alternate quantizations.
5
#3 opened 2 months ago
by
ZeroWw
Not loading in Latest Tabby (with SillyTavern) - ERROR
2
#2 opened 2 months ago
by
clevnumb
What quant should I use with a single 24GB video card (4090) on a PC?
1
#2 opened 3 months ago
by
clevnumb
Which quant do I use to fit on a single 24GB video card (4090) on a PC running Windows 11?
3
#3 opened 3 months ago
by
clevnumb
Single 4090 using Oobabooga? (Windows 11, 96GB of RAM)
1
#1 opened 8 months ago
by
clevnumb
How do I load this in Oobabooga? (text-generation-webui)
#1 opened 6 months ago
by
clevnumb
Are there safetensors files for the models?
7
#37 opened 8 months ago
by
wonderflex
Will this fit on a single 24GB Video card (4090)?
1
#2 opened 7 months ago
by
clevnumb
Glacially slow on an RTX 4090?
5
#1 opened 8 months ago
by
clevnumb
Which BPW of these 34B models will fit in a single 24GB card's (4090) VRAM?
9
#1 opened 11 months ago
by
clevnumb
How does this compare to LLaMA 13B models for "visual smarts"?
1
#2 opened 11 months ago
by
clevnumb
What are the different files for?
2
#9 opened over 1 year ago
by
Arya123456
Could this model be loaded on a 3090 GPU?
24
#6 opened over 1 year ago
by
Exterminant
Is it unfiltered/uncensored?
2
#2 opened over 1 year ago
by
sneedingface
Thank you very much!
10
#2 opened over 1 year ago
by
AiCreatornator
Will this run on a 128GB RAM system (i9-13900K) with an RTX 4090?
3
#2 opened over 1 year ago
by
clevnumb
Can the latest version of URPM be updated here?
#1 opened over 1 year ago
by
clevnumb