Grimulkan (grimulkan)
AI & ML interests: None yet
Recent Activity
New activity 27 days ago: grimulkan/aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2 · 10k you say...
Upvoted a collection 4 months ago: PixMo
Updated a model 5 months ago: grimulkan/Llama-3.2-11B-Vision-Instruct-Hermes-3-lorablated
Organizations: None yet
grimulkan's activity
10k you say... · 4 · #1 opened 5 months ago by Doomed1986
max_position_embedding · 1 · #5 opened 10 months ago by the-hir0
EXL quantization at 4.0bpw? · 3 · #1 opened about 1 year ago by lazydog22
Further fine-tuning? · 2 · #4 opened about 1 year ago by jspr
Comparison with 0.1 · 13 · #3 opened about 1 year ago by ChuckMcSneed
Benchmarks! · 3 · #2 opened about 1 year ago by ChuckMcSneed
Arch speculations · 6 · #3 opened about 1 year ago by grimulkan
More quants · 1 · #2 opened about 1 year ago by aikitoria
Observations+benchmarks · 1 · #1 opened about 1 year ago by ChuckMcSneed
Merging advice? · 16 · #2 opened about 1 year ago by sophosympatheia
2 bits GGUF SOTA quants? · 31 · #2 opened about 1 year ago by Nexesenex
Quantization calibration dataset · 2 · #1 opened about 1 year ago by Amajiro
Devolution into semi-nonsensical long words · 6 · #1 opened about 1 year ago by jspr
Feedback and collaboration. · 37 · #1 opened almost 2 years ago by Squish42
You made gold! · 3 · #1 opened about 1 year ago by ChuckMcSneed
Repetition issue · 1 · #1 opened over 1 year ago by lazyDataScientist
GGUF parameters suggestion · 1 · #1 opened over 1 year ago by lazyDataScientist
Space after [/INST] · 7 · #2 opened over 1 year ago by Satya93