mclassHF2023
AI & ML interests: None yet
Recent Activity
new activity 5 days ago · bartowski/DeepSeek-V2.5-1210-GGUF: 4k context by default?
new activity 8 days ago · maldv/Qwentile2.5-32B-Instruct: More QwQ?
new activity 15 days ago · mradermacher/QwQ-32B-Preview-Self-instruct-3x-TIES-v1.0-i1-GGUF: bad output
Organizations: None yet
mclassHF2023's activity
4k context by default? · 2 · #1 opened 15 days ago by mclassHF2023
More QwQ? · 5 · #2 opened 15 days ago by mclassHF2023
bad output · 1 · #1 opened 15 days ago by mclassHF2023
"Supports a context length of 160k through yarn settings." · 1 · #1 opened 24 days ago by mclassHF2023
calme-2.1 for qwen2-7b? · 1 · #17 opened 6 months ago by mclassHF2023
Why context 8k? · 1 · #4 opened 6 months ago by mclassHF2023
Context length of the model? · 7 · #30 opened 6 months ago by shipWr3ck
4k context by default? · 1 · #1 opened 6 months ago by mclassHF2023
which mistral version in the merge? · 1 · #1 opened 6 months ago by mclassHF2023
context size · 1 · #1 opened 6 months ago by lightsoutallout
Great work! · 1 · #3 opened 7 months ago by mclassHF2023
blocky blocky blocky · 3 · #1 opened 7 months ago by mclassHF2023
no system message? · 8 · #14 opened 8 months ago by mclassHF2023
what kind of model is this? · #1 opened 8 months ago by mclassHF2023
32k or 8k context? · 2 · #1 opened 8 months ago by mclassHF2023
higher context with alpha_value=2.5 · #1 opened 8 months ago by mclassHF2023
original model gone / output bad for 16k context? · 1 · #1 opened 8 months ago by mclassHF2023
model doesn't stop generating · 1 · #2 opened 8 months ago by mclassHF2023
Higher context support? · 1 · #4 opened 9 months ago by aayushg159
GGUF · 1 · #1 opened 8 months ago by mclassHF2023