JJ (J22)
AI & ML interests: None yet
Recent Activity
- 12 days ago · inclusionAI/Ling-lite: gguf support request
- 13 days ago · Team-ACE/ToolACE-2-Llama-3.1-8B: about the name & llama license
- 16 days ago · upstage/solar-pro-preview-instruct: i tested the q8 in lm studio (getting wrong output)
Organizations: None yet
J22's activity
- gguf support request (7) · #2 opened 14 days ago by Doctor-Chad-PhD
- about the name & llama license (3) · #3 opened 13 days ago by J22
- i tested the q8 in lm studio (getting wrong output) (8) · #9 opened 7 months ago by gopi87
- GGUF (ollama) version are far from API version (2) · #20 opened 6 months ago by papipsycho
- chatllm.cpp supports this · #5 opened 17 days ago by J22
- Can't wait for HF? try chatllm.cpp (6) · #7 opened 22 days ago by J22
- Requesting Support for GGUF Quantization of Baichuan-M1-14B-Instruct through llama.cpp (3) · #1 opened 2 months ago by Doctor-Chad-PhD
- Request for GGUF support through llama.cpp (2) · #1 opened 2 months ago by Doctor-Chad-PhD
- is rope_theta and max_pos_emb correct? · #4 opened 2 months ago by J22
- Run this easily with chatllm.cpp · #5 opened about 1 month ago by J22
- Run this with chatllm.cpp (3) · #5 opened about 1 month ago by J22
- 🚩 Report: Ethical issue(s) (6) · #176 opened about 2 months ago by lzh7522
- Vllm (2) · #2 opened 2 months ago by TitanomTechnologies
- is `config.json` correct? · #4 opened 3 months ago by J22
- Quick start with chatllm.cpp · #4 opened 3 months ago by J22
- Upload tokenizer.json (1) · #1 opened 5 months ago by J22
- a horrible function in `modeling_mobilellm.py` (1) · #5 opened 5 months ago by J22
- Run this on CPU · #6 opened 7 months ago by J22
- Run on CPU (1) · #13 opened 7 months ago by J22
- need gguf (19) · #4 opened 8 months ago by windkkk