lichaochao (chaochaoli)

AI & ML interests: None yet
Organizations: None yet

chaochaoli's activity

Where is a high-quality multi-turn dataset?
#4 opened 26 days ago by chaochaoli

I feel that in this field, Chinese models probably have a bit more of a market. (2 replies)
#11 opened 3 months ago by haowu11

What is the 1_Pooling directory for? (7 replies)
#72 opened 3 months ago by Godsing

Fine-tuning (2 replies)
#5 opened 3 months ago by hdfdg

Chinese understanding is a bit weak (1 reply)
#2 opened 5 months ago by chaochaoli

int4-gguf inference is very prone to repetition
#1 opened 5 months ago by chaochaoli

When will a gguf version be provided? Thanks a lot!
#1 opened 5 months ago by chaochaoli

Possible to do inference on long contexts with limited VRAM? (1 reply)
#6 opened 6 months ago by danabo

Is qwen1.5-7b-chat much faster at inference than qwen1.5-7b? (3 replies)
#9 opened 7 months ago by endNone

Needle-in-a-haystack comparison of the 1.8b-series models (1 reply)
#2 opened 8 months ago by chaochaoli

Roughly how much VRAM is needed to support a 256k context? (8 replies)
#2 opened 9 months ago by chaochaoli

How to fine-tune this model with the Trainer API? (1 reply)
#8 opened 12 months ago by duzm

Training code (2 replies)
#2 opened over 1 year ago by robinsongh381

How to pre-process the synthetic-instruct-gptj-pairwise data for training? (2 replies)
#9 opened 10 months ago by chaochaoli

tokenizer.model_max_length for llama-2-7b-chat-hf (3 replies)
#3 opened about 1 year ago by huggingFace1108

Do you find it slow when using 4-bit? (2 replies)
#1 opened about 1 year ago by chaochaoli

What's the difference between this and the official llama version? (3 replies)
#4 opened about 1 year ago by chaochaoli

Hello, could you provide a script to convert the original ChatGLM model into the "1-gpu-fp16.h5" model file? (2 replies)
#38 opened over 1 year ago by LeoNiko

How to convert Hugging Face checkpoints to the "1-gpu-fp16.bin" file? (2 replies)
#41 opened over 1 year ago by aimarbenzemamessi

Do you plan to optimize ChatGLM2-6B, and when? (4 replies)
#47 opened about 1 year ago by Zuyuan

Is there an FT-accelerated version? I really need it, many thanks. (1 reply)
#76 opened about 1 year ago by chaochaoli

Former Tencent employee here, begging for p-tuning support (3 replies)
#37 opened over 1 year ago by lpan1010

If it can handle longer contexts, should max_length be set higher? (3 replies)
#15 opened over 1 year ago by chaochaoli

Note: this is not the chat version (2 replies)
#10 opened over 1 year ago by chaochaoli

How to train on my domain (6 replies)
#2 opened over 1 year ago by chaochaoli