- **Fine-tuning?** · #14 · opened over 1 year ago by OSK-Creative-Tech
- **The model is not responding** · #13 · opened over 1 year ago by PhelixZhen
- **Model responses are not good** · 1 reply · #12 · opened over 1 year ago by muneerhanif7
- **How to quantize the Llama 2 70B model with AutoGPTQ** · 4 replies · #11 · opened over 1 year ago by tonycloud
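
  A minimal sketch of a 4-bit GPTQ quantization run with AutoGPTQ. The source checkpoint, output path, and one-sentence calibration set below are placeholders; real runs use a few hundred calibration samples.

  ```python
  # Assumes auto-gptq is installed and there is enough memory to hold
  # the fp16 weights during quantization; paths are placeholders.
  from transformers import AutoTokenizer
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

  pretrained = "meta-llama/Llama-2-70b-hf"  # assumed source checkpoint
  out_dir = "llama-2-70b-gptq"              # hypothetical output path

  quantize_config = BaseQuantizeConfig(
      bits=4,          # 4-bit weights
      group_size=128,  # smaller groups are more accurate but larger on disk
      desc_act=False,  # act-order: True quantizes more accurately but slower
  )

  tokenizer = AutoTokenizer.from_pretrained(pretrained, use_fast=True)
  model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)

  # GPTQ needs calibration examples; this one-liner is only a placeholder.
  examples = [tokenizer("This is a placeholder calibration sentence.")]

  model.quantize(examples)
  model.save_quantized(out_dir)
  tokenizer.save_pretrained(out_dir)
  ```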
- **Wrong shape when loading with PEFT and AutoGPTQ** · 2 replies · #10 · opened over 1 year ago by tridungduong16
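
  Shape errors like this typically mean the adapter was built for a different base model or different target modules. A minimal sketch of creating a fresh LoRA adapter directly on a GPTQ checkpoint, assuming transformers >= 4.33 with optimum and auto-gptq installed; the LoRA hyperparameters are illustrative only.

  ```python
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  model = AutoModelForCausalLM.from_pretrained(
      "TheBloke/Llama-2-70B-GPTQ",  # GPTQ weights, resolved via optimum/auto-gptq
      device_map="auto",
  )

  lora_config = LoraConfig(
      r=16,
      lora_alpha=32,
      target_modules=["q_proj", "v_proj"],  # Llama attention projections
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )

  # Building the adapter against the loaded model keeps its shapes
  # consistent with the quantized base weights.
  model = get_peft_model(model, lora_config)
  model.print_trainable_parameters()
  ```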
- **Long waiting time** · 14 replies · #9 · opened over 1 year ago by wempoo
- **Context length differences** · #7 · opened over 1 year ago by zacharyrs
- **Problems with temperature when using the model from Python** · 3 replies · #6 · opened over 1 year ago by matchaslime
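
  A frequent cause of temperature appearing to do nothing from Python is that transformers silently ignores `temperature` unless sampling is enabled. A minimal sketch, assuming a checkpoint loadable through transformers; the repo ID is illustrative.

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "TheBloke/Llama-2-70B-GPTQ"  # illustrative repo ID
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

  inputs = tokenizer("Tell me about gravity.", return_tensors="pt").to(model.device)

  # temperature (and top_p) only take effect when do_sample=True;
  # with the default greedy decoding they are ignored.
  output = model.generate(
      **inputs,
      do_sample=True,
      temperature=0.7,
      top_p=0.95,
      max_new_tokens=128,
  )
  print(tokenizer.decode(output[0], skip_special_tokens=True))
  ```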
- **Should we expect GGML soon?** · 3 replies · #5 · opened over 1 year ago by yehiaserag
- **Issue with 64g version?** · #4 · opened over 1 year ago by AARon99
- **The `main` branch for TheBloke/Llama-2-70B-GPTQ appears broken** · 11 replies · #3 · opened over 1 year ago by Aivean
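
  If `main` is unusable, the other quantization variants in TheBloke repos live on separate branches and can be selected with the `revision` argument. A minimal sketch; the branch name below is an assumption, so check the repo's branch list for the actual variants.

  ```python
  from transformers import AutoModelForCausalLM

  model = AutoModelForCausalLM.from_pretrained(
      "TheBloke/Llama-2-70B-GPTQ",
      revision="gptq-4bit-32g-actorder_True",  # hypothetical branch name
      device_map="auto",
  )
  ```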
- **I found an fp16 model, if it helps** · 1 reply · #2 · opened over 1 year ago by rombodawg
- **❤️❤️❤️❤️** · 1 reply · #1 · opened over 1 year ago by SinanAkkoyun