Functional example of finetuning of Llama-2-7b-Chat-GPTQ
#26 opened by echogit
I searched quite a lot and didn't find a functional example of finetuning and inference for the Llama-2-7b-Chat-GPTQ model.
If someone could point me to an example, preferably a Google Colab notebook, I would very much appreciate it.