Kartoffel-Deepfry-12B

mistral-nemo-kartoffel-12B fine-tuned on Schule-DPO.

Method

QLoRA ORPO-tuned on 1x RTX A6000 for 5 epochs, using a rank-16 LoRA with alpha 32 and a 2e-4 learning rate on a cosine schedule.
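A minimal sketch of this recipe with TRL and PEFT, assuming Hugging Face repo IDs for the base model and dataset; the trainer wiring and dataset split are illustrative assumptions, not the author's actual script:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/mistral-nemo-kartoffel-12B"  # assumed repo id

# QLoRA: load the base model 4-bit quantized, compute in bfloat16
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA adapter: rank 16, alpha 32, as stated in the card
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

# ORPO: 2e-4 peak learning rate, cosine decay, 5 epochs
args = ORPOConfig(
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    bf16=True,
    output_dir="kartoffel-deepfry-12b",
)

dataset = load_dataset("nbeerbower/Schule-DPO", split="train")  # assumed dataset id
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

ORPO folds the preference objective into a single supervised pass over chosen/rejected pairs, so unlike DPO it needs no separate frozen reference model, which keeps the memory footprint of a 12B QLoRA run within a single A6000.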

Safetensors · 12.2B params · BF16
