Mistral-Nemo-Gutenberg-Doppel-12B
mistralai/Mistral-Nemo-Instruct-2407 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
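For reference, a minimal generation example with 🤗 Transformers. The prompt, sampling settings, dtype, and device mapping below are illustrative assumptions rather than settings from this card; the model is prompted with the same chat template as Mistral-Nemo-Instruct-2407.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B"

# bfloat16 and device_map="auto" are convenience choices, not requirements.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```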
Method
ORPO-tuned on a single RTX 3090 for 3 epochs, as sketched below.
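A minimal sketch of such a run with TRL's `ORPOTrainer`, assuming the datasets are used in their standard prompt/chosen/rejected format. Only ORPO, the two datasets, 3 epochs, and the single RTX 3090 come from this card; the LoRA adapter, batch size, learning rate, and sequence lengths are assumptions added so a 12B model can plausibly train within 24 GB, not the author's actual hyperparameters.

```python
from datasets import concatenate_datasets, load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Nemo-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Both datasets follow the prompt/chosen/rejected layout that ORPOTrainer expects.
dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train"),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
])

# LoRA is an assumption: the card does not say how the 12B model was fit on one 24 GB GPU.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = ORPOConfig(
    output_dir="Mistral-Nemo-Gutenberg-Doppel-12B",
    num_train_epochs=3,                  # from the card
    per_device_train_batch_size=1,       # assumed
    gradient_accumulation_steps=8,       # assumed
    learning_rate=8e-6,                  # assumed
    max_length=2048,                     # assumed
    max_prompt_length=1024,              # assumed
    gradient_checkpointing=True,         # assumed, to save memory
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` on older TRL releases
    peft_config=peft_config,
)
trainer.train()
```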
Model tree for nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned from: mistralai/Mistral-Nemo-Instruct-2407