
🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mistral-Nemo-Prism-12B-v6

Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

Method

ORPO-tuned on 8x A40 GPUs for 10 epochs.

For this version, the LoRA rank was increased from 16 to 128.
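
Roughly, the setup looks like the sketch below, using the TRL ORPOTrainer with a PEFT LoRA config. This is illustrative only: the Hub IDs for the base model and datasets, and every hyperparameter other than the LoRA rank (128) and epoch count (10), are assumptions rather than the exact recipe used here.

```python
# Minimal sketch of the ORPO + LoRA setup described above (TRL + PEFT).
# Hub IDs and all hyperparameters except LoRA rank (128) and epochs (10)
# are placeholders, not the actual values used for this model.
from datasets import load_dataset, concatenate_datasets
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"  # assumed Hub ID
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data (prompt / chosen / rejected columns assumed compatible).
train_data = concatenate_datasets([
    load_dataset("nbeerbower/Arkhaios-DPO", split="train"),  # assumed Hub ID
    load_dataset("nbeerbower/Purpura-DPO", split="train"),   # assumed Hub ID
])

peft_config = LoraConfig(
    r=128,                    # rank raised from 16 to 128 for this version
    lora_alpha=128,           # placeholder; alpha not stated in the card
    lora_dropout=0.05,        # placeholder
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

training_args = ORPOConfig(
    num_train_epochs=10,
    per_device_train_batch_size=1,  # placeholder; run spread over 8x A40
    learning_rate=5e-6,             # placeholder
    bf16=True,
    output_dir="Mistral-Nemo-Prism-12B-v6",
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```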
