Vikhr-Nemo fine-tuned on Russian literature (Dostoevsky's prose) with a contrastive preference objective (ORPO).
- Base model: https://huggingface.co/Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
- Dataset: https://huggingface.co/datasets/40umov/dostoevsky
- Method: ORPO
- Training config: https://github.com/IlyaGusev/saiga/blob/main/configs/models/doestoevsky_nemo_12b_orpo_m1.json
- WandB: https://wandb.ai/ilyagusev/rulm_self_instruct/runs/4v4pcgej
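Since the card gives no usage snippet, below is a minimal inference sketch using the standard `transformers` chat-template API. The model ID shown is the *base* model linked above; substitute this fine-tuned checkpoint's repo ID when it is published. Generation parameters are illustrative assumptions, not recommended settings.

```python
MODEL_ID = "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24"  # base model; replace with the fine-tuned repo ID


def build_messages(user_text: str) -> list[dict]:
    # Single-turn chat in the standard messages format expected by apply_chat_template.
    return [{"role": "user", "content": user_text}]


if __name__ == "__main__":
    # Heavy imports and model download are kept inside the guard.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    prompt = tokenizer.apply_chat_template(
        build_messages("Расскажи о Петербурге."),  # "Tell me about Petersburg."
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```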