nbeerbower/flammen17-py-DPO-v1-7B AWQ


Model Summary

A Mistral 7B LLM built by merging pretrained models and fine-tuning on Jon Durbin's py-dpo-v0.1 preference dataset.

Fine-tuned using an A100 on Google Colab. 🙏

Training follows Maxime Labonne's guide, "Fine-tune a Mistral-7b model with Direct Preference Optimization."
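For reference, the per-pair DPO objective used in that style of fine-tuning can be sketched in plain Python. This is an illustrative implementation of the standard DPO loss, not code from this model's training run; the log-probability inputs would come from the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    completion under the policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(x)) written as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))
```

The loss shrinks as the policy assigns relatively more probability to the chosen completion than the reference model does, and relatively less to the rejected one; `beta` controls how far the policy may drift from the reference.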

Model size (Safetensors): 1.2B params
Tensor types: I32 · FP16

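A minimal loading sketch for an AWQ checkpoint like this one, assuming a recent `transformers` with `autoawq` installed (AWQ weights load through the standard `AutoModelForCausalLM` path). The `[INST]` prompt template is an assumption based on the Mistral base model; pass this model's actual Hub id as `repo_id`.

```python
def build_prompt(user_message: str) -> str:
    # Assumption: the model follows Mistral's [INST] instruction template.
    return f"<s>[INST] {user_message} [/INST]"

def generate(user_message: str, repo_id: str, max_new_tokens: int = 256) -> str:
    # Heavy imports kept local so build_prompt stays importable without them.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # AWQ-quantized weights are detected from the checkpoint config;
    # scales/activations run in FP16, matching the tensor types above.
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Usage: `generate("Write a Python function that reverses a string.", repo_id="<this model's Hub id>")`.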