5.0bpw h6 exl2 quant of [BeaverAI/mistral-dory-12b](https://huggingface.co/BeaverAI/mistral-dory-12b).

# Dory 12b
Redone instruct finetune of Mistral Nemo 12b. Not (E)RP-focused; leave that to Drummer.

Thanks to twisted for the compute :3
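For reference, a minimal loading-and-generation sketch with the exllamav2 dynamic generator is below. The local path, cache settings, and generation parameters are placeholders and not taken from this card; check the exllamav2 examples for your installed version.

```python
# Minimal exllamav2 sketch; adjust the path and settings for your setup.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./mistral-dory-12b-5.0bpw-exl2"  # local directory containing the downloaded quant

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache lazily, then autosplit across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(
    prompt="### Instruction:\nName three rivers.\n\n### Response:\n",
    max_new_tokens=128,
    add_bos=True,
))
```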
## Prompting
Alpaca-like:

```
### System:
[Optional system prompt]

### Instruction:
[Query]

### Response:
[Response]<EOT>

### Instruction:
[...]
```
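As a convenience, here is one way to assemble a prompt string in this format. The helper itself and the assumption that `<EOT>` corresponds to the tokenizer's end-of-sequence token (`</s>` for Nemo-based models) are illustrative, not taken from the card.

```python
# Hypothetical helper for the alpaca-like template above.
EOT = "</s>"  # assumed end-of-turn/EOS token; check the tokenizer config of your copy

def build_prompt(query, system=None, history=None):
    """history: list of (instruction, response) pairs from earlier turns."""
    parts = []
    if system:
        parts.append(f"### System:\n{system}\n")
    for instruction, response in history or []:
        parts.append(f"### Instruction:\n{instruction}\n")
        parts.append(f"### Response:\n{response}{EOT}\n")
    parts.append(f"### Instruction:\n{query}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)

print(build_prompt("Summarize the plot of Moby-Dick in two sentences.",
                   system="You are a helpful assistant."))
```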
## Training details
Rank 64 QDoRA, trained on the following data mix:
- All of kalomaze/Opus_Instruct_3k
- All conversations with a reward model rating above 5 in Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered
- 50k of Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- All stories above 4.7 rating and published before 2020 in Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered
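For readers unfamiliar with QDoRA (DoRA adapters trained over a quantized base), a rough rank-64 configuration sketch using peft + bitsandbytes follows. Everything except the rank (alpha, dropout, target modules, quantization settings, base checkpoint choice) is an assumption; this is not the actual training script.

```python
# Rough rank-64 QDoRA setup sketch: 4-bit quantized base + DoRA adapters via peft.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    quantization_config=bnb_config,
    device_map="auto",
)
peft_config = LoraConfig(
    r=64,                 # the rank stated on the card
    lora_alpha=64,        # assumption
    lora_dropout=0.0,     # assumption
    use_dora=True,        # DoRA weight decomposition on top of the quantized base
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()
```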
Base model: [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)