---
license: apache-2.0
---
|
![download.png](https://raw.githubusercontent.com/Fischherboot/Aculi/main/watermark-no-bg.png) |
|
|
|
|
|
# mistral-doryV2-12b-GGUF |
|
|
|
# Consider using [Koboldcpp](https://github.com/LostRuins/koboldcpp)
|
|
|
|
|
# Original Model Card:
|
|
|
|
|
# Dory 12b (v2) |
|
a redone instruct finetune of mistral nemo 12b's base. *not* (E)RP-focused, leave that to drummer.
|
|
|
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/BiBtgV_WEIha72WqETWfk.gif) |
|
|
|
thanks to twisted again for the compute :3 |
|
|
|
## Prompting |
|
alpaca-like: |
|
``` |
|
### System: |
|
[Optional system prompt] |
|
|
|
### Instruction: |
|
[Query] |
|
|
|
### Response: |
|
[Response]</s> |
|
|
|
### Instruction: |
|
[...] |
|
``` |
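A small helper can assemble this template programmatically. The section tags come straight from the block above; the function itself (its name and signature) is just an illustrative sketch, not part of the model's tooling:

```python
def build_prompt(instruction, system=None, history=None):
    """Assemble an alpaca-like prompt in the format shown above.

    history: optional list of (instruction, response) pairs from
    earlier turns; completed responses are closed with </s>.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    for past_inst, past_resp in history or []:
        parts.append(f"### Instruction:\n{past_inst}")
        parts.append(f"### Response:\n{past_resp}</s>")
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # model continues from here
    return "\n\n".join(parts)

print(build_prompt("Hello!", system="You are helpful."))
```

The resulting string can then be passed to whatever GGUF runner you use (Koboldcpp, llama.cpp, etc.) as the raw prompt.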
|
|
|
## Training details |
|
Rank 64 QDoRA, trained on the following data mix: |
|
- All of [kalomaze/Opus_Instruct_3k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_3k) |
|
- All conversations with a reward model rating above 5 in [Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered) |
|
- 50k of [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) |
|
- All stories above 4.7 rating and published before 2020 in [Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered](https://huggingface.co/datasets/Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered) |
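The rating and date cutoffs above can be sketched as simple filter predicates, e.g. for use with `datasets.Dataset.filter`. Note the column names (`reward_rating`, `rating`, `published_year`) are assumptions for illustration, not the actual schemas of those datasets:

```python
# Hypothetical predicates mirroring the data-mix filters above.
# Column names are assumed; check each dataset's real schema.

def keep_magpie(example):
    # Keep conversations with a reward model rating above 5.
    return example["reward_rating"] > 5

def keep_story(example):
    # Keep stories rated above 4.7 and published before 2020.
    return example["rating"] > 4.7 and example["published_year"] < 2020

# With the Hugging Face `datasets` library, these would be applied as:
#   ds = ds.filter(keep_magpie)
```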
|
|