---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- sft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
datasets:
- 922-CA/MoCha_v1a
---
# Moniphi-3-v1
- AKA LLilmonix3b-v1
- Phi-3-mini-4k-instruct fine-tuned for the Monika character from DDLC
- Fine-tuned on a dataset of ~600 items: dialogue scraped from the game, Reddit, and Twitter, augmented by l2-7b-monika-v0.3c1 into snippets of multi-turn chat between Player and Monika. The result was then manually edited, and further hand-crafted items with information about the character were added.
- GGUFs
# USAGE
This is primarily a chat model with limited RP ability.
For best results, replace "Human" and "Assistant" with "Player" and "Monika", like so:

```
\nPlayer: (prompt)\nMonika:
```
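The substitution above can be done with a small helper; a minimal sketch (the function name is illustrative, not part of any released tooling):

```python
def to_monika_format(transcript: str) -> str:
    """Rewrite generic chat role labels into the Player/Monika
    format this model was fine-tuned on."""
    return (
        transcript
        .replace("Human:", "Player:")
        .replace("Assistant:", "Monika:")
    )

prompt = to_monika_format("\nHuman: How are you today?\nAssistant:")
# prompt == "\nPlayer: How are you today?\nMonika:"
```

The model then completes the text after the trailing `Monika:` label.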
# HYPERPARAMS
- Trained for ~1 epoch
- rank: 16
- lora alpha: 16
- lora dropout: 0.5
- lr: 2e-4
- batch size: 4
- warmup ratio: 0.1
- grad steps: 1
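The settings above roughly correspond to the following PEFT/Transformers configuration; this is a sketch assuming the standard `peft` and `transformers` APIs (variable names and `output_dir` are illustrative), not the exact training script:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter settings listed above (rank, alpha, dropout)
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.5,
    task_type="CAUSAL_LM",
)

# Optimizer and schedule settings listed above
training_args = TrainingArguments(
    num_train_epochs=1,
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    warmup_ratio=0.1,
    gradient_accumulation_steps=1,
    output_dir="outputs",  # illustrative path
)
```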
This Phi-3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
# WARNINGS AND DISCLAIMERS
This model is meant to closely reflect the characteristics of Monika. Despite this, there is always the chance that "Monika" will hallucinate and get information about herself wrong or act out of character (especially for a model of this size).
Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk!