# kunoichi-lemon-royale-7B
Lightly tested with both Alpaca and ChatML prompt formats. Works well with temperature 1.0 and minP 0.01, but feel free to experiment with other sampler settings. Tested out to 8K context.
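Below is a minimal sketch of prompting the model with these settings via transformers. The repo id and the ChatML prompt are illustrative assumptions, and `min_p` sampling requires a recent transformers release.

```python
# Sketch: load the merge and sample with the suggested settings
# (temperature 1.0, minP 0.01). Repo id and prompt are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/kunoichi-lemon-royale-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML-style prompt; an Alpaca-style template works as well.
prompt = (
    "<|im_start|>system\nYou are a creative roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce your character.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,  # suggested temperature
    min_p=0.01,       # suggested minP cutoff
    max_new_tokens=256,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```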
When generating narrative, this model has a tendency to lean into revealing character interiority, which some people might find interesting. I found that it was good at not only following the character card but also taking strong hints from the first message. Unfortunately, this experimental model may occasionally leak prompt context into its output.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the DARE TIES merge method, with SanjiWatsuki/Kunoichi-7B as the base. Each of the component models had strengths I liked to varying degrees, so their weights and densities were adjusted in rough proportion to my aesthetic preferences.
### Models Merged
The following models were included in the merge:
* KatyTheCutie/LemonadeRP-4.5.3
* core-3/kuno-royale-v2-7b
* SanjiWatsuki/Kunoichi-DPO-v2-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: SanjiWatsuki/Kunoichi-7B
    # no parameters necessary for base model
  - model: KatyTheCutie/LemonadeRP-4.5.3
    parameters:
      weight: 0.3
      density: 0.4
  - model: core-3/kuno-royale-v2-7b
    parameters:
      weight: 0.3
      density: 0.4
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      weight: 0.4
      density: 0.8
merge_method: dare_ties
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
```
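To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's command-line entry point. A minimal sketch; the file and output paths are illustrative, and the `--cuda` flag is optional:

```sh
pip install mergekit
mergekit-yaml kunoichi-lemon-royale-7B.yml ./kunoichi-lemon-royale-7B --cuda
```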