---
library_name: transformers
tags:
- mergekit
- merge
- mistral
- roleplay
---
## Info

This one is unhinged. Same idea as the previous version, but merged with [ResplendentAI/Pandora_7B](https://huggingface.co/ResplendentAI/Pandora_7B).

SillyTavern presets are included.
___

# Irene-RP-v2-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
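If you just want to run the merged model, a minimal loading sketch with transformers looks like the following. The repository id is an assumption based on this card's title, so adjust it to wherever the weights are actually hosted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Virt-io/Irene-RP-v2-7B"  # assumed repo id; change to the actual repository

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the merge was produced in float16
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Write a short in-character greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```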
## Merge Details

### Merge Method

This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) method, using [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) as the base.
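In task arithmetic, each fine-tuned model contributes a "task vector" (its parameter delta from the base model), and the merged weights are the base weights plus a weighted sum of those deltas. A rough sketch of this computation follows the configuration below.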
### Models Merged

The following models were included in the merge:

* Mergekit/Eros-Erebus-Holodeck-7B
* [ResplendentAI/Pandora_7B](https://huggingface.co/ResplendentAI/Pandora_7B)
* [Virt-io/Nina-v2-7B](https://huggingface.co/Virt-io/Nina-v2-7B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      weight: 1.0
  - model: Virt-io/Nina-v2-7B
    parameters:
      weight: 0.55
  - model: Mergekit/Eros-Erebus-Holodeck-7B
    parameters:
      weight: 0.65
  - model: ResplendentAI/Pandora_7B
    parameters:
      weight: 0.55
merge_method: task_arithmetic
base_model: Endevor/InfinityRP-v1-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
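For intuition, here is a rough, self-contained sketch of the combination this `task_arithmetic` configuration describes. It is illustrative only, not mergekit's implementation; in particular, treating `normalize: true` as rescaling the weights to sum to 1 is an assumption.

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights, normalize=True):
    """Merge state dicts as: base + sum_i w_i * (tuned_i - base)."""
    if normalize:
        # Assumption: normalization rescales the weights to sum to 1.
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_t in base_sd.items():
        # Accumulate each fine-tune's weighted delta ("task vector") from the base.
        delta = torch.zeros_like(base_t, dtype=torch.float32)
        for sd, w in zip(tuned_sds, weights):
            delta += w * (sd[name].float() - base_t.float())
        # Cast back to float16 to match the config's dtype.
        merged[name] = (base_t.float() + delta).to(torch.float16)
    return merged
```

In practice the merge is reproduced by saving the YAML above to a file and passing it to mergekit's `mergekit-yaml` command rather than combining weights by hand.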