![](https://huggingface.co/invisietch/RainbowsEdge-v0.1-rc2-18B-GGUF/resolve/main/header.png)
Listen here, to my tale so far
About the magic that tames the cold
For every {{user}}, and every {{char}}
At the Rainbow's Edge lies a pot of gold
## Model Details
This is a release candidate model. That means it has not been tested as extensively as my full releases and as such you should expect various issues. The primary purpose of uploading this model is to get feedback.
Rainbow's Edge is a passthrough upscale of two highly capable Llama 3 8B roleplay models, with the goal of creating a very uncensored, very capable model that fits comfortably on a wide range of consumer GPUs.
- The Q6_K runs well on a 24GB GPU.
- The Q5_K_S runs well on a 16GB GPU.
Because this is a Llama 3 (not Llama 3.1) merge, the base context is 8192. The config for RainbowsEdge-v0.1-rc1-18B was corrupted and has now been repaired; in limited testing, this model supports context beyond 8192 with RoPE scaling, albeit with degraded performance. Catastrophic degradation does not occur until at least 16384.
This model is otherwise identical to RainbowsEdge-v0.1-rc1-18B.
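If you want to load a GGUF programmatically rather than through koboldcpp, here's a minimal sketch using llama-cpp-python. The file name is hypothetical (check the repo's file list for the real quant names), and the RoPE scale shown is just one way to stretch the context; koboldcpp's automatic RoPE uses its own heuristic.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF from this
# repo has been downloaded locally. The file name below is hypothetical.
from llama_cpp import Llama

# Native Llama 3 context: no RoPE override required.
llm = Llama(
    model_path="RainbowsEdge-v0.1-rc2-18B.Q5_K_S.gguf",  # hypothetical local path
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers; Q5_K_S fits on a 16GB GPU
)

# Pushing past 8192 (with the degradation noted above) via linear RoPE scaling.
llm_long = Llama(
    model_path="RainbowsEdge-v0.1-rc2-18B.Q5_K_S.gguf",
    n_ctx=12800,
    n_gpu_layers=-1,
    rope_freq_scale=8192 / 12800,  # ≈ 0.64
)
```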
## Feedback
Please, please, please give feedback on this merge. It is simply the result of a theory, minimally tested to ensure that it doesn't completely collapse. I welcome feedback in the community section here or on Discord (my name there is the same as here); I can also frequently be found in the local LLM channels on the Chub & SillyTavern Discord servers.
## Quantization Formats
As this is a release candidate, I have only provided GGUFs.
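If it helps, this is a minimal sketch of pulling a quant with huggingface_hub; the GGUF filename below is an assumption, so check the repository's file list for the actual name.

```python
# Minimal sketch, assuming huggingface_hub is installed. The filename is
# hypothetical; check the repo's Files tab for the real quant names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="invisietch/RainbowsEdge-v0.1-rc2-18B-GGUF",
    filename="RainbowsEdge-v0.1-rc2-18B.Q5_K_S.gguf",
)
print(path)
```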
## Disclaimer
This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model.
## Settings

### Samplers
I'm using these sampler settings:
- Context: 12800 (with koboldcpp automatic RoPE)
- Temperature: 0.8
- Top P: 0.9
- Min P: 0.08
- Repetition Penalty: 1.11 (or DRY)
- Rep Pen Range: 1536
These are by no means the perfect settings; feel free to experiment.
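If you're calling the model directly rather than through a frontend, this is roughly how those values map onto a llama-cpp-python completion call; it's an untested sketch with a hypothetical file name. Rep Pen Range and DRY are frontend/backend options (SillyTavern, koboldcpp) and aren't passed here.

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF (hypothetical name).
from llama_cpp import Llama

llm = Llama(
    model_path="RainbowsEdge-v0.1-rc2-18B.Q5_K_S.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,
)

output = llm(
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    max_tokens=256,
    temperature=0.8,      # Temperature
    top_p=0.9,            # Top P
    min_p=0.08,           # Min P
    repeat_penalty=1.11,  # Repetition Penalty (range/DRY handled by the frontend)
    stop=["<|eot_id|>"],
)
print(output["choices"][0]["text"])
```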
### Prompting Format
I'd recommend Llama-3 Instruct prompting format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
Some of the models included in the merge were trained on ChatML & Alpaca, so you can also try those formats; I have not tested them.
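A small sketch of assembling the Llama-3 Instruct template above in Python (with the double newline after each header that the stock Llama-3 chat template uses); the function name and example strings are purely illustrative.

```python
# Minimal sketch of building a single-turn Llama-3 Instruct prompt.
# Function and variable names are illustrative, not part of any library.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are {{char}}, roleplaying with {{user}}.", "Hello!"))
```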
## Example Storywriting
N/A.
## Merge Strategy

### Models Used
The following models were used to create RainbowsEdge v0.1 RC2:

- invisietch/EtherealRainbow-v0.3-8B
- SicariusSicariiStuff/Dusk_Rainbow
### Mergekit Config
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
  - sources:
      - layer_range: [0, 24]
        model: invisietch/EtherealRainbow-v0.3-8B
  - sources:
      - layer_range: [8, 24]
        model: SicariusSicariiStuff/Dusk_Rainbow
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - layer_range: [8, 24]
        model: invisietch/EtherealRainbow-v0.3-8B
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - layer_range: [8, 24]
        model: SicariusSicariiStuff/Dusk_Rainbow
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - layer_range: [24, 32]
        model: invisietch/EtherealRainbow-v0.3-8B
```
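Scaling `o_proj` and `down_proj` to zero in the duplicated 8–24 slices zeroes those blocks' contributions to the residual stream, a commonly used way to make passthrough depth upscales less disruptive. For reference, a sketch of running the config above with mergekit; the config and output paths are hypothetical placeholders, and this assumes an install that provides the standard `mergekit-yaml` entry point.

```python
# Minimal sketch, assuming mergekit is installed and exposes the standard
# `mergekit-yaml` CLI. Paths are hypothetical placeholders.
import subprocess

subprocess.run(
    ["mergekit-yaml", "rainbows-edge.yaml", "./RainbowsEdge-v0.1-rc2-18B"],
    check=True,
)
```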