---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- mistral nemo
- mergekit
pipeline_tag: text-generation
---
(quants uploading, examples to be added)
<h2><font color="green"> Mistral-Nemo-WORDSTORM-pt3-RCM-POV-Nightmare-18.5B-Instruct </font></h2>
<img src="nightmare.jpg" style="float:right; width:300px; height:300px; padding:10px;">
<B><font color="red">WARNING:</font> NSFW. Ultra Detailed. HORROR, VIOLENCE. Swearing. UNCENSORED. SMART.</B>
Storytelling, writing, creative writing and roleplay, all running on Mistral Nemo's new 128K+ core.
This is a massive super merge that takes all the power of the following 3 powerful models and combines them into one.
This model contains "RCM":
- Mistral Nemo model at 18.5B consisting of "MN-Rocinante-12B-v1.1" and "Mistral Nemo Instruct 12B"
- Mistral Nemo model at 18.5B consisting of "MN-12B Celeste-V1.9" and "Mistral Nemo Instruct 12B"
- Mistral Nemo model at 18.5B consisting of "MN-Magnum-v2.5-12B-kto" and "Mistral Nemo Instruct 12B".
<B>Details on the core models:</B>
"nothingiisreal/MN-12B-Celeste-V1.9" is ranked #1 (models 8B, 13B, 20B) on the UGI leaderboard ("UGI" sort),
and is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing", models 8B, 13B, 20B, at UGI).
"anthracite-org/magnum-v2.5-12b-kto" is ranked #1 (models 8B, 13B, 20B) on the UGI leaderboard ("Writing" sort),
and is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing", models 8B, 13B, 20B, at UGI).
"TheDrummer/Rocinante-12B-v1.1" is a very high-scoring model (models 8B, 13B, 20B) on the UGI leaderboard
("UGI" sort), and is combined with "Mistral Nemo Instruct 12B" (ranked #4 under "writing", models 8B, 13B, 20B, at UGI).
"mistralai/Mistral-Nemo-Instruct-2407" is a very high-scoring model (models 8B, 13B, 20B) on the UGI leaderboard ("writing" sort)
and is the base model of all three fine-tuned models above.
[ https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard ]
<B>About this model:</B>
This super merge captures the attributes of all these top models and makes them even stronger:
- Instruction following
- Story output quality
- Character
- Internal thoughts
- Voice
- Humor
- Details, connection to the world
- General depth and intensity
- Emotional connections.
- Prose quality
This super merge is also super stable (within a hair's breadth of Mistral Nemo's perplexity), and runs with all parameters and settings.
10 versions of this model will be released, this is release #3 - "part 3".
<B>POV Nightmare?</B>
This model puts the user / character in nightmare situations.
It does not hold back.
Usually I release one or two versions from the "best of the lot", however in this case all
of the versions turned out so well - all with their own quirks and character - that I will be
releasing all 10.
An additional series 2 and 3 will follow these 10 models as well.
(example generations below)
The model may produce NSFW content: swearing, horror, graphic horror, distressing scenes, etc.
This model has an INTENSE action AND HORROR bias, with a knack for cliffhangers and surprises.
It is not as "dark" as the Grand Horror series, but it is just as intense.
This model is perfect for any general, fiction related or roleplaying activities and has a 128k+ context window.
This is a fiction model at its core and can be used for any genre(s).
WORDSTORM series is a totally uncensored, fiction writing monster and roleplay master. It can also be used for
just about any general fiction (all genres) activity including:
- scene generation
- scene continuation
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- storytelling
- writing
- fiction
- roleplaying
- rp
- graphic horror
- horror
- dark humor
- nsfw
- and can be used for any genre(s).
<B>Templates to Use:</B>
The template used will affect output generation and instruction following.
Alpaca:
<pre>
{
"name": "Alpaca",
"inference_params": {
"input_prefix": "### Instruction:",
"input_suffix": "### Response:",
"antiprompt": [
"### Instruction:"
],
"pre_prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
}
}
</pre>
Chatml:
<pre>
{
"name": "ChatML",
"inference_params": {
"input_prefix": "<|im_end|>\n<|im_start|>user\n",
"input_suffix": "<|im_end|>\n<|im_start|>assistant\n",
"antiprompt": [
"<|im_start|>",
"<|im_end|>"
],
"pre_prompt": "<|im_start|>system\nPerform the task to the best of your ability."
}
}
</pre>
Mistral Instruct:
<pre>
{
"name": "Mistral Instruct",
"inference_params": {
"input_prefix": "[INST]",
"input_suffix": "[/INST]",
"antiprompt": [
"[INST]"
],
"pre_prompt_prefix": "",
"pre_prompt_suffix": ""
}
}
</pre>
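For illustration, the template prefixes and suffixes above can be applied to a raw prompt programmatically. This is a minimal sketch: the helper function and the dictionary are my own, not part of any library, and it only shows how the "input_prefix" / "input_suffix" fields above wrap the user text.

```python
# Illustrative only: how the template prefixes/suffixes above wrap a user prompt.
# The TEMPLATES dict and format_prompt helper are assumptions for this sketch,
# not part of llama.cpp, LM Studio, or any other tool.

TEMPLATES = {
    "Mistral Instruct": {"input_prefix": "[INST]", "input_suffix": "[/INST]"},
    "Alpaca": {"input_prefix": "### Instruction:", "input_suffix": "### Response:"},
}

def format_prompt(template_name: str, user_text: str) -> str:
    """Place the user text between the chosen template's prefix and suffix."""
    t = TEMPLATES[template_name]
    return f"{t['input_prefix']} {user_text} {t['input_suffix']}"

prompt = format_prompt("Mistral Instruct", "Continue the scene: the lights go out.")
```

Most front-ends (LM Studio, text-generation-webui, etc.) do this wrapping for you once the template is selected; the sketch just makes the mechanics explicit.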
<b>Optional Enhancement:</B>
The following can be used in place of the "system prompt" or "system role" to further enhance the model.
It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as when used in the "system prompt" or "system role".
Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement that seems to help scene generation
and scene continuation functions.
This enhancement WAS NOT used to generate the examples below.
<h3>MODELS USED:</h3>
Special thanks to the incredible work of the model makers "mistralai" "TheDrummer", "anthracite-org", and "nothingiisreal".
Models used:
[ https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407 ]
[ https://huggingface.co/TheDrummer/Rocinante-12B-v1.1 ]
[ https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto ]
[ https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9 ]
This is a four step merge (3 pass-throughs => "Fine-Tune" / "Instruct") then "mated" using "DARE-TIES".
It involves these three models:
[ https://huggingface.co/DavidAU/MN-18.5B-Celeste-V1.9-Story-Wizard-ED1-Instruct-GGUF ]
[ https://huggingface.co/DavidAU/MN-Magnum-v2.5-18.5B-kto-Story-Wizard-ED1-Instruct-GGUF ]
[ https://huggingface.co/DavidAU/MN-Rocinante-18.5B-v1.1-Story-Wizard-ED1-Instruct-GGUF ]
Combined as follows using "MERGEKIT":
<PRE>
models:
- model: E:/MN-Rocinante-18.5B-v1.1-Instruct
- model: E:/MN-magnum-v2.5-12b-kto-Instruct
parameters:
weight: .6
density: .8
- model: E:/MN-18.5B-Celeste-V1.9-Instruct
parameters:
weight: .38
density: .6
merge_method: dare_ties
tokenizer_source: union
base_model: E:/MN-Rocinante-18.5B-v1.1-Instruct
dtype: bfloat16
</PRE>
Special Notes:
Due to how DARE-TIES works, every time you run this merge you will get a slightly different model.
This is due to the "random" pruning method in "DARE-TIES".
Mistral Nemo models used here seem acutely sensitive to this process.
"tokenizer_source: union" is used so that multiple "templates" work, as each fine tune uses one or two of the templates.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>
Examples are created using quant Q4_K_M, "temp=.8", minimal parameters, and the "Mistral Instruct" template.
Model has been tested with "temp" from ".1" to "5".
Below are the least creative outputs; the prompt is in <B>BOLD</B>.
---
Examples will be posted soon...