---
license: apache-2.0
language:
- en
tags:
- creative
- Uncensored
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- brainstorm 40x
- swearing
- rp
- 128k context
- horror
- llama 3.2
- mergekit
pipeline_tag: text-generation
---
(Quants uploading; examples to be added.)

<h2>L3.2-Rogue-Creative-Instruct-Uncensored-7B-GGUF</h2>

<img src="rogue-ab.webp" style="float:right; width:300px; height:300px; padding:10px;">
This is a Llama 3.2 model with a maximum context of 131,072 tokens (128k+).

This model is designed to be relatively bulletproof and operates across most parameter ranges, including temperature settings from 0 to 5.

It is an altered version of "Llama-3.2-3B-Instruct-uncensored" [ https://huggingface.co/chuanli11/Llama-3.2-3B-Instruct-uncensored ] built with the Brainstorm 40x method developed by David_AU to drastically alter the model's prose output and abilities. Brainstorm also expands the model by 39 layers (to 67 layers), for 7.54B parameters (605 tensors).

This model retains all the training of the original Llama 3.2 3B Instruct but now processes instructions and generates output with deeper context and greater intensity. Llama 3.2's instruction following is stronger than that of the Llama 3 and 3.1 versions.

The "Abliterate" process decensors the model; you control the censorship level(s) directly via prompt, and Brainstorm 40x also enhances the decensoring.

This model is suited to any writing, fiction, or storytelling activity.

Due to "Brainstorm 40x", this version shows unusual levels of detail (scene, location, surroundings, items), with that detail focused more on the moment and the characters.

It may also work for role play and other activities (see settings below).

It requires the Llama3 template and/or the "Command-R" template.

Example outputs below include multiple "regens" at different temp/rep pen settings. Some examples show use of a PROSE CONTROL prompt to force the model to alter its output generation.
<B>Model Notes:</B>

- Detail, prose, and fiction-writing abilities are significantly increased.
- For more varied prose (sentence/paragraph/dialog), raise the temp and/or add more instructions to your prompt(s).
- Role-players: be careful raising temp too high, as it may affect instruction following.
- This model works with a rep pen of 1.05 or higher (see notes).
- If you want a specific type of prose (e.g. horror), add "(vivid horror)" or "(graphic vivid horror)" (no quotes) to your prompt(s).
- The bias of this model is controlled directly by your prompts.
- For creative uses, different quants will produce slightly different output.
- Source code for this model will be uploaded to a separate repo shortly.
<B>Settings, Quants and Critical Operations Notes:</b>

This model has been modified ("Brainstorm") to alter prose output, and it generally produces longer text than average.

Changes in temp (e.g. .4, .8, 1.5, 2, 3) will drastically alter output. Rep pen settings will also alter output.

This model needs a rep pen of 1.05 or higher, as lower values may cause repeated paragraphs at the end of output; however, LOWER rep pen values may also result in very different (creative / unusual) generation.

For role play: a rep pen of 1.1 to 1.14 is suggested.

If you use a lower rep pen (e.g. 1, 1.01, 1.02, ...), the model will still work but may repeat (uncommon) or "RANT" (somewhat common) to a crazy degree. (See example 1, generation 2 below for a "RANT".)

Raise/lower rep pen SLOWLY, e.g. 1.011, 1.012, ...

Rep pen will alter prose, word choice (a lower rep pen sometimes means smaller / more small words), and creativity. Example one (below) shows the same temp but different rep pen (1.02 vs 1.1).

To really push the model: rep pen 1.05 or lower / temp 3+. Be ready to stop the output, because it may go and go at these strong settings. You can also set a "hard stop" (a maximum-generated-tokens limit) to address lower rep pen / high creativity settings.

Longer prompts vastly increase the quality of the model's output.
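The guidance above can be captured in a small sampler-config helper. This is a minimal sketch only: the helper name is hypothetical, and the parameter names (temperature, repeat_penalty, max_tokens) follow llama-cpp-python conventions, which is an assumption about your inference stack, not part of this repo.

```python
# Illustrative sketch: the function name is hypothetical, and the
# parameter names (temperature, repeat_penalty, max_tokens) follow
# llama-cpp-python conventions -- adapt to your inference stack.

def creative_settings(temperature=0.8, repeat_penalty=1.05, max_tokens=512):
    """Build a sampler config for this model.

    Flags configs whose rep pen is below the 1.05 floor recommended
    above (risk of repeated paragraphs or a runaway "RANT").
    """
    return {
        "temperature": temperature,
        "repeat_penalty": repeat_penalty,
        "max_tokens": max_tokens,  # "hard stop" against runaway generations
        "rant_risk": repeat_penalty < 1.05,
    }

# Role-play preset per the guidance above (rep pen 1.1 to 1.14).
roleplay = creative_settings(temperature=0.8, repeat_penalty=1.1)

# "Push the model" preset: high temp + low rep pen, so keep the hard stop tight.
pushed = creative_settings(temperature=3.0, repeat_penalty=1.02, max_tokens=300)
```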
QUANT CHOICE(S):

Higher quants will have more detail and nuance and, in some cases, stronger "emotional" levels. Characters will also be more "fleshed out", and the sense of "there" will also increase.

Q4KM/Q4KS are good, strong quants; however, if you can run Q5, Q6 or Q8, go for the highest quant you can.

This repo also has 3 "ARM" quants for computers that support them. If you use these on a non-ARM machine, tokens per second will be very low.

IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations from it will differ from other quants. You may want to try it and compare it to other quants' output.

Special note on Q2K/Q3 quants: You may need to use temp 2 or lower with these quants (1 or lower for Q2K); there is simply too much compression at this level, damaging the model. I will see if Imatrix versions of these quants function better. Rep pen adjustments may also be required to get the most out of the model at these quant levels.
KNOWN ISSUES:

The model may misspell a word from time to time and/or fail to capitalize a word.

Short prompts with some rep pen/temp combinations may lead to longer-than-expected generation and/or a "RANT". A regen will usually correct any issues.

Note that the "censorship" of the original Llama 3.2 3B Instruct model is still present in this model. For some generations (e.g. to get it to "swear") you may need to regen 2-5 times to get the model to "obey".
<B>Model Template:</B>

This is a LLAMA3 model and requires the Llama3 template, but it may work with other templates; its maximum context is 128k (131,072 tokens). If you use the "Command-R" template, your output will be very different from the "Llama3" template's.

Here is the standard LLAMA3 template:
<PRE>
{
  "name": "Llama 3",
  "inference_params": {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ]
  }
}
</PRE>
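If your client does not ship this template, the prompt string can be assembled by hand from the fields above. A minimal Python sketch (the function name is illustrative and not part of any library; the special tokens are copied verbatim from the template JSON):

```python
# Assemble a Llama 3 prompt string from the template fields above.
# The function name is illustrative; the special tokens come verbatim
# from the template JSON in this card.

def build_llama3_prompt(system: str, user: str) -> str:
    pre_prompt = (
        "<|start_header_id|>system<|end_header_id|>\n\n"  # pre_prompt_prefix
        + system
        + "<|eot_id|>"                                    # pre_prompt_suffix
    )
    turn = (
        "<|start_header_id|>user<|end_header_id|>\n\n"    # input_prefix
        + user
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"  # input_suffix
    )
    return pre_prompt + turn

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Continue the scene: the door creaks open.",
)
```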
<b>Optional Enhancement:</B>

The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along; in that case the enhancements do not have as strong an effect as when used as the "system prompt" or "system role".

Copy and paste EXACTLY as noted; DO NOT line wrap or break the lines, and maintain the carriage returns exactly as presented.
<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>
You do not need to use this; it is only presented as an additional enhancement, which seems to help the scene-generation and scene-continue functions.

This enhancement WAS NOT used to generate the examples below.
<h3>EXAMPLES PROMPTS and OUTPUT:</h3>

Examples are created using quant IQ4_XS, "temp=.8", "rep pen=1.05" (unless otherwise stated), minimal parameters, and the "LLAMA3" template. The model has been tested with "temp" from ".1" to "5".

Below are the least creative outputs; the prompt is in <B>BOLD</B>.

---

<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral details. Violence. HORROR. Swearing. UNCENSORED.</B>

---