Text Generation
GGUF
English
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
story
writing
fiction
roleplaying
horror
general usage
roleplay
neo quant
fantasy
story telling
ultra high precision
Inference Endpoints
imatrix
conversational
Update README.md
README.md
CHANGED
@@ -119,6 +119,33 @@ Suggestions for this model:
- IQ2s will also be effective due to the sheer number of parameters (35 billion) in the model (see examples below).
- Q4s and Q5s will still be strong, with Q6 being medium to low, relatively speaking, in terms of "horror" changes. This is due to how the Imatrix process affects quants of different bit sizes - lower is stronger, higher is weaker. Again, these are relative.

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp", "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5 to 2.5.

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> Parameters -> lower right.

: in Silly Tavern this is called "Smoothing".

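If you drive KoboldCpp through its HTTP API rather than the settings menu, the same values can be passed per request. The sketch below is a minimal illustration only: the /api/v1/generate endpoint, the default port 5001, and the "smoothing_factor" / "rep_pen" field names are assumptions about your KoboldCpp build, so verify them against the API of the version you actually run.

```python
# Minimal sketch (not from the model card): applying the suggested sampler
# settings through KoboldCpp's HTTP API instead of the Settings menu.
# Assumptions: KoboldCpp is running locally on its default port (5001) and this
# build accepts "smoothing_factor" and "rep_pen" in the generate payload.
import requests

payload = {
    "prompt": "Continue the scene: the lighthouse lamp flickered once, then died.\n",
    "max_length": 300,
    "temperature": 0.8,
    "smoothing_factor": 2.0,  # suggested range above: 1.5 to 2.5
    "rep_pen": 1.1,           # optional when smoothing is enabled (see OTHER OPTIONS below)
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
response.raise_for_status()
print(response.json()["results"][0]["text"])
```
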
NOTE: For "text-generation-webui":

-> if you are using GGUFs, you need to use the "llama_HF" loader (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

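One way to fetch those config files and place them next to the GGUF is the huggingface_hub library. A minimal sketch; the repo id and the local folder below are placeholders, so substitute the actual source repository from the collection linked above and your own text-generation-webui models path.

```python
# Minimal sketch (not from the model card): download only the config/tokenizer
# files of a SOURCE (unquantized) repo so text-generation-webui's "llama_HF"
# loader can use them alongside the GGUF. The repo id below is a placeholder.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="DavidAU/SOURCE-MODEL-REPO",              # placeholder - pick the repo from the collection above
    allow_patterns=["*.json", "*.model", "*.txt"],    # configs + tokenizer only, skip the weights
    local_dir="text-generation-webui/models/MyModel", # placeholder path - adjust to your install
)
print("Config files saved under:", path)
```
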
OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").

- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.

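For text-generation-webui specifically, these two knobs can also be set per request through its OpenAI-compatible API when the server is started with --api; whether "smoothing_factor" is forwarded as an extra generation parameter depends on the build, so treat the field names and port below as assumptions to verify rather than documented guarantees.

```python
# Minimal sketch (not from the model card): sending repetition_penalty and
# smoothing_factor per request to text-generation-webui's OpenAI-compatible
# endpoint (server started with --api). Assumption: this build passes both
# fields through as generation parameters - confirm against your version.
import requests

payload = {
    "prompt": "Continue the scene: the cellar door creaked open on its own.\n",
    "max_tokens": 300,
    "temperature": 0.8,
    "repetition_penalty": 1.1,  # the rep pen alternative noted above
    "smoothing_factor": 2.0,    # "Quadratic Sampling" / smoothing
}

response = requests.post("http://127.0.0.1:5000/v1/completions", json=payload, timeout=300)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```
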
<b>Optional Enhancement:</B>

The following can be used in place of the "system prompt" or "system role" to further enhance the model.