DavidAU committed · Commit 63bb8b7 · verified · 1 Parent(s): 6509b6f

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
README.md CHANGED
@@ -35,8 +35,6 @@ tags:
 pipeline_tag: text-generation
 ---
 
-(2 large examples below (1,2,3 and 4 experts output shown per example))
-
 <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. HORROR. Swearing. UNCENSORED... humor, romance, fun. </B>
 
 <h2>Mistral-MOE-4X7B-Dark-MultiVerse-24B-GGUF</h2>
@@ -118,6 +116,9 @@ You can set the number of experts in LMStudio (https://lmstudio.ai) at the "load
 
 For Text-Generation-Webui (https://github.com/oobabooga/text-generation-webui) you set the number of experts at the loading screen page.
 
+For KoboldCPP (https://github.com/LostRuins/koboldcpp) Version 1.8+, on the load screen, click on "TOKENS";
+you can set the number of experts on this page, and then launch the model.
+
 For server.exe / Llama-server.exe (Llamacpp - https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md )
 add the following to the command line to start the "llamacpp server" (CLI):
 
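The diff ends just before the README lists the actual command-line addition, so that specific flag is not shown in this excerpt. As a hedged sketch only: llama.cpp's `--override-kv` flag does exist for overriding GGUF metadata at load time, but the metadata key (`llama.expert_used_count`), the model filename, and the port below are illustrative assumptions, not the README's actual text.

```shell
# Hypothetical sketch of launching llama-server with a fixed number of
# active MOE experts. The filename and metadata key are assumptions.
MODEL="model.gguf"   # placeholder filename for illustration
EXPERTS=2            # e.g. 1..4 for a 4-expert MOE

# Assemble the launch command; --override-kv KEY=TYPE:VALUE is real
# llama.cpp syntax for overriding model metadata at load time.
CMD="llama-server -m $MODEL --override-kv llama.expert_used_count=int:$EXPERTS --port 8080"
echo "$CMD"
```

Printing the assembled command rather than executing it keeps the sketch self-contained; in practice you would run the `llama-server` invocation directly against a downloaded GGUF file.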