Update README.md
README.md
CHANGED
@@ -15,7 +15,7 @@ inference: false
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/
+<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
 <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -57,7 +57,7 @@ Open the text-generation-webui UI as normal.
 
 This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility
 
-It was created without group_size to minimise VRAM usage, and with `--act-order` to improve inference quality.
+It was created without group_size to minimise VRAM usage, and with `--act-order` to improve inference quality.
 
 * `Wizard-Vicuna-30B-Uncensored-GPTQ-4bit.act-order.safetensors`
 * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
@@ -74,7 +74,7 @@ It was created without group_size to minimise VRAM usage, and with `--act-order`
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/
+[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
 
 ## Thanks, and how to contribute.
 
@@ -84,14 +84,14 @@ I've had a lot of people ask if they can contribute. I enjoy providing models an
 
 If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
 
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits
+Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
 
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
 **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
 
-Thank you to all my generous patrons and donaters
+Thank you to all my generous patrons and donaters!
 <!-- footer end -->
 
 # Original model card
@@ -100,12 +100,12 @@ This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) tr
 
 Shout out to the open source AI/ML community, and everyone who helped me out.
 
-Note:
+Note:
 
-An uncensored model has no guardrails.
+An uncensored model has no guardrails.
 
 You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
 
 Publishing anything this model generates is the same as publishing it yourself.
 
-You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
+You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
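For context, the README line about creating the file "without group_size ... and with `--act-order`" refers to GPTQ quantisation settings. As an illustrative sketch only (the script name, calibration dataset, and exact flag spellings are assumptions based on common GPTQ-for-LLaMa usage, not taken from this commit — check the branch you actually use), those settings would correspond to an invocation along these lines:

```shell
# Hypothetical GPTQ-for-LLaMa quantisation command (flag names assumed).
# Note: no --groupsize flag is passed, so the whole weight matrix shares
# quantisation parameters, minimising VRAM overhead at inference time.
# --act-order quantises columns in order of decreasing activation size,
# which tends to improve output quality at the same bit width.
python llama.py /path/to/Wizard-Vicuna-30B-Uncensored c4 \
    --wbits 4 \
    --act-order \
    --save_safetensors Wizard-Vicuna-30B-Uncensored-GPTQ-4bit.act-order.safetensors
```

The trade-off named in the README follows from these two choices: omitting `--groupsize` lowers memory cost, while `--act-order` recovers some of the accuracy lost to 4-bit quantisation.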