Update README.md

---
license: cc-by-nc-4.0
---

I'm just tinkering. All credit to the original creators: Noromaid is hot.

"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the standard llama data set.<br>
I have been unable to quantify real differences between the same model "compressed" using these two methods. My current theory is that it gives "good responses" just as often as a similarly quantized model, but that the good responses are "subjectively better" with this method. Any help quantifying this would be appreciated. [Anyone know Ayumi?](https://ayumi.m8geil.de/erp4_chatlogs/?S=erv3_0#!/index)
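
If you want to roll your own "rpcal" quant, the gist is to point exllamav2's `convert.py` at an RP parquet for calibration instead of the default data. A rough sketch, assuming a checkout of the exllamav2 repo (the paths and filenames below are placeholders, not the exact command used for this model):

```python
# Rough sketch of how an "rpcal" quant is produced (not the exact command used here).
# Assumes you are in a checkout of the exllamav2 repo; all paths are placeholders.
import subprocess
import sys

subprocess.run([
    sys.executable, "convert.py",
    "-i", "models/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss",  # unquantized FP16 input
    "-o", "work",                                              # scratch/working directory
    "-cf", "models/noromaid-3.5bpw-rpcal",                     # output dir for the compiled quant
    "-b", "3.5",                                               # target bits per weight
    "-hb", "6",                                                # head layer bits
    "-c", "PIPPA-cleaned.parquet",                             # RP calibration data instead of the default
], check=True)
```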
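
For anyone who wants to take a swing at quantifying it: one crude starting point is perplexity on a held-out RP excerpt, run against both the rpcal and standard quants of the same model. A sketch using the exllamav2 Python API (paths and the sample file are placeholders, and API details may shift between versions):

```python
# Crude perplexity probe: lower is better, though it may miss "subjectively better".
# Paths and the sample file are placeholders; assumes one quant fits in VRAM at a time.
import math

import torch
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

def perplexity(model_dir: str, text: str, max_tokens: int = 2048) -> float:
    config = ExLlamaV2Config()
    config.model_dir = model_dir
    config.prepare()

    model = ExLlamaV2(config)
    cache = ExLlamaV2Cache(model, lazy=True)
    model.load_autosplit(cache)
    tokenizer = ExLlamaV2Tokenizer(config)

    ids = tokenizer.encode(text)[:, :max_tokens]
    with torch.inference_mode():
        logits = model.forward(ids).float()

    # Mean next-token negative log-likelihood over the sample
    logprobs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = ids[:, 1:].unsqueeze(-1).to(logprobs.device)
    nll = -logprobs.gather(2, targets).mean()

    del model, cache
    torch.cuda.empty_cache()  # free VRAM so the next quant can load
    return math.exp(nll.item())

sample = open("rp_sample.txt").read()  # held-out RP text, not in either calibration set
print("rpcal:   ", perplexity("models/noromaid-3.5bpw-rpcal", sample))
print("standard:", perplexity("models/noromaid-3.5bpw", sample))
```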

This model: EXL2 @ 3.5 bpw, using RP data for calibration.<br>
This quant was [requested here](https://huggingface.co/zaq-hack/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-bpw300-h6-exl2-rpcal/discussions/1).
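
If it helps, here is a minimal load-and-generate sketch with the exllamav2 Python API, adapted from the upstream example (the model directory and prompt are placeholders; use whatever prompt template the base model expects):

```python
# Minimal load-and-generate sketch; model directory and prompt are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/noromaid-3.5bpw-rpcal"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # spread layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "You are Noromaid, a roleplay partner.\n\nUser: Hi there!\nNoromaid:"
print(generator.generate_simple(prompt, settings, 200))
```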

(Personal SillyTavern settings included in the "ST" directory. Your mileage may vary.)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/vwcJfOnL-2QDJ0ShfxRJ5.png)