zaq-hack committed on
Commit 6e6f260
Parent: 8a02aff

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -6,10 +6,14 @@ base_model:
  - Undi95/Llama-3-LewdPlay-8B
  library_name: transformers
  tags:
- - mergekit
+ - not-for-all-audiences
+ - nsfw
  - merge
  ---
-
+ * <span style="color:orange">I'm just tinkering. All credit to the original creator: [Undi](https://huggingface.co/Undi95).</span>
+ * <span style="color:orange">This model is a more traditionally quantized EXL2 compared to my usual ["rpcal" versions](https://huggingface.co/zaq-hack/Llama-3-LewdPlay-8B-evo-bpw800-h8-exl2-rpcal). Llama-3-8B seems to get markedly dumber with the "rpcal" method. In previous models the difference was hard to tell, but the larger quantization error on Llama-3-8B makes it obvious which method is better. I deleted the lower rpcal quants because they are noticeably dumber by comparison. For this model I recommend this version, and this version only; lower quants are significantly worse.</span>
+ * <span style="color:orange">This model: EXL2 @ 8.0 bpw.</span>
+ ---
  # LewdPlay-8B
 
  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
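
As context for the "EXL2 @ 8.0 bpw" note in the diff above, EXL2 quants are typically run with the exllamav2 library. Below is a minimal loading-and-generation sketch, assuming exllamav2 is installed and the repo has been downloaded locally; the directory path, prompt, and sampler settings are placeholders, not part of this model card.

```python
# Minimal sketch: load an EXL2-quantized model with exllamav2 and generate text.
# Assumes `pip install exllamav2` and a locally downloaded model directory
# (the path below is a placeholder).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./Llama-3-LewdPlay-8B-exl2-8.0bpw"  # placeholder local path

# Read the model/quantization metadata from the directory.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

# Load the quantized weights, auto-splitting layers across available GPUs.
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Illustrative sampler settings; tune to taste.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9
settings.token_repetition_penalty = 1.05

# Generate up to 128 new tokens from a placeholder prompt.
output = generator.generate_simple("Once upon a time,", settings, 128)
print(output)
```

At 8.0 bits per weight, an 8B-parameter model's weights come to roughly 8 GB, so the quant should fit on a single 24 GB consumer GPU with room left for the cache.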