---
license: apache-2.0
language:
- en
base_model:
- alpindale/WizardLM-2-8x22B
pipeline_tag: text-generation
library_name: transformers
tags:
- chat
- creative
- writing
- roleplay
---

AWQ Quant of [gghfez/WizardLM-2-22b-RP](https://huggingface.co/gghfez/WizardLM-2-22b-RP)

# gghfez/WizardLM-2-22b-RP

<img src="https://files.catbox.moe/acl4ld.png" width="400"/>

⚠️ **IMPORTANT: Experimental Model - Not Recommended for Production Use**
- This is an experimental model created through bespoke, unorthodox merging techniques
- The safety alignment and guardrails from the original WizardLM2 model may be compromised
- This model is intended for creative writing and roleplay purposes ONLY
- Use at your own risk and with appropriate content filtering in place

This model is an experimental derivative of WizardLM2-8x22B, created by extracting the individual experts from the original mixture-of-experts (MoE) model, renaming their MLP modules to match the dense Mistral architecture, and merging them into a single dense model using mergekit's linear merge method.
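
To illustrate the approach, here is a rough Python sketch of the extract-and-rename step and of a linear merge, written against the public Mixtral/Mistral `transformers` weight names. This is an assumed reconstruction, not the actual script used for this model.

```python
# Assumed sketch - not the actual conversion script used for this model.
# Mixtral stores each expert's MLP under:
#   model.layers.{i}.block_sparse_moe.experts.{e}.{w1,w2,w3}.weight
# while a dense Mistral layer expects:
#   model.layers.{i}.mlp.{gate_proj,down_proj,up_proj}.weight
import re
import torch

# Mixtral expert weight names -> dense Mistral MLP names
RENAME = {"w1": "gate_proj", "w2": "down_proj", "w3": "up_proj"}

def extract_expert(state_dict: dict, expert: int) -> dict:
    """Pull one expert out of a Mixtral state dict as a dense Mistral state dict."""
    pattern = re.compile(
        rf"model\.layers\.(\d+)\.block_sparse_moe\.experts\.{expert}\.(w[123])\.weight"
    )
    dense = {}
    for key, tensor in state_dict.items():
        match = pattern.fullmatch(key)
        if match:
            layer, name = match.groups()
            dense[f"model.layers.{layer}.mlp.{RENAME[name]}.weight"] = tensor
        elif "block_sparse_moe" not in key:
            # Attention, norms, and embeddings carry over unchanged;
            # the MoE router (block_sparse_moe.gate) is dropped.
            dense[key] = tensor
    return dense

def linear_merge(state_dicts: list, weights=None) -> dict:
    """Weighted average of parameters (what mergekit's `linear` method computes)."""
    weights = weights or [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        key: sum(w * sd[key].to(torch.float32) for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }
```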

The resulting model initially produced gibberish, but after fine-tuning on synthetic data generated by the original WizardLM2-8x22B, it regained the ability to generate relatively coherent text. However, the model exhibits confusion about world knowledge and mixes up the names of well-known people.

Despite efforts to train the model on factual data, the confusion persisted, so I trained it for creative tasks instead.

As a result, this model is not recommended for use as a general assistant or for tasks that require accurate real-world knowledge (don't bother running MMLU-Pro on it).

It does retrieve details from the provided context very accurately, but I still can't recommend it for anything other than creative tasks.

## Prompt format
Mistral-v1 plus the system tags from Mistral-V7:
```
[SYSTEM_PROMPT] {system}[/SYSTEM_PROMPT] [INST] {prompt}[/INST]
```
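
A minimal usage sketch with `transformers` is shown below. The repository id is an assumption (substitute this AWQ quant's actual id), and loading AWQ checkpoints this way requires `autoawq` to be installed.

```python
# Assumed usage sketch; the model id below is a guess at this repo's name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gghfez/WizardLM-2-22b-RP-AWQ"  # hypothetical id for this AWQ quant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",
)

system = "You are a creative co-writer."
prompt = "Continue the scene: the lighthouse keeper hears a knock at the door."
text = f"[SYSTEM_PROMPT] {system}[/SYSTEM_PROMPT] [INST] {prompt}[/INST]"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```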

**NOTE:** This model is based on WizardLM2-8x22B, which is a finetune of Mixtral-8x22B - not to be confused with the more recent Mistral-Small-22B model.
As such, it uses the same vocabulary and tokenizer as Mixtral-v0.1 and inherits the Apache-2.0 license.
I expanded the vocab to include the system-prompt and instruction tags before training (including the embedding and LM-head layers).
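
The expansion step presumably looked something like the standard `transformers` recipe below; this is an assumption (and the path to the merged dense model is hypothetical), not the author's actual code.

```python
# Assumed sketch of the vocab-expansion step described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

merged = "path/to/merged-dense-22b"  # hypothetical local path to the merged model
tokenizer = AutoTokenizer.from_pretrained(merged)
model = AutoModelForCausalLM.from_pretrained(merged)

# Register the system-prompt and instruction tags as special tokens...
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens":
     ["[SYSTEM_PROMPT]", "[/SYSTEM_PROMPT]", "[INST]", "[/INST]"]}
)

# ...and grow the input embeddings and LM head to match the new vocab size.
if num_added:
    model.resize_token_embeddings(len(tokenizer))
```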

## Examples

### Strength: Information Extraction from Context
[example 1]

### Weakness: Basic Factual Knowledge
[example 2]