---
license: cc-by-nc-4.0
---
## Exl2 version of [maywell/PiVoT-MoE](https://huggingface.co/maywell/PiVoT-MoE)  

## Branches
- `main` : 2.4bpw h8
- `3bh8` : 3bpw h8
- `4bh8` : 4bpw h8
- `6bh8` : 6bpw h8
- `8bh8` : 8bpw h8
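
To fetch a specific quantization, download the corresponding branch. A minimal sketch with `huggingface_hub` (the repo id below is a placeholder; substitute this repository's actual id):

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- replace with this repository's actual id.
snapshot_download(
    repo_id="<user>/PiVoT-MoE-exl2",
    revision="4bh8",                      # branch holding the 4bpw h8 weights
    local_dir="PiVoT-MoE-4bpw-h8-exl2",   # local target directory
)
```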

Using The Pile [0007.parquet](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/resolve/refs%2Fconvert%2Fparquet/default/train/0007.parquet) as the calibration dataset.
 
Quantization settings (the 8bpw pass produces `measurement.json`, which the lower-bpw conversions reuse via `-m`):

```sh
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp -cf PiVoT-MoE-8bpw-h8-exl2 -c 0007.parquet -l 8192 -b 8 -hb 8 -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp2 -cf PiVoT-MoE-6bpw-h8-exl2 -c 0007.parquet -l 8192 -b 6 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp3 -cf PiVoT-MoE-4bpw-h8-exl2 -c 0007.parquet -l 8192 -b 4 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp4 -cf PiVoT-MoE-3bpw-h8-exl2 -c 0007.parquet -l 8192 -b 3 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp5 -cf PiVoT-MoE-2.4bpw-h8-exl2 -c 0007.parquet -l 8192 -b 2.4 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
```
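
Once a branch is downloaded, the weights can be loaded with the ExLlamaV2 Python API. A minimal sketch, assuming the 4bpw files live in `PiVoT-MoE-4bpw-h8-exl2` (directory name, prompt, and sampling values are illustrative, not part of this repo):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the local directory containing the exl2 weights.
config = ExLlamaV2Config()
config.model_dir = "PiVoT-MoE-4bpw-h8-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)              # spread layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8               # illustrative sampling values
settings.top_p = 0.9

prompt = "You are a roleplaying assistant.\n### Instruction:\nIntroduce yourself.\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```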
### Below this line is the original README
# PiVoT-MoE
![img](./PiVoT-MoE.png)

## Model Description

PiVoT-MoE is an advanced AI model designed specifically for roleplaying. It was trained as a combination of four 10.7B-parameter experts, each with its own specialized characteristic, all fine-tuned to deliver a unique and diverse roleplaying experience.

The model uses the Mixture of Experts (MoE) technique, which lets the experts work together synergistically for a more cohesive and natural conversation flow. The MoE architecture also provides the flexibility and adaptability needed to handle a wide variety of roleplaying scenarios and characters.

Based on the PiVoT-10.7B-Mistral-v0.2-RP model, PiVoT-MoE takes it a step further by incorporating the MoE technique: the model not only has an expansive knowledge base, but can also mix and match its expertise to better suit a specific roleplaying scenario.

## Prompt Template - Alpaca (ChatML also works)
```
{system}
### Instruction:
{instruction}
### Response:
{response}
```
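
A minimal sketch of filling the template in Python (the system and instruction strings are placeholders):

```python
def build_prompt(system: str, instruction: str) -> str:
    # Assemble the Alpaca-style prompt above; the model completes after "### Response:".
    return f"{system}\n### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt(
    "You are a roleplaying assistant.",  # placeholder system message
    "Describe your character.",          # placeholder instruction
)
```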