Update README.md
README.md
CHANGED
@@ -7,9 +7,7 @@ datasets:
 - OpenLeecher/Teatime
 - PygmalionAI/PIPPA
 ---
-This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on
-
-Do not take this model very seriously, it is probably not very good. I haven't a clue of what I'm doing. I just thought it was a fun thing to make.
+This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on 5800 samples from the datasets:
 
 ## Prompt Format
 
 The model was finetuned with a prompt format similar to the original SuperHOT prototype:
@@ -21,6 +19,9 @@ characters:
 summary: [scenario]
 ---
 <chat_history>
+Format:
+[Name]: [Message]
+Human: [Message]
 ```
 
 ## Use in Text Generation Web UI
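The SuperHOT-style prompt layout shown in the diff can be assembled programmatically before sending text to the model. A minimal sketch in Python, assuming a hypothetical `build_prompt` helper and made-up character, scenario, and message values (none of these names come from the model card itself):

```python
def build_prompt(characters, scenario, history):
    """Assemble a prompt in the SuperHOT-style layout from the card:
    a 'characters:' block, a 'summary:' line, a '---' separator,
    then chat history as '[Name]: [Message]' lines.
    The exact field layout is an assumption based on the card's example."""
    lines = ["characters:"]
    for name, description in characters.items():
        lines.append(f"{name}: {description}")
    lines.append(f"summary: {scenario}")
    lines.append("---")
    for speaker, message in history:
        lines.append(f"{speaker}: {message}")
    return "\n".join(lines)

# Hypothetical example values, for illustration only.
prompt = build_prompt(
    {"Aria": "a sarcastic android"},
    "Aria and Human chat in a cafe.",
    [("Human", "Hello!")],
)
```

The resulting string can then be passed as the input text to whatever generation frontend is used (e.g. the Text Generation Web UI mentioned above), with the model expected to continue the chat history after the final `Human:` turn.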