Norquinal committed
Commit 9330b96 · 1 Parent(s): b755893

Update README.md

Files changed (1): README.md (+4 -3)
README.md CHANGED
@@ -7,9 +7,7 @@ datasets:
 - OpenLeecher/Teatime
 - PygmalionAI/PIPPA
 ---
-This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on my [claude_multiround_chat_1k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset.
-
-Do not take this model very seriously, it is probably not very good. I haven't a clue of what I'm doing. I just thought it was a fun thing to make.
+This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on 5800 samples from the datasets:
 
 ## Prompt Format
 The model was finetuned with a prompt format similar to the original SuperHOT prototype:
@@ -21,6 +19,9 @@ characters:
 summary: [scenario]
 ---
 <chat_history>
+Format:
+[Name]: [Message]
+Human: [Message]
 ```
 
 ## Use in Text Generation Web UI
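The prompt template this commit documents (a character list, a scenario summary, a `---` separator, then alternating chat turns in `[Name]: [Message]` form) can be sketched as a small helper. This is a minimal illustration only; the function name and argument shapes are assumptions, not code from the repository:

```python
def build_prompt(characters: dict, scenario: str, chat_history: list) -> str:
    """Assemble a SuperHOT-style prompt as outlined in the README diff.

    `characters` maps a character name to its description;
    `chat_history` is a list of (speaker, message) tuples, where the
    human side uses the literal speaker name "Human".
    """
    lines = ["characters:"]
    # One "[Name]: [Description]" line per character.
    for name, description in characters.items():
        lines.append(f"{name}: {description}")
    lines.append(f"summary: {scenario}")
    lines.append("---")
    # Chat turns follow the "[Name]: [Message]" / "Human: [Message]" format.
    for speaker, message in chat_history:
        lines.append(f"{speaker}: {message}")
    return "\n".join(lines)
```

A usage sketch: `build_prompt({"Alice": "a sarcastic android"}, "Alice meets a traveler", [("Human", "Hello"), ("Alice", "Oh, it's you.")])` yields the template with the placeholders filled in, ready to pass to the model as a single string.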