README.md CHANGED
@@ -1,11 +1,24 @@
@@ -217,6 +230,57 @@ And thank you again to a16z for their generous grant.
1 |
---
|
2 |
+
datasets:
|
3 |
+
- PygmalionAI/PIPPA
|
4 |
+
- Open-Orca/OpenOrca
|
5 |
+
- Norquinal/claude_multiround_chat_30k
|
6 |
+
- jondurbin/airoboros-gpt4-1.4.1
|
7 |
+
- databricks/databricks-dolly-15k
|
8 |
inference: false
|
9 |
+
language:
|
10 |
+
- en
|
11 |
license: llama2
|
12 |
model_creator: PygmalionAI
|
13 |
model_link: https://huggingface.co/PygmalionAI/mythalion-13b
|
14 |
model_name: Mythalion 13B
|
15 |
model_type: llama
|
16 |
+
pipeline_tag: text-generation
|
17 |
quantized_by: TheBloke
|
18 |
+
tags:
|
19 |
+
- text generation
|
20 |
+
- instruct
|
21 |
+
thumbnail: null
|
22 |
---
|
23 |
|
24 |
<!-- header start -->
|
 <!-- original-model-card start -->
 # Original model card: PygmalionAI's Mythalion 13B
 
-
+<h1 style="text-align: center">Mythalion 13B</h1>
+<h2 style="text-align: center">A merge of Pygmalion-2 13B and MythoMax 13B</h2>
+
+## Model Details
+
+The long-awaited release of our new models based on Llama-2 is finally here. This model was created in
+collaboration with [Gryphe](https://huggingface.co/Gryphe), a mixture of our [Pygmalion-2 13B](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
+and Gryphe's [Mythomax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
+
+Finer details of the merge are available in [our blogpost](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#mythalion-13b).
+According to our testers, this model seems to outperform MythoMax in RP/Chat. **Please make sure you follow the recommended
+generation settings for SillyTavern [here](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#sillytavern) for
+the best results!**
+
+This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
+
+
+## Prompting
+
+This model can be prompted using both the Alpaca and [Pygmalion formatting](https://huggingface.co/PygmalionAI/pygmalion-2-13b#prompting).
+
+**Alpaca formatting**:
+```
+### Instruction:
+<prompt>
+
+### Response:
+<leave a newline blank for model to respond>
+```
+
+**Pygmalion/Metharme formatting**:
+```
+<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
+{{persona}}
+
+You shall reply to the user while staying in character, and generate long responses.
+<|user|>Hello!<|model|>{model's response goes here}
+
+```
+
+
+The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
+
+The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
+The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to
+form a conversation history.
+
+## Limitations and biases
+
+The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
+
+As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
 
 <!-- original-model-card end -->
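The added card text says the `<|system|>`, `<|user|>` and `<|model|>` tokens can occur multiple times and be chained to form a conversation history. A minimal sketch of that chaining, assuming a simple `(role, text)` message list; the `build_metharme_prompt` helper is illustrative, not part of the model card:

```python
# The <|system|>, <|user|> and <|model|> role tokens come from the model card;
# this helper that concatenates them is a hypothetical illustration.
ROLE_TOKENS = {"system": "<|system|>", "user": "<|user|>", "model": "<|model|>"}

def build_metharme_prompt(messages):
    """Join (role, text) pairs into one prompt string, ending with
    <|model|> so the model generates the next reply."""
    parts = [ROLE_TOKENS[role] + text for role, text in messages]
    return "".join(parts) + ROLE_TOKENS["model"]

prompt = build_metharme_prompt([
    ("system", "Enter RP mode. Pretend to be {{char}} whose persona follows:\n"
               "{{persona}}\n\nYou shall reply to the user while staying in "
               "character, and generate long responses."),
    ("user", "Hello!"),
])
print(prompt.endswith("<|user|>Hello!<|model|>"))  # → True
```

Earlier turns (including previous `<|model|>` replies) can simply be appended to the message list to carry the conversation history forward, matching the chaining the card describes.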