Update README.md
README.md
CHANGED
@@ -15,7 +15,16 @@ quantized_by: Suparious
 - Model creator: [flammenai](https://huggingface.co/flammenai)
 - Original model: [flammen22-mistral-7B](https://huggingface.co/flammenai/flammen22-mistral-7B)

+![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png)

+## Model Summary
+
+A Mistral 7B LLM built from merging pretrained models and finetuning on [Doctor-Shotgun/theory-of-mind-dpo](https://huggingface.co/datasets/Doctor-Shotgun/theory-of-mind-dpo).
+Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
+
+Finetuned using an A100 on Google Colab.
+
+[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)

 ## How to use

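The summary added in this commit credits a DPO finetune on [Doctor-Shotgun/theory-of-mind-dpo](https://huggingface.co/datasets/Doctor-Shotgun/theory-of-mind-dpo) and points to Maxime Labonne's DPO walkthrough. As a rough illustration of that step (not the authors' actual script), here is a minimal sketch using `trl`'s `DPOTrainer`; the pre-DPO base checkpoint, the hyperparameters, and the dataset's `prompt`/`chosen`/`rejected` column names are all assumptions, and the tokenizer argument name differs across `trl` releases.

```python
# Minimal DPO finetuning sketch -- assumed setup, not the authors' exact recipe.
# pip install transformers datasets trl accelerate
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical starting point: the pre-DPO merged model is not named in this diff.
base = "path/to/merged-mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Preference pairs; assumed to expose prompt / chosen / rejected columns.
train_dataset = load_dataset("Doctor-Shotgun/theory-of-mind-dpo", split="train")

config = DPOConfig(
    output_dir="flammen-dpo",
    beta=0.1,                       # assumed strength of the KL penalty
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-6,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                 # trl builds the frozen reference model when None
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # older trl versions name this argument `tokenizer`
)
trainer.train()
```

On a single A100, as mentioned in the summary, a run like this typically leans on gradient accumulation and bf16 to fit the 7B policy plus the frozen reference model in memory.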
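The body of the `## How to use` section falls outside this hunk, so the repository's own loading instructions are not shown here. As a generic, hedged sketch only, the original (unquantized) [flammen22-mistral-7B](https://huggingface.co/flammenai/flammen22-mistral-7B) can be loaded with plain `transformers`; whether the tokenizer ships a chat template, and how the quantized weights in this repo are meant to be loaded, are not established by this diff.

```python
# Generic loading/generation sketch for the original model (assumed usage;
# the quantized repo's actual "How to use" instructions are outside this hunk).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/flammen22-mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes the tokenizer provides a chat template (common for Mistral-based merges).
messages = [{"role": "user", "content": "Introduce yourself in character."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```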