Text Generation
Transformers
PyTorch
mistral
Not-For-All-Audiences
nsfw
text-generation-inference
Inference Endpoints
Update README.md
README.md
CHANGED
@@ -4,6 +4,8 @@ datasets:
 - jondurbin/airoboros-gpt4-1.4
 - Squish42/bluemoon-fandom-1-1-rp-cleaned
 - totally-not-an-llm/EverythingLM-data-V2-sharegpt
+- OpenLeecher/Teatime
+- PygmalionAI/PIPPA
 ---
 This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on my [claude_multiround_chat_1k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset.