|
---
license: cc-by-4.0
---
|
|
|
# #llama-3 #roleplay |
|
|
|
> [!IMPORTANT]
> Version 2 files uploaded!
|
|
|
GGUF-IQ-Imatrix quants for [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3). |
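
If you prefer testing a quant directly from Python rather than through a frontend, below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the repository ID and GGUF filename are placeholders, so substitute the actual quant file you want from this repo.

```python
# Minimal sketch: fetch one of the GGUF quants and load it with llama-cpp-python.
# The repo_id and filename below are placeholders; replace them with this
# repository's ID and the specific quant file (e.g. an IQ/Imatrix GGUF) you want.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="your-quant-repo/L3-TheSpice-8b-v0.8.3-GGUF-IQ-Imatrix",  # placeholder
    filename="L3-TheSpice-8b-v0.8.3-Q4_K_M-imat.gguf",                # placeholder
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # Llama 3 models use an 8k native context
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU only
)
```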
|
|
|
> [!IMPORTANT]
> These quants were made after the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) landed. <br>
> Use **KoboldCpp version 1.64** or higher.
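
If you run one of these quants in KoboldCpp, you can also query it from a script. Here is a rough sketch assuming KoboldCpp is serving its KoboldAI-compatible API on the default local port 5001; the endpoint, port, and payload fields are assumptions about KoboldCpp defaults rather than anything stated in this card, so adjust them to your setup.

```python
# Sketch: query a locally running KoboldCpp instance (v1.64 or newer) through
# its KoboldAI-compatible HTTP API. The port and field names assume KoboldCpp
# defaults; adjust to match your launch settings.
import requests

payload = {
    "prompt": "Narrate the scene inside the tavern. What do I see?",
    "max_length": 200,  # number of tokens to generate
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```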
|
|
|
> [!NOTE]
> **Prompt formatting...** <br>
> The prompt format is relatively simple; the author seems to recommend the **Default** context preset and **Instruct Mode - Disabled**. <br>
> I recommend reading the original [**model card information**](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3#prompt-format-chat--the-default-ooba-template-and-silly-tavern-template-).
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/VNpZl0O7dpwWLK8i5RG5d.png) |
|
|
|
# Original model information by the author: |
|
|
|
Now not overtrained and with the tokenizer fix to base llama3. Trained for 3 epochs. |
|
|
|
The latest TheSpice, dipped in Mama Liz's LimaRP Oil. |
|
I've focused on making the model more flexible and on providing a more unique experience.

I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach.

This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.
|
|
|
|
|
## Datasets Used |
|
|
|
* Capybara |
|
* Claude Multiround 30k |
|
* Augmental |
|
* ToxicQA |
|
* Yahoo Answers |
|
* Airoboros 3.1 |
|
* LimaRP |
|
|
|
## Features ( Examples from 0.1.1 because I'm too lazy to take new screenshots. It's tested tho. )
|
|
|
Narration |
|
|
|
If you request information on objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.
|
|
|
# You can look at almost anything, as long as you end your message with "What do I see?"
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/VREY8QHtH6fCL0fCp8AAC.png) |
|
|
|
# You can also request to know what a character is thinking or planning. |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/U3RTAgbaB2m1ygfZGJ-SM.png) |
|
|
|
# You can ask for a quick summary of the character as well.
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/uXFd6GhnXS8w_egUEfcAp.png) |
|
|
|
# Before continuing the conversation as normal. |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/dYTQUdCshUDtp_BJ20tHy.png) |
|
|
|
## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template ) |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/59vi4VWP2d0bCbsW2eU8h.png) |
|
|
|
If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this.
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/mB3wZqtwN8B45nR7W1fgR.png) |
|
|
|
```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
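
As a rough illustration, here is how that template can be assembled programmatically; the speaker names, system prompt, and message content are placeholders of my own rather than anything the author prescribes.

```python
# Sketch: build a prompt string in the chat format shown above.
# "Username", "BotName", and the example messages are placeholders.
system_prompt = "BotName is a friendly tavern keeper in a fantasy setting."
turns = [
    ("Username", "I walk into the tavern and glance around. What do I see?"),
]

prompt = system_prompt + "\n\n"
for speaker, message in turns:
    prompt += f"{speaker}: {message}\n"
prompt += "BotName:"  # leave the bot's turn open for the model to complete

print(prompt)
```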
|
## Presets |
|
|
|
All screenshots above were taken with the SillyTavern preset below.
|
## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05) |
|
Below is a roughly equivalent Kobold Horde preset.
|
## Recommended Kobold Horde Preset -> MinP |
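
If you are driving the model from code instead of SillyTavern, the preset above maps roughly onto common sampler parameters. Below is a sketch using `llama-cpp-python`, reusing the `llm` and `prompt` variables from the earlier sketches; the mapping is my own approximation of the preset, not an official conversion, and `min_p` needs a reasonably recent `llama-cpp-python` release.

```python
# Approximate mapping of the SillyTavern preset above onto llama-cpp-python
# sampling parameters. `llm` and `prompt` come from the earlier sketches.
sampler_settings = {
    "temperature": 1.25,     # Temp
    "min_p": 0.1,            # MinP
    "repeat_penalty": 1.05,  # RepPen
}

output = llm.create_completion(
    prompt,
    max_tokens=250,
    stop=["Username:"],  # stop before the model writes the user's next turn
    **sampler_settings,
)
print(output["choices"][0]["text"])
```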
|
|
|
|
|
# Disclaimer |
|
|
|
Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks! |