---
license: llama2
language:
  - en
pipeline_tag: text-generation
inference: false
tags:
  - pytorch
  - storywriting
  - finetuned
  - not-for-all-audiences
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
model_type: llama
prompt_template: >
  Below is an instruction that describes a task. Write a response that
  appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:
---

Model Card for Psyfighter2-13B-vore

This model is a version of KoboldAI/LLaMA2-13B-Psyfighter2 fine-tuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, a conversational model in a chat, and an interactive choose-your-own-adventure text game.

The Adventure Mode is still a work in progress and will be added later.

This is the FP16-precision version of the model, intended for merging and fine-tuning. To use the model for inference, download the quantized version here instead: SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF
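For merging or further fine-tuning, the FP16 weights load with the standard transformers API. A minimal sketch (assumes the transformers and accelerate packages are installed; nothing here is specific to this model beyond the repository name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the FP16 weights of this model for merging or further fine-tuning.
model = AutoModelForCausalLM.from_pretrained(
    "SnakyMcSnekFace/Psyfighter2-13B-vore",
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; spreads layers across available devices
)
tokenizer = AutoTokenizer.from_pretrained("SnakyMcSnekFace/Psyfighter2-13B-vore")
```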

Model Details

Model Description

The model behaves similarly to KoboldAI/LLaMA2-13B-Psyfighter2, from which it was derived. Please see the README.md of the base model to learn more.

This model was fine-tuned on ~55 MiB of free-form text containing stories focused on the vore theme. As a result, it has a strong vorny bias.

How to Get Started with the Model

The model can be used with any AI chatbot or front-end designed to work with .gguf models. The model fits fully into 8 GB of VRAM, but can also run with degraded performance on smaller graphics cards.

Similarly to the base model, the less prompt the model receives, the more creative the output is. For example, when prompted with only 2-3 words, the writing assistant will generate an entire story.

In chat mode, if the conversation is not going where you would like it to go, edit the model's output and let it continue the generation. The model will also match the style of the conversation.
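Outside of a chat front-end, the GGUF file can also be driven programmatically. Below is a minimal sketch using the llama-cpp-python bindings (an illustration, not an officially supported setup; the file name matches the quantized release, and the prompt follows the Alpaca-style template from the metadata above):

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx matches the 4096-token training context.
llm = Llama(model_path="Psyfighter2-13B-vore.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about a snake.\n\n"
    "### Response:\n"
)

# A short prompt leaves the model room to be creative.
output = llm(prompt, max_tokens=512, temperature=0.8)
print(output["choices"][0]["text"])
```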

Koboldcpp Colab Notebook

The easiest way to try out the model is the Koboldcpp Colab Notebook. This method doesn't require you to have a powerful graphics card.

  • Open the notebook
  • Paste the model URL into the field: https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf
  • Start the notebook, wait for the Cloudflare tunnel URL to appear at the bottom, and click it
  • Use the model as a writing assistant
  • You can try an adventure from https://aetherroom.club/, but keep in mind that the model will not let you take your turn unless you stop it. Adventure mode is still a work in progress, but it's getting there.

Backyard AI

Another convenient way to use the model is the Backyard AI application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8 GB of VRAM to use the model comfortably.

Download directly from HuggingFace (beta)

In the left panel, click Manage Models, then select Hugging Face models. Paste https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF into the text field and press Fetch Models. Click the Download button next to the model format. Once the model is downloaded, you can select it in your character card or set it as the default model.

Download manually

Download the Psyfighter2-13B-vore.Q4_K_M.gguf file into the %appdata%\faraday\models folder on your computer. The model should appear in the Manage Models menu under Downloaded Models. You can then select it in your character card or set it as the default model.

Model updates

  • 04/13/2024 - uploaded the first version of the model
  • 05/25/2024 - updated training process, making the model more coherent and improving the writing quality

Bias, Risks, and Limitations

By design, this model has a strong vorny bias. It's not intended for use by anyone below 18 years old.

Training Details

This model was fine-tuned on free-form text comprised of stories focused on the vore theme, using the rank-stabilized QLoRA method. The resulting adapter was merged into the FP16-precision base model, and the quantized version of the model was prepared using llama.cpp.
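The merge step can be reproduced with the peft library; a minimal sketch (the adapter path is hypothetical, not a published artifact):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the FP16 base model and apply the trained QLoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/LLaMA2-13B-Psyfighter2", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "path/to/qlora-adapter").merge_and_unload()

# Save the merged FP16 weights; llama.cpp's conversion and quantization
# tools can then turn this folder into a .gguf file.
merged.save_pretrained("Psyfighter2-13B-vore")
```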

Training Procedure

The model was fine-tuned with a rank-stabilized QLoRA adapter on an NVIDIA GeForce RTX 4060 Ti over the span of ~24 hours. Training was performed using the Unsloth AI library on Ubuntu 22.04.4 LTS with CUDA 12.1 and PyTorch 2.3.0.

LoRA adapter configuration

  • Rank: 128
  • Alpha: 16
  • Dropout rate: 0.1
  • Target weights: ["q_proj", "k_proj", "o_proj", "gate_proj", "up_proj"]
  • Rank-stabilized LoRA: use_rslora=True
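Expressed as code, the configuration above corresponds roughly to the following Unsloth call (a sketch reconstructed from the listed values, not the author's exact training script):

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit precision for QLoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="KoboldAI/LLaMA2-13B-Psyfighter2",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach a rank-stabilized LoRA adapter with the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "o_proj", "gate_proj", "up_proj"],
    use_rslora=True,
)
```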

Training parameters

  • Max. sequence length: 4096 tokens
  • Samples per epoch: 3783
  • Number of epochs: 2
  • Learning rate: 1e-4
  • Warmup: 64 steps
  • LR Schedule: linear
  • Batch size: 1
  • Gradient accumulation steps: 1
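In Hugging Face TrainingArguments terms, these parameters map roughly to the following (a sketch; the dataset and trainer wiring are omitted, and the output directory is a hypothetical name):

```python
from transformers import TrainingArguments

# Training hyperparameters as listed above.
args = TrainingArguments(
    output_dir="psyfighter2-13b-vore-qlora",  # hypothetical output directory
    num_train_epochs=2,
    learning_rate=1e-4,
    warmup_steps=64,
    lr_scheduler_type="linear",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
)
```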

Preprocessing

The stories in the dataset were pre-processed as follows:

  • titles, foreword, tags, and anything not comprising the text of the story was removed
  • non-ASCII characters and character sequences serving as chapter separators were removed
  • any story mentioning underage personas in any context was removed from the dataset
  • names of private characters were replaced with randomized names across the dataset
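As an illustration of the cleanup steps, the separator and non-ASCII removal could look like the following (a hypothetical sketch; the actual preprocessing scripts are not published):

```python
import re

def clean_story(text: str) -> str:
    """Remove non-ASCII characters and common chapter-separator sequences."""
    # Drop everything outside the ASCII range.
    text = text.encode("ascii", errors="ignore").decode("ascii")
    # Drop lines consisting only of separator characters (e.g. "***", "---").
    text = re.sub(r"(?m)^[\*\-=_~#]{3,}\s*$", "", text)
    # Collapse runs of blank lines left behind by the removals.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```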

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: NVIDIA GeForce RTX 4060 Ti
  • Hours used: 24
  • Cloud Provider: N/A
  • Compute Region: US-East
  • Carbon Emitted: 0.83 kg CO2 eq.
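For reference, the reported figure is consistent with a simple power-draw estimate (a back-of-the-envelope sketch; the TDP and grid-intensity values below are assumptions, not measurements):

```python
# Rough sanity check of the reported carbon figure.
tdp_kw = 0.160          # approximate RTX 4060 Ti board power, in kW (assumption)
hours = 24              # training time from this card
grid_kg_per_kwh = 0.22  # approximate US-East grid carbon intensity (assumption)

energy_kwh = tdp_kw * hours              # ~3.8 kWh
emissions_kg = energy_kwh * grid_kg_per_kwh
print(f"{emissions_kg:.2f} kg CO2 eq.")  # ~0.84 kg, close to the reported 0.83
```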