Phi-SoSerious-Mini-V1

Let's put a smile on that face!

This is a finetune of https://huggingface.co/microsoft/Phi-3-mini-4k-instruct, trained on a variant of the Kobble Dataset. Training took under 4 hours on a single NVIDIA RTX 3090 GPU using QLoRA (LR 1.2e-4, rank 16, alpha 16, batch size 3, gradient accumulation 3, 2048-token context).
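
The exact training script is not published; the sketch below shows a comparable QLoRA setup with the hyperparameters listed above, using the transformers/peft/bitsandbytes stack. The dropout value and target modules are assumptions, and dataset loading and the trainer itself are omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "microsoft/Phi-3-mini-4k-instruct"

# Load the base model in 4-bit for QLoRA training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA adapter: rank 16, alpha 16 (as listed above);
# dropout and target modules are assumptions, not stated in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# A trainer (e.g. trl's SFTTrainer) would then run with LR 1.2e-4,
# per-device batch size 3, gradient accumulation 3, and 2048-token sequences.
```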

You can obtain the GGUF quantization of this model here: https://huggingface.co/concedo/Phi-SoSerious-Mini-V1-GGUF

Dataset and Objectives

The Kobble Dataset is a semi-private aggregated dataset built from multiple online sources and web scrapes, augmented with some synthetic data. It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite. The objective of this model was to produce a version of Phi-3-mini suitable for storywriting, conversations, and instruction following, without an excessive tendency to refuse.

Dataset Categories:

  • Instruct: Single-turn instruct examples in the Alpaca format, with an emphasis on uncensored and unrestricted responses.
  • Chat: Two-participant roleplay conversation logs in the multi-turn raw chat format used by KoboldAI.
  • Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content.

Prompt template: Alpaca

### Instruction:
{prompt}

### Response:

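For reference, here is a minimal inference sketch using the GGUF quantization linked above with llama-cpp-python; the model filename, sampling settings, and example instruction are placeholders.

```python
from llama_cpp import Llama

# Model filename is a placeholder; use any quantization from the GGUF repo above.
llm = Llama(model_path="Phi-SoSerious-Mini-V1.Q4_K_M.gguf", n_ctx=2048)

# Build an Alpaca-style prompt exactly as shown in the template above.
prompt = (
    "### Instruction:\n"
    "Write a short scene set in a rain-soaked city at night.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```
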
Note: No assurances are provided about the origins, safety, or copyright status of this model, or of any content within the Kobble Dataset.
If you are in a country, or part of an organization, with strict AI laws or restrictions on unlabelled or unrestricted content, you are advised not to use this model.

Model size: 3.82B parameters (FP16, Safetensors).