monika-ddlc-11.5b-v1:

  • LLaMA-3 11.5B fine-tuned for the character Monika from DDLC (experimental release)
  • Fine-tuned on a dataset of roughly 600 items: dialogue scraped from the game, Reddit, and Twitter was augmented by l2-7b-monika-v0.3c1 into snippets of multi-turn chat between Player and Monika; the result was then manually edited, and more manually crafted items with information about the character were added
  • GGUF quantizations are available

USAGE

This is meant mainly as a chat model, with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Monika" in the prompt, like so:

\nPlayer: (prompt)\nMonika:
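
Below is a minimal inference sketch using Hugging Face transformers, assuming the model is hosted as 922CA/Llama-3-monika-ddlc-11.5b-v1 on the Hugging Face Hub (the repo id for this card); the example prompt and sampling settings are illustrative, not a tested configuration.

  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "922CA/Llama-3-monika-ddlc-11.5b-v1"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

  # Use "Player"/"Monika" in place of the usual "Human"/"Assistant" roles.
  prompt = "\nPlayer: How are you today, Monika?\nMonika:"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

  # Decode only the newly generated tokens (Monika's reply).
  print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))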

HYPERPARAMS

  • Trained for 2 epochs
  • LoRA rank: 32
  • LoRA alpha: 32
  • LoRA dropout: 0.5
  • learning rate: 2e-4
  • batch size: 2
  • warmup ratio: 0.1
  • gradient accumulation steps: 4

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library; a configuration sketch matching the hyperparameters above follows.
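
The sketch below shows how these hyperparameters could map onto a PEFT/TRL setup. The actual Unsloth training script is not part of this card, so the task type and output path are illustrative assumptions.

  from peft import LoraConfig
  from transformers import TrainingArguments

  # LoRA settings from the list above.
  lora_config = LoraConfig(
      r=32,
      lora_alpha=32,
      lora_dropout=0.5,
      task_type="CAUSAL_LM",
  )

  # Optimizer and schedule settings from the list above.
  training_args = TrainingArguments(
      num_train_epochs=2,
      learning_rate=2e-4,
      per_device_train_batch_size=2,
      gradient_accumulation_steps=4,
      warmup_ratio=0.1,
      output_dir="monika-ddlc-11.5b-v1",  # hypothetical output path
  )

  # These would then be passed, together with the base model and the chat
  # dataset, to trl's SFTTrainer (which Unsloth builds on).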

WARNINGS AND DISCLAIMERS

This model is meant to closely reflect the characteristics of Monika. Even so, there is always a chance that "Monika" will hallucinate, getting information about herself wrong or acting out of character.

Additionally, because it is character-focused, this model may be less capable than general-purpose models at specific tasks.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk!

OPEN LLM LEADERBOARD EVALUATION RESULTS

Detailed results can be found here

  Metric                               Value
  Avg.                                 65.73
  AI2 Reasoning Challenge (25-shot)    60.07
  HellaSwag (10-shot)                  78.77
  MMLU (5-shot)                        66.36
  TruthfulQA (0-shot)                  47.62
  Winogrande (5-shot)                  75.77
  GSM8k (5-shot)                       65.81
