THIS IS THE FINAL MiniMaid-L Series: we've hit the ceiling for a 1B model! Thank you so much for your support!

  • If you loved our models, please consider donating and supporting us through Ko-fi!


MiniMaid-L3

  • Introducing the MiniMaid-L3 model! A new finetune of our MiniMaid-L2 architecture that uses knowledge distillation for even more coherent and immersive roleplay!

  • MiniMaid-L3 is a small update to L2. It uses knowledge distillation to combine our L2 architecture with MythoMax, a popular roleplaying model that was itself built by merging models. The result is a more capable model that outcompetes its predecessor in roleplaying scenarios and even beats MiniMaid-L2's BLEU score!
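The card doesn't publish the exact distillation recipe, so as a rough illustration only: standard knowledge distillation minimizes the KL divergence between the teacher's softened output distribution (here, MythoMax) and the student's (MiniMaid-L2). A minimal pure-Python sketch, with illustrative function names:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the core objective of knowledge distillation."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    kl = sum(t * math.log(t / s) for t, s in zip(teacher, student))
    # Scaled by T^2 so gradients stay comparable across temperatures
    return kl * temperature ** 2

# A student that exactly matches the teacher incurs zero loss:
assert distillation_loss([1.0, 2.0], [1.0, 2.0]) < 1e-9
```

In practice this loss is computed per token over the full vocabulary and usually mixed with the ordinary cross-entropy loss on the training data.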

MiniMaid-L1 Base-Model Card Procedure:

  • MiniMaid-L1 achieved good performance through DPO combined with heavy finetuning. To prevent overfitting, we used high LR decay and introduced randomization techniques to keep the model from simply memorizing its data. However, since training on Google Colab is difficult, the model might underperform or underfit on specific tasks, or overfit on knowledge it managed to latch onto! Please be assured that we did our best, and it will improve as we move onward!

  • MiniMaid-L3 is another instance of our smallest model yet! If you find any issue, please don't hesitate to email us at [email protected] about overfitting or improvements for the future model V4. Feel free to modify the LoRA to your liking, but please consider linking this page for credit, and if you extend its dataset, handle it with care and ethical consideration.

  • MiniMaid-L3 is:

    • Developed by: N-Bot-Int
    • License: apache-2.0
    • Parent model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-1bit
    • Dataset combined using: NKDProtoc (proprietary software)
  • MiniMaid-L3 Official Metric Score

    • Metrics made by ItsMeDevRoland, which compare:

      • MiniMaid-L2 GGUF
      • MiniMaid-L3 GGUF

      All ranked with the same prompt, same temperature, and same hardware (Google Colab) to properly showcase the differences and strengths of the models.
    • Visit below to see details!
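The base-model procedure above mentions DPO. The card doesn't state the hyperparameters used, but as a hedged illustration, the standard DPO objective for a single preference pair can be sketched in pure Python (function and argument names are ours, not from the MiniMaid training code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.
    Inputs are summed log-probabilities of the chosen and rejected
    responses under the trained policy and a frozen reference model."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Loss falls as the policy prefers the chosen response more
    # strongly than the reference model does
    return -math.log(sigmoid(chosen_reward - rejected_reward))

# Policy favors the chosen response relative to the reference,
# so the loss drops below the neutral value of ln(2):
loss = dpo_loss(-10.0, -20.0, -12.0, -18.0)
```

Minimizing this over a preference dataset nudges the model toward responses humans preferred, without training a separate reward model.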


🧵 MiniMaid-L3: Slower Steps, Deeper Stories — The Immersive Upgrade

"She’s more grounded, more convincing — and when it comes to roleplay, she’s in a league of her own."


MiniMaid-L3 doesn’t just iterate — she elevates. Built on L2’s disciplined architecture, L3 doubles down on character immersion and emotional coherence, refining every line she delivers.

  • 💬 Roleplay Evaluation (v2)
  • 🧠 Character Consistency: 0.54 → 0.55 (+)
  • 🌊 Immersion: 0.59 → 0.66 (↑)
  • 🎭 Overall RP Score: 0.72 → 0.75

    L3’s immersive depth marks a new high in believability and emotional traction — she's not just playing a part, she becomes it.

📊 Slower, But Smarter

  • 🕒 Inference Time: 39.1s (↑ from 34.5s)
  • ⚡ Tokens/sec: 6.61 (slight dip)
  • 📏 BLEU/ROUGE-L: Mixed — slight BLEU gain, ROUGE-L softened

    Sure, she takes her time — but it’s worth it. L3 trades a few seconds for measured, thoughtful outputs that stick the landing.

🎯 Refined Roleplay, Recalibrated Goals

  • MiniMaid-L3 isn’t trying to be the fastest. She’s here to be real — holding character, deepening immersion, and generating stories that linger.
  • 🛠️ Designed For:
    • Narrative-focused deployments
    • Long-form interaction and memory retention
    • Low-size, high-fidelity simulation

“MiniMaid-L3 sacrifices a bit of speed to speak with soul. She’s no longer just reacting — she’s inhabiting. It’s not about talking faster — it’s about meaning more.”

MiniMaid-L3 is the slow burn that brings the fire.


  • Notice

    • For a good experience, please use:
      • temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
  • Detail card:

    • Parameter

      • 1 billion parameters
      • (Please check with your GPU vendor whether you can run 1B models)
    • Finetuning tool:

    • Unsloth AI

      • This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
    • Fine-tuned Using:

    • Google Colab
