Mag-Picaro-72B

Picaro is all grown up...


✨ Overview

A scaled-up version of Mag-Picaro, funded by PygmalionAI as an alternative to their Magnum Large option.

Fine-tuned on top of Qwen-2-Instruct, Mag-Picaro was then SLERP-merged at 50/50 weight with Magnum-V2. If you like the model, consider supporting me on Ko-fi: https://ko-fi.com/deltavector
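
For intuition, SLERP (spherical linear interpolation) blends two checkpoints along the arc between their weight tensors rather than along a straight line, which preserves tensor magnitude better than plain averaging. Below is a minimal sketch of the per-tensor operation a SLERP merge performs; the random tensors are hypothetical stand-ins for real checkpoint weights, not the actual mergekit implementation:

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    At t=0.5 this corresponds to the 50/50 blend used for the
    Magnum-V2 merge. Falls back to linear interpolation when the
    tensors are near-parallel and the spherical formula is unstable.
    """
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    cos_theta = torch.dot(a, b) / (a.norm() * b.norm())
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < 1e-6:  # near-parallel: lerp is numerically safer
        merged = (1 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1 - t) * theta) / sin_theta) * a \
               + (torch.sin(t * theta) / sin_theta) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)

# Hypothetical example with random stand-in tensors:
merged = slerp(torch.randn(4, 4), torch.randn(4, 4), t=0.5)
```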

📥 Quantized Models

💬 Prompt Format

Mag-Picaro uses the ChatML format. A typical conversation should be structured as:

<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
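
Most inference stacks can apply this template automatically. A minimal sketch using the transformers chat-template API, assuming the tokenizer shipped with Delta-Vector/Mag-Picaro-72B carries the ChatML template shown above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Mag-Picaro-72B")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```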

Recommended System Prompt

View Euryale System Prompt

Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
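
Frontends like SillyTavern substitute the {{char}} and {{user}} macros automatically. When calling the model directly, they need to be filled in before the text is sent as the system message; a minimal sketch with hypothetical character and user names:

```python
# EURYALE_PROMPT holds the full system prompt text above, verbatim
# (truncated here for brevity).
EURYALE_PROMPT = "Currently, your role is {{char}}, described in detail below. ..."

def fill_placeholders(template: str, char: str, user: str) -> str:
    """Replace SillyTavern-style {{char}}/{{user}} macros with real names."""
    return template.replace("{{char}}", char).replace("{{user}}", user)

# Hypothetical names for illustration:
system_prompt = fill_placeholders(EURYALE_PROMPT, char="Picaro", user="Anon")

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Hi there!"},
]
```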

⚙️ Training

Configuration

View Axolotl Config

https://wandb.ai/new-eden/tavbussy/artifacts/axolotl-config/config-n68z3imh/v0/files/axolotl_config_qhe749gq.yml

Mergekit

View Mergekit Config

https://files.catbox.moe/gjaazp.yml

The model was trained for 4 epochs on 8x NVIDIA H200 GPUs, generously provided by @Tav.

⚠️ Credits

I'd like to thank Ruka/Sama twinkman | AliCat | LucyKnada | Kubernetes Bad | PocketDoc | Tav | Trappu | and the rest of Anthracite/Pygmalion for testing, feedback, and support.
