
MiquMaid v3

Check out our blogpost about this model series Here! - Join our Discord server Here!

This model uses the Alpaca prompting format

This model was trained for RP conversation on Miqu-70B with our magic sauce. We then made an enormous merge combining all our old MiquMaid iterations and some other RP Miqu-based models, using the new Model Stock merging method.

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of MiquMaid-v3-70B.

Switch: FP16 - GGUF
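For reference, here is a minimal loading sketch using Hugging Face Transformers. The repo id "NeverSleep/MiquMaid-v3-70B" is an assumption for illustration; point it at the actual FP16 repo. Keep in mind a 70B model in FP16 needs on the order of 140 GB of memory, so multi-GPU sharding or a quantized (e.g. GGUF) build is usually the practical route.

```python
# Minimal sketch: load the FP16 weights with Hugging Face Transformers.
# "NeverSleep/MiquMaid-v3-70B" is an assumed repo id; replace it with the actual one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeverSleep/MiquMaid-v3-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights, as shipped in this repo
    device_map="auto",          # shard across available GPUs / offload as needed
)
```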

Training data used:

Models used

Custom format:

### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}

The Mistral [INST][/INST] prompt format should work too.
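As an illustration, the sketch below assembles the custom format above and generates a reply. It assumes the `tokenizer` and `model` objects from the loading example earlier; the system prompt, user message, and sampling settings are illustrative, not prescriptive.

```python
# Build a prompt in the custom Alpaca-style format shown above and generate a reply.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"{user_input}\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are Maid, a playful roleplay character.",  # illustrative system prompt
    "Hi! What are you up to today?",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # illustrative sampling settings
    top_p=0.95,
)
# Strip the prompt tokens so only the model's reply is printed.
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```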

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek
