WIP retouch of alpindale/magnum-72b-v1, but I won't use "Magnum" in the name. Call it FinalMix!

I found some issues and am trying to fix them for my own usage, while adding more RP data through merging.

You can make your own quantized files with the included imatrix.dat, which was generated from "wiki.train.raw".
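If you want a quant type that isn't provided, a rough sketch of the llama.cpp requantization step is below. The binary name, flags, and filenames are assumptions and may differ depending on your llama.cpp version (the tool was renamed from `quantize` to `llama-quantize` in newer builds).

```python
# Sketch: requantizing an f16 GGUF with the provided imatrix via llama.cpp.
# Assumes a local llama.cpp build; binary path and filenames are placeholders.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix.dat",        # importance matrix made from wiki.train.raw
        "MG-FinalMix-72B-F16.gguf",        # hypothetical source file
        "MG-FinalMix-72B-Q4_K_M.gguf",     # output file
        "Q4_K_M",                          # target quant type
    ],
    check=True,
)
```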

Credits to Alpin and the gang for magnum-72b-v1, and Ikari for his datasets.

Prompt template: ChatML

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
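For reference, here is a minimal sketch of loading one of the quants with llama-cpp-python and using its built-in ChatML chat format, which matches the template above. The GGUF filename and the messages are placeholders.

```python
# Minimal llama-cpp-python sketch; the GGUF filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="MG-FinalMix-72B-Q4_K_M.gguf",  # hypothetical local quant file
    n_ctx=8192,
    chat_format="chatml",                      # applies the ChatML template above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplay assistant."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```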
GGUF quants of a 72.7B-parameter qwen2-architecture model; 3-bit and 4-bit quantizations are available.