---
base_model:
  - nothingiisreal/MN-12B-Celeste-V1.9
  - Sao10K/MN-12B-Lyra-v1
library_name: transformers
tags:
  - mergekit
  - merge
---

# Mistral Nemo 12B Starsong

This is a merge of pre-trained language models created using mergekit. Just messing around, and I thought this one turned out distinct enough? YMMV on that, but I kinda liked it. It's definitely better for SFW than for NSFW, though. It also seems more stable with Mistral formatting, which is odd, since both merged models were trained on ChatML. I guess we chalk it up to another merging anomaly, heh.

Unlicensed because I got enlightened (lmao) and don't want to license machine output anymore.

Thanks to Sao for Lyra, by the way. I'm really liking the direction those experimental models are heading. Keep it up!

- Static GGUF (by Mradermacher)
- EXL2 (by kingbri of RoyalLab)
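
If you'd rather run the unquantized weights directly, here's a minimal inference sketch using the Mistral `[INST]` formatting mentioned above. The repo id is a placeholder (substitute the actual Hugging Face path), and it assumes `transformers`, `torch`, and `accelerate` are installed; sampling settings are just examples, not tuned recommendations.

```python
# Minimal inference sketch, not an official usage snippet. The repo id below
# is a placeholder; swap in the real Hugging Face path for this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/MN-12B-Starsong-v1"  # placeholder, not the real path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-style instruct formatting: [INST] ... [/INST]
prompt = "[INST] Write a short scene aboard a night train under the stars. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```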

## Merge Details

### Merge Method

This model was merged using the TIES merge method, with nothingiisreal/MN-12B-Celeste-V1.9 as the base.

### Merge Fodder

The following models were included in the merge:

- Sao10K/MN-12B-Lyra-v1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      density: 0.45
      weight: 0.5
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      density: 0.65
      weight: 0.5

merge_method: ties
base_model: nothingiisreal/MN-12B-Celeste-V1.9
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
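
If you want to reproduce the merge yourself, here's a small sketch that writes the config above to disk and calls mergekit's `mergekit-yaml` CLI from Python. It assumes mergekit is installed (`pip install mergekit`) and that you have the disk space and memory for two 12B models; the output directory name is arbitrary.

```python
# Reproduction sketch (assumes `pip install mergekit`): write the YAML config
# from this card to disk, then invoke the mergekit-yaml CLI on it.
# Drop --cuda to run the merge on CPU.
import subprocess
from pathlib import Path

CONFIG = """\
models:
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      density: 0.45
      weight: 0.5
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      density: 0.65
      weight: 0.5

merge_method: ties
base_model: nothingiisreal/MN-12B-Celeste-V1.9
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
"""

Path("starsong.yaml").write_text(CONFIG)
subprocess.run(
    ["mergekit-yaml", "starsong.yaml", "./MN-12B-Starsong-v1", "--cuda"],
    check=True,
)
```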