# Insanity

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
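For context, mergekit produces a merge like this by consuming a YAML configuration. The snippet below is a minimal sketch of mergekit's Python entry point for a single-document config; the file paths are placeholder values. The multi-document configuration further down defines staged merges, so reproducing it would roughly mean running each document in turn, saving the named intermediate outputs locally, and referencing them from the later stages.

```python
# Minimal sketch: run a single mergekit YAML config from Python.
# Assumes `pip install mergekit`; CONFIG_YML and OUTPUT_PATH are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./merge-config.yml"  # one YAML document, e.g. the final della_linear step
OUTPUT_PATH = "./merged-model"     # directory where the merged weights are written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is present
        copy_tokenizer=True,             # copy the tokenizer into the output
    ),
)
```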

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1 as the base. The final merge combines three intermediate merges, named uncen, conv, and chatml in the configuration below, which were themselves produced with the dare_ties, della, and breadcrumbs_ties methods respectively.

### Configuration

The following YAML configurations were used to produce this model. The first three documents define the intermediate merges; the final document combines them, together with v000000/NM-12B-Lyris-dev-3, on top of the base model:

```yaml
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
  - model: Fizzarolli/MN-12b-Sunrose
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
merge_method: dare_ties
base_model: natong19/Mistral-Nemo-Instruct-2407-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
name: uncen

---
models:
  - model: unsloth/Mistral-Nemo-Instruct-2407
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: nbeerbower/mistral-nemo-cc-12B
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
merge_method: della
base_model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
  epsilon: 0.04
  lambda: 1.05
  normalize: false
  int8_mask: true
dtype: bfloat16
name: conv
---
models:
  - model: unsloth/Mistral-Nemo-Base-2407
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.208, 0.139, 0.139]
  - model: shuttleai/shuttle-2.5-mini
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.139, 0.208, 0.208, 0.139]
merge_method: breadcrumbs_ties
base_model: unsloth/Mistral-Nemo-Base-2407
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
name: chatml
---
models:
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: chatml
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: uncen
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
  - model: conv
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: v000000/NM-12B-Lyris-dev-3
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
merge_method: della_linear
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
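The merged model can be loaded like any other Mistral-Nemo-based checkpoint. Below is a minimal sketch using Hugging Face transformers, assuming the repo id Nohobby/Insanity and the bfloat16 dtype declared in the config; the prompt is a placeholder.

```python
# Minimal sketch: load and sample from the merged model with transformers.
# Assumes the repo id Nohobby/Insanity; dtype matches the config's bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nohobby/Insanity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",           # requires the accelerate package
)

prompt = "Write a short scene-setting paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```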