# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the della merge method, with win10/Mistral-Nemo-abliterated-Nemo-Pro-v2 as the base model.
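DELLA stochastically prunes each model's parameter deltas (the difference from the base model), dropping low-magnitude entries with higher probability and rescaling the survivors before the weighted deltas are summed and added back to the base; `epsilon` sets the spread of drop probabilities around `1 - density`, and `lambda` scales the merged deltas. A minimal numpy sketch of the pruning step, illustrative only and not mergekit's implementation:

```python
import numpy as np

def della_prune(delta: np.ndarray, density: float = 0.8,
                epsilon: float = 0.10, seed: int = 0) -> np.ndarray:
    """Sketch of DELLA-style magnitude-based stochastic pruning of a
    task vector (delta = fine-tuned weights - base weights)."""
    rng = np.random.default_rng(seed)
    flat = delta.ravel()
    n = flat.size
    base_p = 1.0 - density                        # mean drop probability
    # Rank 0 = smallest |delta|; spread drop probabilities by +/- epsilon/2
    # so that high-magnitude deltas are the most likely to survive.
    ranks = np.argsort(np.argsort(np.abs(flat)))
    p_drop = base_p + epsilon / 2.0 - epsilon * ranks / max(n - 1, 1)
    p_drop = np.clip(p_drop, 0.0, 1.0 - 1e-6)
    keep = rng.random(n) > p_drop
    # Rescale survivors so each entry keeps its expected value.
    pruned = np.where(keep, flat / (1.0 - p_drop), 0.0)
    return pruned.reshape(delta.shape)
```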

### Models Merged

The following models were included in the merge:

- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
- elinas/Chronos-Gold-12B-1.0
- Gryphe/Pantheon-RP-1.5-12b-Nemo
- Aleteian/SaigaPersonalityParty
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS
    parameters:
      density: 0.8
      weight: 0.8
      
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
    parameters:
      density: 0.8
      weight: 0.8
      
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      density: 0.8
      weight: 0.8

  - model: Gryphe/Pantheon-RP-1.5-12b-Nemo
    parameters:
      density: 0.8
      weight: 0.8

  - model: Aleteian/SaigaPersonalityParty
    parameters:
      density: 0.8
      weight: 1.0
      
  - model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
    parameters:
      density: 0.8
      weight: 1.0
      
merge_method: della
base_model: win10/Mistral-Nemo-abliterated-Nemo-Pro-v2

parameters:
  epsilon: 0.10
  lambda: 1.00
  int8_mask: true
  
dtype: float16
chat_template: "chatml"

# Regularization
regularization:
  - method: gradient_penalty
    scale: 0.05  # Increased influence for gradient control
  - method: weight_clipping
    clip_range: [-0.2, 0.2]  # Broader clipping range for flexibility
  - method: random_noise
    scale: 0.01  # Stronger noise injection
  - method: attention_dropout
    scale: 0.1  # Higher dropout to reduce attention fixation

# Postprocessing
postprocessing:
  - operation: entropy_regularization
    scale: 0.05  # Stronger encouragement for diverse outputs
  - operation: non_linear_scaling
    parameters:
      function: tanh
  - operation: sharpening
    intensity: 0.5  # Enhanced sharpening for precise outputs
  - operation: gaussian_smoothing
    sigma: 1.5  # Increased smoothing for stable outputs
  - operation: normalize
  - operation: dynamic_scaling
    scale_range: [0.8, 1.2]  # Expanded dynamic range for scaling
  - operation: smoothing
    parameters:
      adaptive: true
      range: [0.85, 1.15]  # Wider adaptive smoothing range
      kernel_size: 5
```
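To reproduce the merge, the configuration above can be passed to mergekit (the `mergekit-yaml` CLI is the usual entry point). A minimal sketch using mergekit's Python API; the config and output paths are placeholders:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (path is a placeholder).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",                  # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),   # merge on GPU if one is available
        copy_tokenizer=True,              # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```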
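## Usage

Since the configuration sets `chat_template: "chatml"`, prompts should be formatted with ChatML. A minimal inference sketch with transformers; the repo id `Aleteian/dark` is taken from the page context and may differ:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aleteian/dark"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write a short scene set in a rainy city."},
]
# apply_chat_template renders the conversation with the ChatML template
# declared in the merge configuration.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```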