# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [MrRobotoAI/MrRoboto-BASE-v1-8b-64k](https://huggingface.co/MrRobotoAI/MrRoboto-BASE-v1-8b-64k) as the base.
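DARE (Drop And REscale) operates on each model's task vector, i.e. its elementwise difference from the base: it randomly drops a fraction of the delta entries, rescales the survivors to preserve the expected value, and the `dare_linear` variant then adds a weighted sum of the sparsified deltas back onto the base. Below is a minimal sketch of the idea for a single weight tensor; the `dare_linear_merge` helper and its `density` default are illustrative, not mergekit's actual implementation.

```python
import torch

def dare_linear_merge(base, finetuned, weights, density=0.9):
    """Merge one tensor: DARE-sparsify each task vector, then mix linearly.

    base      -- the tensor from the base model
    finetuned -- matching tensors from the fine-tuned models
    weights   -- per-model mixing weights (the YAML `weight` values)
    density   -- fraction of delta entries kept; the rest are dropped
    """
    merged = base.clone()
    for ft, w in zip(finetuned, weights):
        delta = ft - base                        # task vector
        keep = torch.rand_like(delta) < density  # random drop mask
        delta = torch.where(keep, delta / density,
                            torch.zeros_like(delta))  # rescale survivors
        merged += w * delta                      # weighted linear combination
    return merged
```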

### Models Merged

The following models were included in the merge:

* [MrRobotoAI/Llama-3-8B-Uncensored-0.2](https://huggingface.co/MrRobotoAI/Llama-3-8B-Uncensored-0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_linear
models:
  - model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k
    parameters:
      weight:
        - filter: v_proj
          value: [0.75, 0.75, 0.6, 0.4, 0.2, 0, 0.2, 0.4, 0.6, 0.75, 0.75]
        - filter: o_proj
          value: [0.75, 0.75, 0.6, 0.4, 0.2, 0, 0.2, 0.4, 0.6, 0.75, 0.75]
        - filter: up_proj
          value: [0.75, 0.75, 0.6, 0.4, 0.2, 0, 0.2, 0.4, 0.6, 0.75, 0.75]
        - filter: gate_proj
          value: [0.75, 0.75, 0.6, 0.4, 0.2, 0, 0.2, 0.4, 0.6, 0.75, 0.75]
        - filter: down_proj
          value: [0.75, 0.75, 0.6, 0.4, 0.2, 0, 0.2, 0.4, 0.6, 0.75, 0.75]
        - value: 1
  - model: MrRobotoAI/Llama-3-8B-Uncensored-0.2
    parameters:
      weight:
        - filter: v_proj
          value: [0.25, 0.25, 0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4, 0.25, 0.25]
        - filter: o_proj
          value: [0.25, 0.25, 0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4, 0.25, 0.25]
        - filter: up_proj
          value: [0.25, 0.25, 0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4, 0.25, 0.25]
        - filter: gate_proj
          value: [0.25, 0.25, 0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4, 0.25, 0.25]
        - filter: down_proj
          value: [0.25, 0.25, 0.4, 0.6, 0.8, 1, 0.8, 0.6, 0.4, 0.25, 0.25]
        - value: 0
base_model: MrRobotoAI/MrRoboto-BASE-v1-8b-64k
tokenizer_source: base
dtype: bfloat16
```
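Each `value` list above is a layer gradient: mergekit spreads the anchor values over the model's layer range and interpolates between them, so every one of Llama-3-8B's 32 transformer layers gets its own mixing weight. Note that the two models' anchors sum to 1.0 at every position, making the filtered projections a convex blend that leans on the base model at the outer layers and on Llama-3-8B-Uncensored-0.2 in the middle; the trailing unfiltered `- value: 1` / `- value: 0` entries take every other tensor straight from the base. Below is a small sketch of the interpolation, with an illustrative helper name and the assumption that anchors are spaced evenly across layers:

```python
import numpy as np

def gradient_weights(anchors, num_layers=32):
    """Expand an anchor list (a YAML `value` gradient) into per-layer weights
    by spacing the anchors evenly and linearly interpolating between them.
    Hypothetical helper mirroring mergekit's gradient expansion."""
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

# Per-layer weight applied to Llama-3-8B-Uncensored-0.2's v_proj tensors:
print(gradient_weights([0.25, 0.25, 0.4, 0.6, 0.8, 1,
                        0.8, 0.6, 0.4, 0.25, 0.25]).round(3))
```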