# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 as the base model.
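In task arithmetic, each fine-tuned model contributes a "task vector" (its weight delta from the base), and the weighted task vectors are summed back onto the base. Below is a minimal sketch of the idea on a single weight tensor; it illustrates the method in general, not mergekit's actual implementation:

```python
# A minimal sketch of task arithmetic on one weight tensor.
# Not mergekit's implementation; an illustration of the method.
import torch

def task_arithmetic(base: torch.Tensor,
                    finetunes: list[torch.Tensor],
                    weights: list[float]) -> torch.Tensor:
    """Add each fine-tune's weighted task vector (delta from base) onto the base."""
    merged = base.clone()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # task vector, scaled by its merge weight
    return merged
```

The `weight` values in the configuration below (1, 1, 0.3, and 0.7) play the role of these per-model scaling factors.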

### Models Merged

The following models were included in the merge:

* hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
* CultriX/Llama3-8B-DPO

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
        parameters:
          weight: 1
        layer_range: [0, 32]
      - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
        parameters:
          weight: 1
        layer_range: [0, 32]
      - model: CultriX/Llama3-8B-DPO
        parameters:
          weight: 0.3
        layer_range: [0, 32]
      - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
        parameters:
          weight: 0.7
        layer_range: [0, 32]
merge_method: task_arithmetic
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
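To reproduce the merge, this configuration can be saved to a file and passed to the mergekit CLI. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yaml`; the output directory name is an arbitrary example:

```python
# Invoke the mergekit CLI on the configuration above.
# Assumes mergekit is installed and the YAML is saved as config.yaml.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model"],  # output dir is an example
    check=True,
)
```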
### GGUF

Model size: 8.03B params. Architecture: llama. Quantized versions are available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit variants.
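The GGUF files can be run with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python; the model filename and the prompt are assumptions, so substitute whichever quantization you download:

```python
# Load and query one of the GGUF quantizations with llama-cpp-python
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3-8B-function-calling-uncensored.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,       # Llama 3's 8K context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Reply with a JSON object describing yourself."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```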
