about

After a lot of "lego merges" to experiment, let's start a basket merge series! The base is the third version of Smarteaz, SmarTracks, in which the R1 model is itself a merge of R1, R1 without Chinese censorship, and R1 Fallen Llama. That base has proven excellent at empowering any model thrown at it. Nemotron and Tulu complete the mix.

My 5 favorite L3.3 models (Negative Llama, EVA, Dobby, Fallen Llama of course, and Wayfarer) are included in submerges, starting with the well-endowed Pernicious Prophecy (which includes a bit of Sao10K's Euryale 2.2 through the 70Blivion model). Hermes and Tess are also included in submerges, in their abliterated versions. Hermes also comes in its Gutenberg Doppel version. Some abliterated or uncensored L3 models are also wrapped in, like Lumitron Abliterated (including some NeverSleep work) and Creative Llama.


benchs

Benchmarks are traded for creativity in this merge, so:

  • PPL Wikitext Eng 512: 3.54 (good)
  • ARC-C: 59.20 (good)
  • ARC-E: 80.70 (good also)
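For reference, perplexity is just the exponential of the mean per-token negative log-likelihood; the real Wikitext measurement runs the model over 512-token windows, but the arithmetic reduces to this minimal sketch (the NLL values below are hypothetical, chosen to match a PPL of 3.54):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs (natural log): a PPL of 3.54
# corresponds to a mean NLL of ln(3.54) ~= 1.264.
nlls = [1.264] * 512
print(round(perplexity(nlls), 2))  # 3.54
```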

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with Nexesenex/Llama_3.x_70b_SmarTracks_V1.01 as the base.

  • Nexesenex/Llama_3.x_70b_SmarTracks_V1.01: the "smart base" of the model, a three-level model-stock merge of an abliterated Llama 3.3 finetune (the root), the DeepSeek R1 Distill based Fallen Llama, Nemotron, and Tulu.
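A rough intuition for Model Stock, sketched in plain Python on flat weight vectors (this is a toy illustration, not mergekit's implementation, which operates per layer on full tensors): the fine-tuned models are averaged, then the average is pulled back toward the base by a ratio derived from how much the finetunes agree with each other.

```python
import math

def cosine(u, v):
    """Cosine similarity between two flat vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def model_stock_layer(base, finetunes):
    """Toy model-stock merge for one layer's flat weight vector.

    Assumes at least two finetunes; a sketch of the interpolation
    ratio t = k*cos / (1 + (k-1)*cos) from the Model Stock paper.
    """
    k = len(finetunes)
    # Each finetune's delta from the base weights.
    deltas = [[w - b for w, b in zip(ft, base)] for ft in finetunes]
    # Average pairwise cosine between the finetune deltas.
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos = sum(cosine(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)
    t = k * cos / (1 + (k - 1) * cos)
    # Interpolate between the finetune average and the base.
    avg = [sum(col) / k for col in zip(*finetunes)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the finetunes all moved in the same direction (cosine near 1), t approaches 1 and the merge keeps their average; when they disagree (cosine near 0), t shrinks and the merge stays close to the base.

```python
# Agreeing finetunes: merge keeps their average.
print(model_stock_layer([0.0, 0.0], [[1.0, 0.0], [1.0, 0.0]]))  # [1.0, 0.0]
# Orthogonal finetunes: merge falls back to the base.
print(model_stock_layer([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))  # [0.0, 0.0]
```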

Models Merged

The following models were included in the merge:

  • SentientAGI/Dobby-Unhinged-Llama-3.3-70B: for its... unhinged "personality traits".
  • Black-Ink-Guild/Pernicious_Prophecy_70B: a balanced, healed merge-stock steering with Eva (creativity), Negative Llama (debiasing), L3.1 Oblivion (general intelligence), and Open-Bio (anatomy and medicine).
  • LatitudeGames/Wayfarer-Large-70B-Llama-3.3: to darken the model and the RP scenarios.
  • TheDrummer/Fallen-Llama-3.3-R1-70B-v1: to highlight and consolidate the R1 capabilities and spice up/darken the model.
  • Nexesenex/Llama_3.1_70b_TearDrops_V1.11: a legacy 3.1 merge, led by Tess R1, including the Hermes-based Gutenberg Doppel and an uncensored creative finetune.

Configuration

The following YAML configuration was used to produce this model:

merge_method: model_stock
models:
  - model: Nexesenex/Llama_3.1_70b_TearDrops_V1.11
    parameters:
      weight: 1.0
  - model: Black-Ink-Guild/Pernicious_Prophecy_70B
    parameters:
      weight: 1.0
  - model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
    parameters:
      weight: 1.0
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      weight: 1.0
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
    parameters:
      weight: 1.0
base_model: Nexesenex/Llama_3.x_70b_SmarTracks_V1.01
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
chat_template: auto
tokenizer:
  source: union
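Assuming a standard mergekit install, a configuration like the one above is typically run through the `mergekit-yaml` entry point (file and output paths here are placeholders):

```shell
pip install mergekit
mergekit-yaml hexagon_purple.yml ./Llama_3.x_70b_Hexagon_Purple_V1 \
  --lazy-unpickle   # stream weights instead of loading everything into RAM
```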

Model tree for Nexesenex/Llama_3.x_70b_Hexagon_Purple_V1