# pave

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with AGI-0/smartllama3.1-8B-001+Azazelle/Nimue-8B (the smartllama3.1-8B model with the Nimue-8B adapter applied, per mergekit's model+adapter syntax) as the base.
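For intuition, the sketch below shows the core Model Stock idea on a set of fine-tuned checkpoints: per-tensor deltas from the base are averaged, and the interpolation ratio back toward the base is derived from the cosine similarity between those deltas. This is a simplified illustration, not mergekit's implementation; the `model_stock_merge` helper and in-memory state dicts are assumptions made for the example.

```python
import torch


def model_stock_merge(base: dict, finetuned: list[dict]) -> dict:
    """Per-tensor Model Stock interpolation (illustrative sketch only)."""
    k = len(finetuned)
    merged = {}
    for name, w0 in base.items():
        w0f = w0.float()
        # deltas of each fine-tuned checkpoint relative to the base weights
        deltas = [ft[name].float() - w0f for ft in finetuned]
        # average pairwise cosine similarity between the deltas
        cos_vals = []
        for i in range(k):
            for j in range(i + 1, k):
                cos_vals.append(torch.nn.functional.cosine_similarity(
                    deltas[i].flatten(), deltas[j].flatten(), dim=0))
        cos_t = torch.stack(cos_vals).mean().clamp(min=0.0)
        # Model Stock ratio: higher agreement between deltas keeps more of the average
        t = (k * cos_t) / (1.0 + (k - 1) * cos_t)
        w_avg = w0f + torch.stack(deltas).mean(dim=0)
        merged[name] = (t * w_avg + (1.0 - t) * w0f).to(w0.dtype)
    return merged
```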

### Models Merged

The following models were included in the merge:

* Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
* flammenai/Mahou-1.3-llama3.1-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: AGI-0/smartllama3.1-8B-001+Azazelle/Nimue-8B
chat_template: llama3
dtype: float32
merge_method: model_stock
parameters:
  int8_mask: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
  - layer_range: [0, 32]
    model: flammenai/Mahou-1.3-llama3.1-8B
  - layer_range: [0, 32]
    model: AGI-0/smartllama3.1-8B-001+Azazelle/Nimue-8B
tokenizer:
  pad_to_multiple_of: 4
```
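Because the configuration sets `chat_template: llama3`, the merged model can be queried through the standard transformers chat-template flow. The snippet below is a minimal usage sketch; it assumes the repo id `kromcomp/L3.1-Pavev2-8B` and a single available GPU, and downcasts the float32 weights to bfloat16 to reduce memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kromcomp/L3.1-Pavev2-8B"  # assumed repo id; adjust if hosted elsewhere

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored as float32; downcast for inference
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```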
