---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- smaug
- lumimaid
- abliterated
- gradient
- instruct
- arimas
- breadcrumbs
---

# Model
Any feedback regarding the model and its behaviour is very welcome. In V1 I experimented with a Giraffe base, but I noticed it lost long-context ability much more quickly than the Gradient version, so in V1.5 I switched back to a Gradient base.
## Merge Details

### Merge Method
This model was merged using the breadcrumbs_ties merge method, with \Llama-3-70B-Instruct-Gradient-262k as the base.
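At a high level, breadcrumbs_ties sparsifies each model's difference from the base (the Model Breadcrumbs step, controlled by the `density` and `gamma` values in the configuration below), resolves sign disagreements TIES-style, and adds the weighted result back onto the base. The NumPy sketch below illustrates one plausible reading of the sparsification step only; the function name and the exact way `density` and `gamma` carve up the weights are illustrative assumptions, not mergekit's actual implementation.

```python
import numpy as np

def breadcrumbs_mask(delta: np.ndarray, density: float = 0.90, gamma: float = 0.01) -> np.ndarray:
    """Boolean mask over a task vector (fine-tuned weights minus base weights).

    Assumption: drop the top `gamma` fraction of largest-magnitude differences
    as outliers, keep the next `density` fraction, and zero out the small rest.
    """
    flat = np.abs(delta).ravel()
    n = flat.size
    order = np.argsort(flat)            # indices sorted by magnitude, ascending
    n_outliers = int(round(gamma * n))  # largest-magnitude entries to discard
    n_keep = int(round(density * n))    # entries to retain just below the outliers
    hi = n - n_outliers                 # exclusive upper bound of the kept band
    lo = max(hi - n_keep, 0)
    mask = np.zeros(n, dtype=bool)
    mask[order[lo:hi]] = True
    return mask.reshape(delta.shape)

# With density=0.90 and gamma=0.01, roughly 90% of each task vector survives
# before the TIES sign-election step combines the masked differences.
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64))
print(breadcrumbs_mask(delta).mean())   # ≈ 0.90
```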
### Models Merged
The following models were included in the merge:
- \Llama-3-Lumimaid-70B-v0.1-OAS
- \Smaug-Llama-3-70B-Instruct
- \Llama-3-70B-Instruct-abliterated-v3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: \Llama-3-70B-Instruct-Gradient-262k
    parameters:
      weight: 0.40
      density: 0.90
      gamma: 0.01
  - model: \Llama-3-70B-Instruct-abliterated-v3
    parameters:
      weight: 0.20
      density: 0.90
      gamma: 0.01
  - model: \Smaug-Llama-3-70B-Instruct
    parameters:
      weight: 0.40
      density: 0.90
      gamma: 0.01
  - model: \Llama-3-Lumimaid-70B-v0.1-OAS
    parameters:
      weight: 0.20
      density: 0.90
      gamma: 0.01
merge_method: breadcrumbs_ties
base_model: \Llama-3-70B-Instruct-Gradient-262k
dtype: bfloat16
```
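Since the card is tagged for transformers, the merged weights load like any other Llama-3 causal LM. A minimal sketch, assuming the merge output (or this repository) lives at a placeholder path:

```python
# Minimal loading/generation sketch; "path/to/merged-model" is a placeholder
# for the local mergekit output directory or a Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Write the opening scene of a long-form story set on a generation ship."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```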