---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- grimjim/llama-3-Nephilim-v3-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: llama3.1
---
# Llama-Nephilim-Metamorphosis-v2-8B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

A coherent Llama 3 model (itself composed of fine-tunes based on Instruct) was merged at low weight into a Llama 3.1 Instruct model; no fine-tuning was performed afterward. The resulting model is mostly coherent in direct chat and text generation and retains the long-context capability of 3.1.
A gradient merge was used so that the merge weight tapers to zero at the first and last layers, and the embed_tokens and lm_head layers were retained from 3.1, which should better preserve handling of context above 8K tokens.

Testing has been performed out to 16K context, using temperature 1 and minP 0.01. Safety remains mostly intact.
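
For reference, below is a minimal sketch of direct chat generation via transformers with the sampling settings above. The repo id is assumed from this model card's title, and the `min_p` generation argument requires a recent transformers release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from this model card's title.
model_id = "grimjim/Llama-Nephilim-Metamorphosis-v2-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a short scene set in a lighthouse."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings used during testing: temperature 1.0, min-p 0.01.
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    min_p=0.01,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```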

Built with Llama.

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
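
SLERP interpolates along the arc between the two models' weight tensors rather than along the straight line a plain weighted average follows, which better preserves weight magnitudes. Below is a minimal sketch of the underlying math for a single pair of tensors; it is not mergekit's exact implementation.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors,
    treated as flat vectors: t=0 returns a, t=1 returns b."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two tensors on the unit hypersphere.
    cos_omega = torch.dot(
        a_flat / (a_flat.norm() + eps),
        b_flat / (b_flat.norm() + eps),
    ).clamp(-1.0, 1.0)
    omega = torch.arccos(cos_omega)
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (
            torch.sin(t * omega) / sin_omega
        ) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

With t held at or below 0.1, as in the configuration below, each merged tensor stays close to the Llama 3.1 Instruct base.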

### Models Merged

The following models were included in the merge:
* [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
* [grimjim/llama-3-Nephilim-v3-8B](https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: slerp
slices:
- sources:
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [0, 32]
  - model: grimjim/llama-3-Nephilim-v3-8B
    layer_range: [0, 32]
    value: [0.0, 0.02, 0.04, 0.06, 0.08, 0.1, 0.1, 0.1,
      0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
      0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
      0.1, 0.1, 0.1, 0.08, 0.06, 0.04, 0.02, 0.0]
parameters:
  t:
    - filter: embed_tokens
      value: 0.0
    - filter: lm_head
      value: 0.0
    - value: 0.1
```
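
Assuming mergekit is installed (e.g. `pip install mergekit`), a configuration like this is typically applied with `mergekit-yaml config.yaml ./output-model-directory`.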