PotatoB committed
Commit 8abb8cb · verified · 1 Parent(s): 69c687a

Update README.md

Files changed (1)
  README.md +1 -29
README.md CHANGED
@@ -3,34 +3,6 @@ license: apache-2.0
  tags:
  - merge
  - mergekit
- - lazymergekit
- - mistralai/Mistral-7B-Instruct-v0.2
- - meta-math/MetaMath-Mistral-7B
  ---
 
- # evo_exp-point-1-1
-
- evo_exp-point-1-1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- * [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
-
- ## 🧩 Configuration
-
- ```yaml
- slices:
-   - sources:
-       - model: mistralai/Mistral-7B-Instruct-v0.2
-         layer_range: [0, 32]
-       - model: meta-math/MetaMath-Mistral-7B
-         layer_range: [0, 32]
- merge_method: slerp
- base_model: mistralai/Mistral-7B-Instruct-v0.2
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5
- dtype: bfloat16
- ```
+ This is an open model for iterative merging experiments.
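
For context, the configuration removed in this commit defined a SLERP merge of mistralai/Mistral-7B-Instruct-v0.2 and meta-math/MetaMath-Mistral-7B over layers 0–32, interpolating self-attention and MLP weights on different schedules and storing the result in bfloat16. Below is a minimal sketch of loading such a merged checkpoint with transformers; the repo id `PotatoB/evo_exp-point-1-1` is an assumption inferred from the committer and model name, not stated in the diff.

```python
# Minimal sketch: load the merged checkpoint and run a short generation.
# Assumption: the merged model is published as "PotatoB/evo_exp-point-1-1";
# adjust the repo id to the actual repository if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PotatoB/evo_exp-point-1-1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the removed merge config
)

# Mistral-Instruct-style prompt format, since one parent is an instruct model.
prompt = "[INST] What is 12 * 17? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the merge interpolates instruction-tuned and math-tuned weights layer by layer, prompting in the instruct format is a reasonable default, but the best template may need to be confirmed empirically.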