icefog72 committed on
Commit
36f9065
1 Parent(s): f16949a

Update README.md

Files changed (1)
  1. README.md +26 -48
README.md CHANGED
@@ -1,48 +1,26 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # IceSakeRPTrainingTestV11-7b
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * G:\FModels\NewFolder
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
- - model: G:\FModels\NewFolder
- layer_range: [0, 32]
- - model: G:\FModels\NewFolder
- layer_range: [0, 32]
-
- merge_method: slerp
- base_model: G:\FModels\NewFolder
- parameters:
- t:
- - filter: self_attn
- value: [0, 0.5, 0.3, 0.7, 1]
- - filter: mlp
- value: [1, 0.5, 0.7, 0.3, 0]
- - value: 0.5 # fallback for rest of tensors
-
- dtype: bfloat16
-
-
- ```
 
+ ---
+ license: cc-by-nc-4.0
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - alpaca
+ - mistral
+ - not-for-all-audiences
+ - nsfw
+ base_model:
+ - icefog72/IceSakeRP-7b
+
+ ---
+ # IceSakeRPTrainingTestV1-7b
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/PtPya8KK3BeD56yt53asU.png)
+
+
+ The model should handle a 25-32k context window.
+
+ [ST Discord thread for model feedback](https://discord.com/channels/1100685673633153084/1259898635194339389)
+
+ [You can find the rules-lorebook and settings I'm using here (in the 'By model' folder)](https://huggingface.co/icefog72/GeneralInfoToStoreNotModel/tree/main)
+
+ [ko-fi To buy sweets for my cat :3](https://ko-fi.com/icefog72)
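For context on the removed configuration: the SLERP merge method interpolates between two models' weights along the arc of a sphere rather than along a straight line, which preserves the magnitude of the weight directions better than plain linear averaging. A minimal sketch of spherical linear interpolation on a single pair of flattened weight tensors (the `slerp` function below is illustrative, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between flattened tensors a and b.

    t=0 returns a, t=1 returns b; intermediate t follows the arc
    between the two directions instead of the straight chord.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)           # angle between the two weight directions
    if theta < eps:                  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # midpoint on the arc between a and b
```

The per-filter `t` lists in the removed YAML (e.g. `[0, 0.5, 0.3, 0.7, 1]` for `self_attn`) give different interpolation weights at different depths of the network, so early and late layers can lean toward different parents.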