ABX-AI committed
Commit 3356903
1 Parent(s): 145b174

Create README.md

Files changed (1): README.md (+55, -0)

README.md ADDED

---
base_model:
- Nitral-AI/Infinitely-Laydiculous-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
---
# GGUF / IQ / Imatrix for [Infinite-Laymons-9B](https://huggingface.co/ABX-AI/Infinite-Laymons-9B)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/_Tgq278Uqjns2W0ug1kXE.png)

**Why Importance Matrix?**

The **importance matrix**, at least based on my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **imatrix** performs a calibration pass using a provided dataset. Testing has shown that semi-randomized data can help preserve the more important segments as the compression is applied.

Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some added extra kink.

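For reference, here is a minimal sketch of running one of the IQ quants with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename below is a placeholder, so substitute whichever .gguf file you download from this repo:

```python
from llama_cpp import Llama

# Placeholder filename: use the actual .gguf quant you downloaded from this repo.
llm = Llama(
    model_path="Infinite-Laymons-9B-IQ4_XS-imat.gguf",
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers when a GPU build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a short story."}],
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```
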
# Infinite-Laymons-9B

This model is intended for fictional role-play and storytelling.
The focus is on original responses and elimination of refusals.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

The first two models were merged with SLERP, and the final model was merged using the passthrough merge method.

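For intuition, SLERP interpolates between two weight tensors along the arc between them rather than along a straight line, which is often argued to preserve the models' weight geometry better than a plain linear average. A rough NumPy sketch of the idea (not mergekit's exact implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors, blended by t in [0, 1]."""
    a, b = v0.ravel().astype(np.float64), v1.ravel().astype(np.float64)
    # Angle between the two weight vectors, computed on normalized copies.
    dot = np.clip(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0)
    omega = np.arccos(dot)
    if np.sin(omega) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        out = (1.0 - t) * a + t * b
    else:
        out = (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    return out.reshape(v0.shape).astype(v0.dtype)

# Example: a 50/50 blend of two layer weight matrices of the same shape.
merged = slerp(0.5, np.random.randn(8, 8), np.random.randn(8, 8))
```
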
### Models Merged

The following models were included in the merge:
* [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
* ABX-AI/Infinite-Laymons-7B (referenced via the local path `./MODELS/Infinite-Laymons-7B` in the merge config)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Infinitely-Laydiculous-7B
        layer_range: [0, 20]
  - sources:
      - model: ./MODELS/Infinite-Laymons-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
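
As a sanity check on the name: the passthrough merge stacks roughly 20 layers from each parent into a single 40-layer model, versus 32 layers in a standard Mistral-style 7B. A back-of-the-envelope estimate, assuming both parents use standard Mistral-7B dimensions (an assumption, not something stated in the config), lands at about 9B parameters:

```python
# Rough parameter count for the 40-layer passthrough stack vs. a 32-layer parent.
# Assumes standard Mistral-7B dimensions and ignores the (tiny) norm weights.
hidden = 4096     # hidden size
inter  = 14336    # MLP intermediate size
kv_dim = 1024     # 8 KV heads * 128 head dim (grouped-query attention)
vocab  = 32000    # vocabulary size

attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # q/o projections + k/v projections
mlp  = 3 * hidden * inter                          # gate, up, and down projections
per_layer  = attn + mlp                            # ~218M parameters per transformer layer
embeddings = 2 * hidden * vocab                    # input embeddings + LM head

for n_layers in (32, 40):   # 7B parent vs. this 40-layer merge
    total = n_layers * per_layer + embeddings
    print(f"{n_layers} layers: ~{total / 1e9:.2f}B parameters")
# Prints roughly 7.24B for 32 layers and 8.99B for 40 layers.
```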