aashish1904 committed
Commit 39a0bc5
1 Parent(s): 42b85ee

Upload README.md with huggingface_hub

Files changed (1): README.md (+76, -0)
README.md ADDED
---
base_model:
- unsloth/gemma-2-9b-it
- princeton-nlp/gemma-2-9b-it-SimPO
- wzhouad/gemma-2-9b-it-WPO-HB
- nbeerbower/Gemma2-Gutenberg-Doppel-9B
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Gemma-2-Ataraxy-Doppel-9B-GGUF
This is a quantized version of [lemon07r/Gemma-2-Ataraxy-Doppel-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-Doppel-9B), created with llama.cpp.
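As a quick usage sketch (not part of the original card), a GGUF file from this repo can be loaded with `llama-cpp-python`. The quant filename below is a placeholder: substitute one of the `.gguf` files actually published in this repo, and adjust context size to your hardware.

```python
# Minimal sketch: download one GGUF quant and run a chat completion with llama-cpp-python.
# The filename is a placeholder -- replace it with a real file from this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Gemma-2-Ataraxy-Doppel-9B-GGUF",
    filename="Gemma-2-Ataraxy-Doppel-9B.Q4_K_M.gguf",  # placeholder quant name
)

llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the plot of Frankenstein in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```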

# Original Model Card

# Gemma-2-Ataraxy-Doppel-9B

One last test model... that you should ignore again.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della merge method, with [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) as the base.

### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)
* [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: della
parameters:
  epsilon: 0.1
  int8_mask: 1.0
  lambda: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
  - layer_range: [0, 42]
    model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.55
      weight: 0.6
  - layer_range: [0, 42]
    model: princeton-nlp/gemma-2-9b-it-SimPO
    parameters:
      density: 0.35
      weight: 0.6
  - layer_range: [0, 42]
    model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
    parameters:
      density: 0.25
      weight: 0.4
```
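
As a reproduction sketch (not part of the original card), the same merge could in principle be re-run through mergekit's `mergekit-yaml` command, pointing it at the configuration above. The snippet below simply wraps that call from Python; the config filename, output directory, and use of the `--cuda` flag are assumptions for illustration, and the run needs enough disk and memory for four 9B checkpoints.

```python
# Reproduction sketch: assumes the YAML config above has been saved as
# della-config.yml and that mergekit is installed (`pip install mergekit`).
import subprocess

subprocess.run(
    [
        "mergekit-yaml",      # mergekit's CLI entry point for YAML merge configs
        "della-config.yml",   # the configuration shown in this card (assumed filename)
        "merged-gemma2-9b",   # output directory for the merged model (assumed name)
        "--cuda",             # optional: run the tensor math on GPU if one is available
    ],
    check=True,
)
```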