Triangle104 committed on
Commit fac911c
1 Parent(s): 9b8bf25

Update README.md

Files changed (1): README.md (+31, -1)
README.md CHANGED
@@ -6,12 +6,42 @@ tags:
- merge
- llama-cpp
- gguf-my-repo
+ license: llama3.1
---

# Triangle104/Hermes-Dolphin-out-Q4_K_S-GGUF
This model was converted to GGUF format from [`harkov000/Hermes-Dolphin-out`](https://huggingface.co/harkov000/Hermes-Dolphin-out) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/harkov000/Hermes-Dolphin-out) for more details on the model.

+ ---
+ Model details:
+ -
+ This is a merge of pre-trained language models created using mergekit.
+ Merge Method
+ -
+ This model was merged using the SLERP merge method.
+
+ Models Merged
+ -
+ The following models were included in the merge:
+
+ NousResearch/Hermes-3-Llama-3.1-8B
+ cognitivecomputations/dolphin-2.9.4-llama3.1-8b
+
+ Configuration
+ -
+ The following YAML configuration was used to produce this model:
+
+ models:
+   - model: NousResearch/Hermes-3-Llama-3.1-8B
+   - model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
+ merge_method: slerp
+ base_model: NousResearch/Hermes-3-Llama-3.1-8B
+ parameters:
+   t: [0.0, 0.5, 1.0]
+ dtype: bfloat16
+
+ ---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

@@ -50,4 +80,4 @@ Step 3: Run inference through the main binary.
or
```
./llama-server --hf-repo Triangle104/Hermes-Dolphin-out-Q4_K_S-GGUF --hf-file hermes-dolphin-out-q4_k_s.gguf -c 2048
- ```
+ ```
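
The merge configuration added above can be fed straight to mergekit for anyone who wants to recreate the base model rather than just consume the GGUF. The following is a minimal sketch, not the author's exact invocation: it assumes the standard `mergekit-yaml` entry point from the mergekit package, and the file name `slerp-config.yaml` and output directory `./Hermes-Dolphin-out` are illustrative.

```bash
# Hedged sketch: reproduce the SLERP merge with mergekit.
# Assumes the standard `mergekit-yaml` CLI; file and output names are illustrative.
pip install mergekit

# Save the configuration from the model card.
cat > slerp-config.yaml <<'EOF'
models:
  - model: NousResearch/Hermes-3-Llama-3.1-8B
  - model: cognitivecomputations/dolphin-2.9.4-llama3.1-8b
merge_method: slerp
base_model: NousResearch/Hermes-3-Llama-3.1-8B
parameters:
  t: [0.0, 0.5, 1.0]
dtype: bfloat16
EOF

# Run the merge; the merged model is written to ./Hermes-Dolphin-out
mergekit-yaml slerp-config.yaml ./Hermes-Dolphin-out
```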
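
The "Use with llama.cpp" steps referenced in the diff amount to installing llama.cpp and pointing the CLI or server at this repo. A minimal sketch follows, assuming the Homebrew `llama.cpp` formula and reusing the `--hf-repo`/`--hf-file` flags from the `llama-server` line above; the prompt is a placeholder.

```bash
# Hedged sketch: install llama.cpp via Homebrew (macOS/Linux) and run the
# quantized model directly from the Hub. Flags mirror the llama-server
# command shown in the diff; the prompt is only a placeholder.
brew install llama.cpp

llama-cli --hf-repo Triangle104/Hermes-Dolphin-out-Q4_K_S-GGUF \
  --hf-file hermes-dolphin-out-q4_k_s.gguf \
  -p "The meaning to life and the universe is"
```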
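
Once `llama-server` is running with the command shown at the end of the diff, it can be queried over HTTP. This is a hedged example that assumes the server's default bind address of 127.0.0.1:8080 and the OpenAI-compatible chat completions endpoint exposed by recent llama.cpp server builds.

```bash
# Hedged sketch: query a running llama-server instance.
# Assumes the default 127.0.0.1:8080 bind address and the OpenAI-compatible
# /v1/chat/completions endpoint provided by recent llama.cpp builds.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Give me a one-line summary of SLERP merging."}
        ],
        "max_tokens": 128
      }'
```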