---
base_model:
- beomi/Llama-3-KoEn-8B-Instruct-preview
- saltlux/Ko-Llama3-Luxia-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- NousResearch/Meta-Llama-3-8B
- nvidia/Llama3-ChatQA-1.5-8B
- aaditya/Llama3-OpenBioLLM-8B
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- beomi/Llama-3-KoEn-8B-preview
- abacusai/Llama-3-Smaug-8B
- NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- llama

---
# YACHT-Llama-3-KoEn-8B-GGUF

<a href="https://ibb.co/SyT5vj5"><img src="https://i.ibb.co/DWXzKJz/Screenshot-2024-05-07-at-3-04-45-AM.png" alt="Screenshot-2024-05-07-at-3-04-45-AM" border="0"></a>

🎡   *[JayLee LLMs Signature Tag] : ✍️ "I need a Jay Jay chat boy"* 🎡  

✨ *Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs* ✨

✨ *Aren't you sometimes tired of building just another LLM / RAG / chat app? I'll soon show you a cool app that integrates this merged model of mine (a tuned car). It wouldn't be fun if we only built cars; life is ultimately about driving them and socializing with people.* ✨

🧨 *Take great care when using merged models for commercial purposes. Mixing many models can work well, but it can also pose many risks.* 🧨

Your donation makes me feel more free in life. In return, I will provide you with fun and useful software!!!

I haven't released even 0.001% of my software to you yet!!!

So I'm asking.....

Do you love me? 'Cause I love all of you who visit here

"Donation(ETH/USDT) : 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0"

Can you lend me your computing power to merge this with mine? My machine says it's too sick to handle it -> DM me!! (code ready)

```
xtuner/llava-llama-3-8b-transformers + asiansoul/YACHT-Llama-3-KoEn-8B + gradientai/Llama-3-8B-Instruct-262k

Diff calculated for model.layers.13.self_attn.q_proj.weight
Diff calculated for model.layers.13.self_attn.k_proj.weight
Diff calculated for model.layers.13.self_attn.v_proj.weight
Diff calculated for model.layers.13.self_attn.o_proj.weight
Diff calculated for model.layers.13.mlp.gate_proj.weight
Diff calculated for model.layers.13.mlp.up_proj.weight
Diff calculated for model.layers.13.mlp.down_proj.weight
Diff calculated for model.layers.13.input_layernorm.weight
Diff calculated for model.layers.13.post_attention_layernorm.weight
Diff calculated for model.layers.14.self_attn.q_proj.weight
Diff calculated for model.layers.14.self_attn.k_proj.weight
Diff calculated for model.layers.14.self_attn.v_proj.weight
Diff calculated for model.layers.14.self_attn.o_proj.weight
Diff calculated for model.layers.14.mlp.gate_proj.weight
Diff calculated for model.layers.14.mlp.up_proj.weight
Diff calculated for model.layers.14.mlp.down_proj.weight
Diff calculated for model.layers.14.input_layernorm.weight
Diff calculated for model.layers.14.post_attention_layernorm.weight

(.venv) jaylee@lees-MacBook-Pro-2 merge % /opt/homebrew/Cellar/[email protected]/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

```

### ModelFile

```
jaylee@lees-MacBook-Pro-2 youtube % ./ollama create yahht_v1 -f ./gguf/hf_hub/Modelfile_Q5_K_M 
transferring model data 
creating model layer 
creating template layer 
creating system layer 
creating parameters layer 
creating config layer 
using already created layer sha256:2def4bed3c8fe78c1698dd1231f7171d0d2bf486f32f176363355407ade95662 
using already created layer sha256:a6f440dc634252d55a441bbe8710e373456b37a594eb309b765e4ddb03b0872c 
using already created layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac 
writing layer sha256:65439eabce8efb44a58abcb17283801575b1dab665fc6827fb671ce2ab9dc68f 
writing layer sha256:8f48c0791f5b8c657d90f572145ab01ece1309cd1538e25209d2190a9bab5c83 
writing manifest 
success 
```

```
FROM yacht-llama-3-koen-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""
SYSTEM """
μΉœμ ˆν•œ μ±—λ΄‡μœΌλ‘œμ„œ μƒλŒ€λ°©μ˜ μš”μ²­μ— μ΅œλŒ€ν•œ μžμ„Έν•˜κ³  μΉœμ ˆν•˜κ²Œ λ‹΅ν•˜μž. λͺ¨λ“  λŒ€λ‹΅μ€ ν•œκ΅­μ–΄(Korean)으둜 λŒ€λ‹΅ν•΄μ€˜.
"""
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
```
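Once the model is created, the Modelfile above can also be exercised programmatically. Below is a minimal sketch against Ollama's local REST API (`/api/generate` on the default port 11434), using only the Python standard library; the model name `yahht_v1` matches the `ollama create` step shown earlier, and the helper names are mine, not part of any official client:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(prompt: str, model: str = "yahht_v1") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def generate(prompt: str) -> str:
    """Send the prompt and return the model's text response."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

This assumes an Ollama server is already running locally (`ollama serve`); `generate("μ•ˆλ…•ν•˜μ„Έμš”")` should then return a Korean answer, per the SYSTEM prompt above.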

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) (`dare_ties`) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
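As a quick intuition for what DARE does: for each fine-tuned model it drops a random fraction of the delta (task-vector) parameters relative to the base, and rescales the survivors so the expected update is preserved; the `density` value in the configuration below is the keep probability. A minimal NumPy sketch of that drop-and-rescale step on a single weight tensor (illustrative only, not mergekit's actual implementation):

```python
import numpy as np


def dare_delta(base, tuned, density=0.55, seed=0):
    """Sketch of DARE's drop-and-rescale step for one weight tensor.

    Each delta entry (tuned - base) is kept with probability `density`
    and dropped otherwise; kept entries are rescaled by 1/density so the
    expected delta matches the original fine-tuning update.
    """
    rng = np.random.default_rng(seed)
    delta = tuned - base
    mask = rng.random(delta.shape) < density  # keep with prob = density
    return base + np.where(mask, delta / density, 0.0)
</antml>```

With `density: 0.55` as in the YAML below, roughly 45% of each task vector is zeroed out before the TIES-style sign election and merging.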

### Models Merged

The following models were included in the merge:
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
* [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60  
      weight: 0.25  
  
  - model: beomi/Llama-3-KoEn-8B-preview
    parameters:
      density: 0.55  
      weight: 0.2
  
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55  
      weight: 0.15
  
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55  
      weight: 0.15 
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55  
      weight: 0.1  
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55  
      weight: 0.05
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55  
      weight: 0.1 
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
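Note that the per-model `weight` values above sum to 1.10 rather than 1.0; to my understanding, mergekit normalizes task-vector weights by default for `dare_ties`, so the effective relative contributions can be sketched as:

```python
# Per-model weights copied from the YAML above (the base model carries no weight).
weights = {
    "Meta-Llama-3-8B-Instruct": 0.25,
    "Llama-3-KoEn-8B-preview": 0.20,
    "Ko-Llama3-Luxia-8B": 0.15,
    "Llama-3-KoEn-8B-Instruct-preview": 0.15,
    "Llama3-ChatQA-1.5-8B": 0.10,
    "dolphin-2.9-llama3-8b": 0.05,
    "Barcenas-Llama3-8b-ORPO": 0.05,
    "Llama-3-Smaug-8B": 0.05,
    "Llama3-OpenBioLLM-8B": 0.10,
}

total = sum(weights.values())  # 1.10, not 1.0
normalized = {name: w / total for name, w in weights.items()}
```

Under that assumption, for example, Meta-Llama-3-8B-Instruct contributes about 0.25 / 1.10 ≈ 22.7% of the merged task vector rather than a flat 25%.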

### Test
<a href="https://ibb.co/tHkMB64"><img src="https://i.ibb.co/Zft6drV/Screenshot-2024-05-07-at-2-45-38-AM.png" alt="Screenshot-2024-05-07-at-2-45-38-AM" border="0"></a>
<a href="https://ibb.co/JtsN25B"><img src="https://i.ibb.co/V9qdD2j/Screenshot-2024-05-07-at-2-45-55-AM.png" alt="Screenshot-2024-05-07-at-2-45-55-AM" border="0"></a>
<a href="https://ibb.co/X8JhVMk"><img src="https://i.ibb.co/3zhDkQY/Screenshot-2024-05-07-at-2-46-11-AM.png" alt="Screenshot-2024-05-07-at-2-46-11-AM" border="0"></a>
<a href="https://ibb.co/XFZfvLm"><img src="https://i.ibb.co/qmMLPjb/Screenshot-2024-05-07-at-2-46-18-AM.png" alt="Screenshot-2024-05-07-at-2-46-18-AM" border="0"></a>
<a href="https://ibb.co/7Khf8B3"><img src="https://i.ibb.co/YhgCYJ5/Screenshot-2024-05-07-at-2-51-23-AM.png" alt="Screenshot-2024-05-07-at-2-51-23-AM" border="0"></a>
<a href="https://ibb.co/9tsDj40"><img src="https://i.ibb.co/sHyfr14/Screenshot-2024-05-07-at-2-46-48-AM.png" alt="Screenshot-2024-05-07-at-2-46-48-AM" border="0"></a>
<a href="https://ibb.co/z8FmLRz"><img src="https://i.ibb.co/HNCKMzj/Screenshot-2024-05-07-at-2-49-45-AM.png" alt="Screenshot-2024-05-07-at-2-49-45-AM" border="0"></a>