---
base_model:
  - beomi/Llama-3-KoEn-8B-Instruct-preview
  - saltlux/Ko-Llama3-Luxia-8B
  - cognitivecomputations/dolphin-2.9-llama3-8b
  - NousResearch/Meta-Llama-3-8B
  - nvidia/Llama3-ChatQA-1.5-8B
  - aaditya/Llama3-OpenBioLLM-8B
  - Danielbrdz/Barcenas-Llama3-8b-ORPO
  - beomi/Llama-3-KoEn-8B-preview
  - abacusai/Llama-3-Smaug-8B
  - NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
  - mergekit
  - merge
  - llama
---

# YACHT-Llama-3-KoEn-8B-GGUF


🎡 [JayLee LLMs Signature Tag] : ✍️ "I need a Jay Jay chat boy" 🎡

✨ Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs ✨

✨ Aren’t you sometimes tired of building just another LLM / RAG / chat app? I'll soon show you a cool app that integrates this merged model (my tuned car). It wouldn't be fun if we only built cars; life is ultimately about driving them and socializing with people. ✨

🧨 Take great care when using this merged model for commercial purposes. Mixing many models can be powerful, but it can also carry many risks. 🧨

Your donation gives me more freedom in life. In return, I will provide you with fun and useful software!!!

I haven't even released 0.001% of the software to you yet!!!

So I'm asking.....

Do you love me? 'Cause I love all of you who visit here.

"Donation(ETH/USDT) : 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0"

Can you lend me your compute to merge the three models below with mine? My machine says it is sick -> DM me!! (code is ready)

xtuner/llava-llama-3-8b-transformers + asiansoul/YACHT-Llama-3-KoEn-8B + gradientai/Llama-3-8B-Instruct-262k
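
For reference, a minimal sketch of what a mergekit config for that three-way merge could look like is shown below. It assumes dare_ties again, with placeholder densities/weights and YACHT as the base; it is not the actual recipe in the prepared code, and only the language-model weights of the llava checkpoint would be usable this way:

```yaml
# Hypothetical config, placeholder values only
models:
  - model: asiansoul/YACHT-Llama-3-KoEn-8B
  - model: gradientai/Llama-3-8B-Instruct-262k
    parameters:
      density: 0.55
      weight: 0.3
  - model: xtuner/llava-llama-3-8b-transformers
    parameters:
      density: 0.55
      weight: 0.2
merge_method: dare_ties
base_model: asiansoul/YACHT-Llama-3-KoEn-8B
parameters:
  int8_mask: true
dtype: bfloat16
```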

```
Diff calculated for model.layers.13.self_attn.q_proj.weight
Diff calculated for model.layers.13.self_attn.k_proj.weight
Diff calculated for model.layers.13.self_attn.v_proj.weight
Diff calculated for model.layers.13.self_attn.o_proj.weight
Diff calculated for model.layers.13.mlp.gate_proj.weight
Diff calculated for model.layers.13.mlp.up_proj.weight
Diff calculated for model.layers.13.mlp.down_proj.weight
Diff calculated for model.layers.13.input_layernorm.weight
Diff calculated for model.layers.13.post_attention_layernorm.weight
Diff calculated for model.layers.14.self_attn.q_proj.weight
Diff calculated for model.layers.14.self_attn.k_proj.weight
Diff calculated for model.layers.14.self_attn.v_proj.weight
Diff calculated for model.layers.14.self_attn.o_proj.weight
Diff calculated for model.layers.14.mlp.gate_proj.weight
Diff calculated for model.layers.14.mlp.up_proj.weight
Diff calculated for model.layers.14.mlp.down_proj.weight
Diff calculated for model.layers.14.input_layernorm.weight
Diff calculated for model.layers.14.post_attention_layernorm.weight
```
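
A minimal sketch of how such per-tensor diffs (task vectors) can be computed with plain PyTorch is shown below. This is not the author's actual script, and the two repository names are only examples taken from this card:

```python
# Sketch: compute per-tensor diffs between a fine-tuned model and its base.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)
tuned = AutoModelForCausalLM.from_pretrained(
    "asiansoul/YACHT-Llama-3-KoEn-8B", torch_dtype=torch.bfloat16
)

base_sd = base.state_dict()
diffs = {}
for name, tuned_w in tuned.state_dict().items():
    if name in base_sd and tuned_w.shape == base_sd[name].shape:
        diffs[name] = tuned_w - base_sd[name]  # task vector for this tensor
        print(f"Diff calculated for {name}")
```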

```
(.venv) jaylee@lees-MacBook-Pro-2 merge % /opt/homebrew/Cellar/[email protected]/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

## Modelfile

```
jaylee@lees-MacBook-Pro-2 youtube % ./ollama create yahht_v1 -f ./gguf/hf_hub/Modelfile_Q5_K_M
transferring model data
creating model layer
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:2def4bed3c8fe78c1698dd1231f7171d0d2bf486f32f176363355407ade95662
using already created layer sha256:a6f440dc634252d55a441bbe8710e373456b37a594eb309b765e4ddb03b0872c
using already created layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac
writing layer sha256:65439eabce8efb44a58abcb17283801575b1dab665fc6827fb671ce2ab9dc68f
writing layer sha256:8f48c0791f5b8c657d90f572145ab01ece1309cd1538e25209d2190a9bab5c83
writing manifest
success
```
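
Once creation succeeds, the model can be tried straight from the CLI (the model name comes from the log above; the prompt is just an example):

```shell
ollama run yahht_v1 "Plan a 3-day trip to Jeju Island."
```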
```
FROM yacht-llama-3-koen-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""
SYSTEM """
μΉœμ ˆν•œ μ±—λ΄‡μœΌλ‘œμ„œ μƒλŒ€λ°©μ˜ μš”μ²­μ— μ΅œλŒ€ν•œ μžμ„Έν•˜κ³  μΉœμ ˆν•˜κ²Œ λ‹΅ν•˜μž. λͺ¨λ“  λŒ€λ‹΅μ€ ν•œκ΅­μ–΄(Korean)으둜 λŒ€λ‹΅ν•΄μ€˜.
"""
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
```
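
The SYSTEM prompt above tells the model, in Korean, to act as a friendly chatbot, answer requests as thoroughly and kindly as possible, and always reply in Korean. The created model can also be queried through Ollama's local REST API; the endpoint and fields below are standard Ollama, only the prompt is an example:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "yahht_v1",
  "prompt": "Introduce yourself briefly.",
  "stream": false
}'
```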

## Merge Method

This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.
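
For intuition, here is a rough single-tensor sketch of the DARE step (randomly drop delta parameters and rescale the survivors). It is illustrative only, not mergekit's implementation, and the TIES sign-election step is omitted:

```python
import torch

def dare_delta(base_w: torch.Tensor, tuned_w: torch.Tensor, density: float = 0.55) -> torch.Tensor:
    """Drop-And-REscale: keep roughly `density` of the task vector, rescale the rest."""
    delta = tuned_w - base_w                   # task vector of the fine-tuned model
    keep = torch.rand_like(delta) < density    # random keep mask
    return delta * keep / density              # rescale so the expected value is unchanged

# Toy example: random tensors standing in for one weight matrix.
base = torch.randn(8, 8)
tuned_a = base + 0.01 * torch.randn(8, 8)
tuned_b = base + 0.01 * torch.randn(8, 8)

# Weighted sum of sparsified deltas added back onto the base
# (TIES sign election across models is left out for brevity).
merged = base + 0.25 * dare_delta(base, tuned_a) + 0.2 * dare_delta(base, tuned_b)
```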

## Models Merged

The following models were included in the merge:

* NousResearch/Meta-Llama-3-8B-Instruct
* beomi/Llama-3-KoEn-8B-preview
* beomi/Llama-3-KoEn-8B-Instruct-preview
* saltlux/Ko-Llama3-Luxia-8B
* nvidia/Llama3-ChatQA-1.5-8B
* cognitivecomputations/dolphin-2.9-llama3-8b
* Danielbrdz/Barcenas-Llama3-8b-ORPO
* abacusai/Llama-3-Smaug-8B
* aaditya/Llama3-OpenBioLLM-8B

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60
      weight: 0.25
  - model: beomi/Llama-3-KoEn-8B-preview
    parameters:
      density: 0.55
      weight: 0.2
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55
      weight: 0.15
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55
      weight: 0.15
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55
      weight: 0.1
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.05
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.1
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
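
To reproduce the merge, the config above can be saved to a file (e.g. config.yaml, an arbitrary name) and passed to mergekit's command-line entry point:

```shell
pip install mergekit
mergekit-yaml config.yaml ./YACHT-Llama-3-KoEn-8B
```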

## Test

Screenshots of test conversations (2024-05-07).