---
base_model:
  - maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
  - beomi/Llama-3-KoEn-8B-Instruct-preview
  - asiansoul/Llama-3-Open-Ko-Linear-8B
  - NousResearch/Meta-Llama-3-8B
  - NousResearch/Meta-Llama-3-8B-Instruct
  - ajibawa-2023/Code-Llama-3-8B
  - defog/llama-3-sqlcoder-8b
  - NousResearch/Hermes-2-Pro-Llama-3-8B
  - Locutusque/llama-3-neural-chat-v2.2-8B
  - asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1
library_name: transformers
tags:
  - mergekit
  - merge
---

# Joah-Llama-3-KoEn-8B-Coder-v2

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
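For a quick smoke test of the merged checkpoint, here is a minimal inference sketch with 🤗 Transformers; the repo id `asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2` is assumed from this card's title, and the prompt is only an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from this card's title.
repo_id = "asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Write a SQL query that returns the five most recent orders."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```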

### Models Merged

The following models were included in the merge:

* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
* [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1](https://huggingface.co/asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1)
* [ajibawa-2023/Code-Llama-3-8B](https://huggingface.co/ajibawa-2023/Code-Llama-3-8B)
* [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)

### Configuration

The following YAML configuration was used to produce this model. For each donor model, `density` is the fraction of its delta parameters kept by DARE's random pruning, and `weight` scales that model's contribution to the merged task vector:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters

  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60  
      weight: 0.25  
  
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55  
      weight: 0.15  
  
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55  
      weight: 0.1  

  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.55  
      weight: 0.1 

  - model: asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1
    parameters:
      density: 0.55  
      weight: 0.2
      
  - model: ajibawa-2023/Code-Llama-3-8B
    parameters:
      density: 0.55  
      weight: 0.05  

  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.55  
      weight: 0.1  

  - model: Locutusque/llama-3-neural-chat-v2.2-8B
    parameters:
      density: 0.55  
      weight: 0.1 

  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55  
      weight: 0.05 

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
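
To reproduce the merge, the configuration above can be run through mergekit. A minimal sketch using mergekit's Python API, assuming the YAML is saved as `config.yml`; paths and options are placeholders:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yml"  # the YAML configuration shown above
OUTPUT_PATH = "./merged"     # where the merged model will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Equivalently, the `mergekit-yaml` command-line tool that ships with mergekit accepts the same configuration file.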