---
language:
  - en
library_name: transformers
tags:
  - mergekit
  - merge
  - llama-3.1
  - roleplay
  - function calling
base_model:
  - meta-llama/Llama-3.1-8B-Instruct
  - akjindal53244/Llama-3.1-Storm-8B
  - arcee-ai/Llama-3.1-SuperNova-Lite
  - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
model-index:
  - name: Llama-3.1-8B-Instruct-Zeus
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 79.41
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 31.39
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 19.18
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 6.82
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 8.57
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 32.14
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus
          name: Open LLM Leaderboard
license: apache-2.0
pipeline_tag: text-generation
co2_eq_emissions:
  emissions: 0.69
  source: Open LLM Leaderboard
  training_type: fine-tuning
new_version: T145/ZEUS-8B-V2
---

# ZEUS

Taking inspiration from Dampfinchen/Llama-3.1-8B-Ultra-Instruct and brucethemoose, the goal of this merge is an abliterated, conversational AI within 8B parameters that stays coherent over long conversations. Measured against the "Ultra-Instruct" baseline (which struggles with grammar and conversational coherence), preliminary results suggest these goals are met. Expect responses in Markdown format by default.
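Since the card ships without a usage snippet, here is a minimal inference sketch using the `transformers` chat pipeline; the prompt, sampling settings, and generation length are illustrative assumptions, not tuned defaults:

```python
# Minimal chat-style inference sketch; sampling values are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="T145/Llama-3.1-8B-Instruct-Zeus",
    torch_dtype=torch.bfloat16,  # the merge itself was produced in bfloat16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of the Iliad in three bullet points."},
]

# Passing a message list applies the Llama 3.1 chat template automatically.
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][-1]["content"])  # responses default to Markdown
```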

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with meta-llama/Llama-3.1-8B-Instruct as the base.

### Models Merged

The following models were included in the merge:

- akjindal53244/Llama-3.1-Storm-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: akjindal53244/Llama-3.1-Storm-8B
    parameters:
      density: 0.7
      weight: 0.2
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.7
      weight: 0.3
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.7
      weight: 0.5
  - layer_range: [0, 32]
    model: meta-llama/Llama-3.1-8B-Instruct
tokenizer_source: meta-llama/Llama-3.1-8B-Instruct
```
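To reproduce the merge, the config above can be fed to mergekit. A sketch using mergekit's Python API, assuming the YAML is saved as `zeus.yaml` (file name, output path, and option values are assumptions):

```python
# Sketch: reproducing the merge with mergekit's Python API.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("zeus.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Llama-3.1-8B-Instruct-Zeus",      # output directory (hypothetical)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # honors tokenizer_source from the config
        lazy_unpickle=True,              # lower peak RAM while loading shards
    ),
)
```

The `mergekit-yaml` CLI that ships with mergekit accepts the same config file and is equivalent for this purpose.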

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=T145/Llama-3.1-8B-Instruct-Zeus)!

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.59 |
| IFEval (0-Shot)     | 79.41 |
| BBH (3-Shot)        | 31.39 |
| MATH Lvl 5 (4-Shot) | 19.18 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       |  8.57 |
| MMLU-PRO (5-shot)   | 32.14 |
- Falls about 1 point behind "Ultra-Instruct" on IFEval and BBH, but everything else is a significant improvement.
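As a quick sanity check, the reported average is the unweighted mean of the six benchmark scores; verified with exact decimal arithmetic:

```python
# Recomputing the leaderboard average from the table above.
from decimal import Decimal

scores = {
    "IFEval (0-Shot)": Decimal("79.41"),
    "BBH (3-Shot)": Decimal("31.39"),
    "MATH Lvl 5 (4-Shot)": Decimal("19.18"),
    "GPQA (0-shot)": Decimal("6.82"),
    "MuSR (0-shot)": Decimal("8.57"),
    "MMLU-PRO (5-shot)": Decimal("32.14"),
}
print(sum(scores.values()) / len(scores))  # 29.585, reported rounded as 29.59
```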