---
license: apache-2.0
tags:
  - merge
---

# mistral-7b-merged-slerp

mistral-7b-merged-slerp is a merge of the following models:

* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
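Here `t` is the slerp interpolation weight between the two models (roughly, 0 keeps the base model's weights and 1 the other model's); the `self_attn` and `mlp` filters apply opposite gradients across the layer stack, and the final `value: 0.5` is the fallback for all remaining tensors. As a minimal sketch, the merge can be reproduced with the mergekit CLI, assuming the YAML above is saved as `config.yaml` (the output directory name is a placeholder):

```python
!pip install -qU mergekit

# run the merge described by config.yaml; "merged-model" is a placeholder output path
!mergekit-yaml config.yaml merged-model --copy-tokenizer
```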

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mychen76/mistral-7b-merged-slerp"

tokenizer = AutoTokenizer.from_pretrained(model)
# build a text-generation pipeline, loading the weights 4-bit quantized via bitsandbytes
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# format the conversation with the model's chat template before generating
messages = [{"role": "user", "content": "Why is the sky blue?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
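If you would rather skip 4-bit quantization (for example, on a machine without `bitsandbytes` support), a plain `transformers` load works too; this is a minimal sketch assuming enough memory for the full bfloat16 weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mychen76/mistral-7b-merged-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load full-precision (bfloat16) weights, spreading layers across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```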

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 71.09 |
| AI2 Reasoning Challenge (25-Shot) | 67.75 |
| HellaSwag (10-Shot)               | 86.17 |
| MMLU (5-Shot)                     | 64.05 |
| TruthfulQA (0-shot)               | 59.85 |
| Winogrande (5-shot)               | 80.19 |
| GSM8k (5-shot)                    | 68.54 |
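These scores come from the Hugging Face Open LLM Leaderboard, which runs a pinned version of EleutherAI's lm-evaluation-harness. A rough local approximation for a single benchmark (ARC at 25-shot) might look like the sketch below; the harness version and exact settings used by the leaderboard can differ, so treat local numbers as indicative only.

```python
!pip install -qU lm-eval

# evaluate one benchmark (ARC-Challenge, 25-shot) with lm-evaluation-harness
!lm_eval --model hf --model_args pretrained=mychen76/mistral-7b-merged-slerp,dtype=bfloat16 --tasks arc_challenge --num_fewshot 25 --batch_size auto
```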