---
license: other
tags:
- merge
- mergekit
- lazymergekit
- llama
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- Locutusque/Llama-3-Orca-1.0-8B
- abacusai/Llama-3-Smaug-8B
---
# ChimeraLlama-3-8B
ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.
ChimeraLlama-3-8B is a merge of the following models using LazyMergekit (see the reproduction sketch below the list):
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- Locutusque/Llama-3-Orca-1.0-8B
- abacusai/Llama-3-Smaug-8B
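
To reproduce the merge, here is a minimal notebook-style sketch using the mergekit CLI (the tool LazyMergekit wraps). It assumes the YAML from the 🧩 Configuration section below is saved as `config.yaml`; the flags shown are standard mergekit options and are an assumption, not taken from this card.

```python
# Hypothetical reproduction sketch, not from the original card.
# Save the configuration from the section below as config.yaml, then:
!pip install -qU mergekit
!mergekit-yaml config.yaml merge --copy-tokenizer --lazy-unpickle
```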
## 🏆 Evaluation

### Nous
Evaluation performed using LLM AutoEval; see the entire leaderboard here.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| mlabonne/ChimeraLlama-3-8B | 51.58 | 39.12 | 71.81 | 52.4 | 42.98 |
| meta-llama/Meta-Llama-3-8B-Instruct | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| mlabonne/OrpoLlama-3-8B | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| meta-llama/Meta-Llama-3-8B | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## 🧩 Configuration
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.58
      weight: 0.4
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.52
      weight: 0.2
  - model: Locutusque/Llama-3-Orca-1.0-8B
    parameters:
      density: 0.52
      weight: 0.2
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.52
      weight: 0.2
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: float16
```
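
The `dare_ties` method prunes each fine-tuned model's task vector (its delta from the base model) at the given `density`, rescales the survivors, and resolves sign conflicts TIES-style before taking the weighted sum. As a rough illustration of what `density: 0.52` means, here is a hedged Python sketch of the DARE drop-and-rescale step; it is not mergekit's actual implementation.

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Illustrative DARE step: randomly keep a `density` fraction of the
    task vector's entries and rescale survivors by 1/density, preserving
    the delta's expected value. Not mergekit's actual code."""
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

# With density=0.52, roughly 48% of each model's delta is zeroed out
# before the weighted sum (weights 0.4 / 0.2 / 0.2 / 0.2) is applied.
delta = torch.randn(4, 4)
pruned = dare_drop_and_rescale(delta, density=0.52)
```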
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/ChimeraLlama-3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
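
As an alternative to the pipeline, here is a hedged sketch that loads the model directly with `AutoModelForCausalLM` and generates with the same sampling parameters; the variable names are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mlabonne/ChimeraLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True,
    temperature=0.7, top_k=50, top_p=0.95,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```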