---
license: apache-2.0
tags:
- merge
---

# mistral-7b-merged-ties

mistral-7b-merged-ties is a TIES merge of the following models, using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model:

* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # base model: no merge parameters needed
  - model: OpenPipe/mistral-ft-optimized-1218
    parameters:
      density: 0.5  # fraction of this model's delta parameters to keep
      weight: 0.3   # relative contribution to the merged weights
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: bfloat16
```
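
This configuration follows the [mergekit](https://github.com/arcee-ai/mergekit) format. As a rough sketch (assuming the config above is saved as `config.yaml` and a recent mergekit release that exposes the `mergekit-yaml` entry point), the merge can be reproduced with:

```python
# Sketch: reproduce the TIES merge locally with mergekit.
# `config.yaml` and the output directory name are placeholders.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./mistral-7b-merged-ties --copy-tokenizer
```
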
## 💻 Usage
```python
# Install dependencies (bitsandbytes and accelerate are required for 4-bit loading)
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mychen76/mistral-7b-merged-ties"

# Load the tokenizer and build a text-generation pipeline with the model quantized to 4-bit
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then sample a completion
messages = [{"role": "user", "content": "Why is the sky blue?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mychen76__mistral-7b-merged-ties).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |71.37|
|AI2 Reasoning Challenge (25-Shot)|67.92|
|HellaSwag (10-Shot)              |85.93|
|MMLU (5-Shot)                    |64.07|
|TruthfulQA (0-shot)              |61.31|
|Winogrande (5-shot)              |80.03|
|GSM8k (5-shot)                   |68.54|
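
For a rough local check of individual scores (not the leaderboard's exact pinned harness version or settings), EleutherAI's lm-evaluation-harness can be pointed at the model; the task name, shot count, and flags below are illustrative assumptions:

```python
# Illustrative only: the leaderboard pins its own harness version and evaluation settings.
!pip install -qU lm-eval
!lm_eval --model hf \
    --model_args pretrained=mychen76/mistral-7b-merged-ties,dtype=bfloat16 \
    --tasks hellaswag \
    --num_fewshot 10 \
    --batch_size auto
```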