---
base_model:
- mistralai/Mistral-Large-Instruct-2407
- NeverSleep/Lumimaid-v0.2-123B
- anthracite-org/magnum-v2-123b
library_name: transformers
tags:
- mergekit
- merge
---
<div style="width: auto; margin-left: auto; margin-right: auto; margin-bottom: 3cm">
<img src="https://huggingface.co/FluffyKaeloky/Luminum-v0.1-123B/resolve/main/LuminumCover.png" alt="Luminum" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# LuminumMistral-123B
## Overview
I present Luminum-123B.
This is a merge using Mistral Large as a base, and including Lumimaid-v0.2-123B and Magnum-v2-123B.
I felt like Magnum rambled too much and Lumimaid lost slightly too much brainpower, so I used the Mistral Large base for a long while, but it was lacking some moisture.
On a whim, I decided to merge both Lumimaid and Magnum on top of Mistral Large, and while I wasn't expecting much, I've been very pleasantly surprised by the results. I've found that this model keeps the brainpower of the Mistral base while inheriting the lexicon of Lumimaid and the creative descriptions of Magnum, without rambling too much.
I've tested this model quite extensively at and above 32k context with great success. In theory it should allow for the full 128k context, though I've only gone up to 40-50k.
It's become my new daily driver.
The only negative thing I could find is that it tends to generate long responses if you let it. It probably gets that from Magnum. Just don't let its answers keep growing longer and longer.
I recommend these settings:
- Min-p: 0.08
- Rep penalty: 1.03
- Rep penalty range: 4096
- Smoothing factor: 0.23
- No Repeat NGram Size: 2 *
*I haven't had the chance to mess with DRY yet.
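
For reference, here's a minimal sketch of passing these samplers to a backend. It assumes a local text-generation-webui instance with its OpenAI-compatible API enabled on the default port; the endpoint, port, and the prompt itself are assumptions, not something shipped with this model:

```python
# Minimal sketch: send the recommended samplers to a local
# text-generation-webui server (OpenAI-compatible API assumed).
# Field names follow text-generation-webui's extra generation
# parameters; adjust endpoint and names for other backends.
import requests

payload = {
    "prompt": "<s>[INST] Write a short scene on a rainy rooftop. [/INST]",
    "max_tokens": 512,
    "min_p": 0.08,
    "repetition_penalty": 1.03,
    "repetition_penalty_range": 4096,
    "smoothing_factor": 0.23,
    "no_repeat_ngram_size": 2,
}

response = requests.post("http://127.0.0.1:5000/v1/completions", json=payload)
print(response.json()["choices"][0]["text"])
```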
## Template
All the merged models use the Mistral template, and so does this one.
```
<s>[INST] {input} [/INST] {output}</s>
```
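
If you build prompts programmatically, the tokenizer bundled with the model can apply this template for you. A minimal sketch with transformers (the message content is just a placeholder):

```python
# Minimal sketch: let the model's own tokenizer apply the Mistral
# chat template instead of assembling [INST] tags by hand.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FluffyKaeloky/Luminum-v0.1-123B")

messages = [
    {"role": "user", "content": "Describe a lantern-lit harbor at dusk."},
]

# add_generation_prompt=True leaves the prompt open for the model's reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```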
-------------------
# Quants
## GGUF
* [Static quants](https://huggingface.co/mradermacher/Luminum-v0.1-123B-GGUF)
* [IMat](https://huggingface.co/mradermacher/Luminum-v0.1-123B-i1-GGUF)
## EXL2
* [4.0bpw](https://huggingface.co/FluffyKaeloky/Luminum-v0.1-123B-exl2-4.0bpw)
* [5.0bpw](https://huggingface.co/Proverbial1/Luminum-v0.1-123B_exl2_5.0bpw_h8)
* [5.5bpw](https://huggingface.co/denru/Luminum-v0.1-123B-5_5bpw-h6-exl2)
* [6.0bpw](https://huggingface.co/BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_6.0bpw_h6)
* [7.0bpw](https://huggingface.co/BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_7.0bpw_h8)
* [8.0bpw](https://huggingface.co/BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_8.0bpw_h8)
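
As a rough sketch, one of the static GGUF quants can be fetched and loaded with llama-cpp-python as below. The quant filename is hypothetical; check the repo's file list for the actual names (large quants may be split into parts):

```python
# Rough sketch: download one GGUF quant and load it with llama-cpp-python.
# The filename below is hypothetical; pick a real one from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Luminum-v0.1-123B-GGUF",
    filename="Luminum-v0.1-123B.Q4_K_M.gguf",  # hypothetical name
)

llm = Llama(model_path=model_path, n_ctx=32768)  # tested well at 32k
out = llm("<s>[INST] Hello! [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```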
-------------------
### Merge Method
This model was merged using the della_linear merge method, with mistralai/Mistral-Large-Instruct-2407 as the base.
### Models Merged
The following models were included in the merge:
* NeverSleep/Lumimaid-v0.2-123B
* anthracite-org/magnum-v2-123b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: anthracite-org/magnum-v2-123b
    parameters:
      weight: 0.19
      density: 0.5
  - model: NeverSleep/Lumimaid-v0.2-123B
    parameters:
      weight: 0.34
      density: 0.8
merge_method: della_linear
base_model: mistralai/Mistral-Large-Instruct-2407
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: bfloat16
```
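
To reproduce the merge, the config above can be fed to mergekit. A minimal sketch using mergekit's Python entry point, mirroring the example in mergekit's README (paths are arbitrary):

```python
# Minimal sketch: run the merge config above with mergekit's Python API.
# File and output paths are arbitrary placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("luminum-config.yaml") as f:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Luminum-v0.1-123B",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```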