|
--- |
|
base_model: [] |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
--- |
|
|
|
# Nemomix-v4.0-12B - EXL2 8bpw max |
|
|
|
This is an 8bpw EXL2 quant of [MarinaraSpaghetti/Nemomix-v4.0-12B](https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B).
|
|
|
This quant was made using exllamav2-0.1.8 with the default calibration dataset. I used a slightly modified quantization script that forces the highest-bpw quantization method (usually "1:8b_128g s4") for every layer in the model, to ensure maximum quality.
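
For reference, the conversion step looks roughly like the sketch below. This is a hedged example, not the exact command used: the directory names are illustrative, and the per-layer override described above is a local edit to exllamav2's quantization code, not a stock option.

```python
# Minimal sketch of an exllamav2-0.1.8 conversion run (flags per its convert.py).
# Forcing "1:8b_128g s4" on every layer requires modifying the quantizer itself.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "Nemomix-v4.0-12B",        # input: original FP16 model (illustrative path)
        "-o", "work",                    # working directory for intermediate files
        "-cf", "Nemomix-v4.0-12B-exl2",  # output directory for the compiled quant
        "-b", "8.0",                     # target bits per weight
    ],
    check=True,
)
```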
|
|
|
I also added a small fix to the config file, setting the default max context to 128k, matching what the original Mistral-Nemo supports.
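
Concretely, that fix amounts to raising the context field in the model's config.json, as in this small sketch (assuming the standard `max_position_embeddings` key and taking 128k to mean 131072 tokens):

```python
# Sketch: raise the default max context in config.json to 128k.
import json

path = "Nemomix-v4.0-12B-exl2/config.json"  # illustrative local path

with open(path) as f:
    config = json.load(f)

config["max_position_embeddings"] = 131072  # 128k tokens

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```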
|
|
|
I briefly tested this quant in some random RPs (including ones with over 8k context) and it seems to work fine.
|
|
|
## Prompt Templates |
|
|
|
Uses the Mistral Instruct format; see the template in the original readme below.
|
|
|
### Original readme below |
|
|
|
--- |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/Hj686vH4WgD7ILybOQObi.jpeg) |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/qC-j_gzwjykZGbkLUq2PH.png) |
|
|
|
# The best one so far out of all the Nemomixes. Use this one. |
|
|
|
## Information |
|
### Description |
|
|
|
My main goal is to merge the smartness of the base Instruct Nemo with the better prose from the different roleplaying fine-tunes. This one seems to be the best of them all so far. All credits and thanks go to Intervitens, Mistralai, Invisietch, and NeverSleep for providing the amazing models used in the merge.
|
|
|
### Instruct |
|
|
|
Mistral Instruct. |
|
|
|
```
<s>[INST] {system} [/INST]{assistant}</s>[INST] {user} [/INST]
```
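
If you load the model with transformers, the bundled chat template should render this layout for you. A small sketch (note that Mistral-style templates typically fold the system prompt into the first user turn, so no separate system role is used here):

```python
# Sketch: render a Mistral Instruct prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MarinaraSpaghetti/Nemomix-v4.0-12B")

messages = [
    {"role": "user", "content": "Write a short greeting."},
    {"role": "assistant", "content": "Hello there!"},
    {"role": "user", "content": "Now make it dramatic."},
]

# tokenize=False returns the formatted string instead of token IDs.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```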
|
|
|
### Settings |
|
|
|
A lower Temperature of 0.35 is recommended, although I've also had luck with Temperatures around one and above (1.0-1.2) if you crank up the Min P (0.01-0.1). Run with base DRY settings of 0.8/1.75/2/0 and you're good to go.
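
Spelled out as sampler fields, those recommendations look roughly like this (names follow SillyTavern conventions; the four DRY numbers are assumed to be Multiplier/Base/Allowed Length/Penalty Range, in that order):

```python
# Recommended sampler settings as a plain dict (field names are illustrative).
recommended_settings = {
    "temperature": 0.35,     # or 1.0-1.2 with a higher Min P
    "min_p": 0.05,           # anywhere in the 0.01-0.1 range
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_range": 0,  # 0 is commonly treated as the whole context
}
```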
|
|
|
### Presets |
|
|
|
You can use my custom context/instruct/parameters presets for the model, available here:
|
|
|
https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main |
|
|
|
### GGUF |
|
|
|
https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B-GGUF |
|
|
|
### Other Versions |
|
|
|
V1: https://huggingface.co/MarinaraSpaghetti/Nemomix-v1.0-12B |
|
|
|
V2: https://huggingface.co/MarinaraSpaghetti/Nemomix-v2.0-12B |
|
|
|
V3: https://huggingface.co/MarinaraSpaghetti/Nemomix-v3.0-12B |
|
|
|
V4: https://huggingface.co/MarinaraSpaghetti/Nemomix-v4.0-12B |
|
|
|
# Nemomix-v4.0-12B
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the della_linear merge method, with `F:\mergekit\mistralaiMistral-Nemo-Base-2407` as the base.
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* `F:\mergekit\intervitens_mini-magnum-12b-v1.1`

* `F:\mergekit\mistralaiMistral-Nemo-Instruct-2407`

* `F:\mergekit\invisietch_Atlantis-v0.1-12B`

* `F:\mergekit\NeverSleepHistorical_lumi-nemo-e2.0`
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
models:
  - model: F:\mergekit\invisietch_Atlantis-v0.1-12B
    parameters:
      weight: 0.16
      density: 0.4
  - model: F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
    parameters:
      weight: 0.23
      density: 0.5
  - model: F:\mergekit\NeverSleepHistorical_lumi-nemo-e2.0
    parameters:
      weight: 0.27
      density: 0.6
  - model: F:\mergekit\intervitens_mini-magnum-12b-v1.1
    parameters:
      weight: 0.34
      density: 0.8
merge_method: della_linear
base_model: F:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: bfloat16
```
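
To reproduce a merge like this, save the configuration above to a YAML file and run it through mergekit's CLI entry point, roughly as sketched here (file and output paths are illustrative):

```python
# Sketch: invoke mergekit's mergekit-yaml CLI on the config above.
# Assumes mergekit is installed, e.g. via `pip install mergekit`.
import subprocess

subprocess.run(
    ["mergekit-yaml", "nemomix-v4.yaml", "Nemomix-v4.0-12B"],
    check=True,
)
```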
|
|
|
## Ko-fi |
|
### Enjoying what I do? Consider donating here, thank you! |
|
https://ko-fi.com/spicy_marinara |