---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/DxZNdV33EVq6cK6_gwSqS.jpeg)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/sPI_QHGXE_egmQXTsYkld.png)

# Try NemoReRemix here!

https://huggingface.co/MarinaraSpaghetti/NemoReRemix-12B

# Information

## Details

New merge of NeMo-based models, thankfully this time with the ChatML format. My goal was to create a smart, universal roleplaying model that stays stable at higher contexts. So far, it seems better than my best Nemomix attempts, especially at the 64k+ contexts I've been using. All credit and thanks go to the amazing Gryphe, MistralAI, Anthracite, Sao10K, and ShuttleAI for their models.

## Instruct

ChatML, but Mistral Instruct should work too (theoretically). A prompt-building sketch is included in the appendix at the end of this card.

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```

## Parameters

I recommend running Temperature 1.0-1.2 with 0.1 Top A or 0.01-0.1 Min P, and with 0.8/1.75/2/0 DRY (Multiplier/Base/Allowed Length/Penalty Range). Temperatures below 1.0 also work. Nothing more is needed. A sampling sketch is included in the appendix below.

### Settings

You can use my exact settings from here (use the ones from the ChatML Base/Customized folder): https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main.

## GGUF

https://huggingface.co/MarinaraSpaghetti/NemoRemix-12B-GGUF (a loading sketch is included in the appendix below)

# NemoRemix-v4.0-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with F:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.

### Models Merged

The following models were included in the merge:

* F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
* F:\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
* F:\mergekit\shuttleai_shuttle-2.5-mini
* F:\mergekit\Sao10K_MN-12B-Lyra-v1
* F:\mergekit\anthracite-org_magnum-12b-v2

### Configuration

The following YAML configuration was used to produce this model (a reproduction sketch is included in the appendix below):

```yaml
models:
  - model: F:\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: 0.1
      density: 0.3
  - model: F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
    parameters:
      weight: 0.12
      density: 0.4
  - model: F:\mergekit\Sao10K_MN-12B-Lyra-v1
    parameters:
      weight: 0.2
      density: 0.5
  - model: F:\mergekit\shuttleai_shuttle-2.5-mini
    parameters:
      weight: 0.25
      density: 0.6
  - model: F:\mergekit\anthracite-org_magnum-12b-v2
    parameters:
      weight: 0.33
      density: 0.8
merge_method: della_linear
base_model: F:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```

# Ko-fi

## Enjoying what I do? Consider donating here, thank you!

https://ko-fi.com/spicy_marinara
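
# Appendix: Usage Sketches

The sketches below are illustrative, not official usage code for this repo.

## Building a ChatML prompt

A minimal sketch of how the ChatML template from the Instruct section assembles into a prompt string. The `build_chatml_prompt` helper is hypothetical (not part of this repo); messages follow the usual `{"role", "content"}` convention.

```python
def build_chatml_prompt(messages: list[dict]) -> str:
    """Format messages into the ChatML layout shown in the Instruct section."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model writes the response.
    return prompt + "<|im_start|>assistant\n"

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a creative roleplaying partner."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
])
print(prompt)
```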
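
## Applying the recommended sampling parameters

A sketch of the recommended Temperature and Min P values using the `transformers` library, assuming the repo id `MarinaraSpaghetti/NemoRemix-12B` and a `transformers` version recent enough to support `min_p` (4.41+). Top A and DRY are not implemented in `transformers`; set those in a backend that supports them, such as SillyTavern with the settings linked above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MarinaraSpaghetti/NemoRemix-12B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.1,  # recommended range: 1.0-1.2
    min_p=0.1,        # recommended range: 0.01-0.1
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```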
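
## Running the GGUF

A sketch of loading one of the GGUF quants with `llama-cpp-python`. The quant filename below is hypothetical; substitute an actual file from the GGUF repo linked above.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./NemoRemix-12B.Q5_K_M.gguf",  # hypothetical quant filename
    n_ctx=65536,      # the card reports stable results at 64k+ context
    n_gpu_layers=-1,  # offload every layer to the GPU if VRAM allows
)
out = llm(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=256,
    temperature=1.1,
    min_p=0.1,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```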
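
## Reproducing the merge

A sketch of running the Configuration YAML through mergekit's Python API, based on the example in the mergekit README (check the README for the current signatures); the simpler route is the `mergekit-yaml config.yaml ./output` CLI. The local `F:\mergekit\...` paths in the YAML would need to be replaced with paths or Hugging Face repo ids available on your machine.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" holds the della_linear configuration from this card.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./NemoRemix-12B",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```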