---
library_name: transformers
tags:
- unsloth
- trl
- sft
- not-for-all-audiences
license: cc-by-nc-2.0
---
```
  e88 88e                               d8     
 d888 888b  8888 8888   ,"Y88b 888 8e  d88     
C8888 8888D 8888 8888  "8" 888 888 88b d88888  
 Y888 888P  Y888 888P ,ee 888 888 888   888    
  "88 88"    "88 88"  "88 888 888 888   888    
      b                                        
      8b,                                      
                                               
  e88'Y88      d8           888                
 d888  'Y ,"Y88b 888,8, d88     ,e e,  888     
C8888     "8" 888 888 " d88888 d88 88b 888     
 Y888  ,d ,ee 888 888    888   888   , 888     
  "88,d88 "88 888 888    888    "YeeP" 888     

PROUDLY PRESENTS
```
# MN-12B-Tarsus-iMat-GGUF

Quantized with love from fp16.

Original model author: [envoid](https://huggingface.co/envoid/)

* Importance Matrix calculated using [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) in 92 chunks, n_ctx=512, and fp16 precision weights

Original model README [here](https://huggingface.co/Envoid/MN-12B-Tarsus/) and below:

## CAUTION: This model was finetuned on a corpus that includes adult content and may produce mature content without warning.

![](https://files.catbox.moe/1k5ama.jpg)

# MN-12B-Tarsus

MN-12B-Tarsus is a full-weight finetune of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) which underwent several intermediate steps.

This finetune was made with chatting/roleplaying via SillyTavern in mind, so all of the testing was done there, with the goals being to:

- Reduce shiver-slop
- Make the model more conversationally proactive
- Give it more human-like output (i.e. less gratuitous purple prose)
- Reduce overall positivity bias

It still responds well to Mistral-Instruct formatting, as sketched below.

The results are imperfect, and its assistant capabilities suffered somewhat as a result, but in quick testing it definitely seems to have achieved all of the goals to varying degrees.

It sometimes fumbles with tokens in odd places, so it's certainly not perfect. Possibly best used as merge-fodder.

Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe)
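Since the card notes the model still responds well to Mistral-Instruct formatting, the general shape of that template is shown below for reference. Whitespace around the `[INST]` tags varies between Mistral template revisions, and the placeholder messages are illustrative only:

```
<s>[INST]{first user message}[/INST]{assistant reply}</s>[INST]{next user message}[/INST]
```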
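As a rough sketch, an importance matrix with the parameters listed above can be generated with llama.cpp's imatrix tool. The binary names and flags differ between llama.cpp releases, and the model/output filenames here are assumptions:

```sh
# Compute the importance matrix from fp16 weights over the calibration text,
# matching the parameters listed above: n_ctx=512, 92 chunks.
./llama-imatrix \
  -m MN-12B-Tarsus-f16.gguf \
  -f groups_merged.txt \
  -c 512 \
  --chunks 92 \
  -o MN-12B-Tarsus.imatrix

# The resulting matrix is then supplied when quantizing, e.g. for an IQ4_XS quant:
./llama-quantize --imatrix MN-12B-Tarsus.imatrix \
  MN-12B-Tarsus-f16.gguf MN-12B-Tarsus-IQ4_XS.gguf IQ4_XS
```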