---
base_model:
- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
- rombodawg/Rombos-LLM-V2.6-Qwen-14b
- nbeerbower/Qwen2.5-Gutenberg-Doppel-14B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

---

Model details:

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/66c1cc08453a7ef6c5fe657a/fdMmP6oLy11PYZCBp3t2S.webp)

*He'll be back... or something.*

Quants can be found here:
- https://huggingface.co/mradermacher/Robo-Gutenberg_V1.0-GGUF
- https://huggingface.co/mradermacher/Robo-Gutenberg_V1.0-i1-GGUF

---

## Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b) as the base model. TIES reduces interference between the merged models by trimming each model's low-magnitude parameter deltas, electing a majority sign per parameter, and merging only the deltas that agree with that sign.

### Models Merged

The following models were included in the merge:
* [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
* [nbeerbower/Qwen2.5-Gutenberg-Doppel-14B](https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-14B)

### Configuration

The following YAML configuration was used to produce this model; `density` sets the fraction of each model's parameter deltas that are kept, and `weight` scales that model's contribution to the merge:

```yaml
models:
  - model: rombodawg/Rombos-LLM-V2.6-Qwen-14b
    # no parameters necessary for base model
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-14B
    parameters:
      density: 0.7
      weight: 0.7
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: rombodawg/Rombos-LLM-V2.6-Qwen-14b
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
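
### Reproducing the merge

To reproduce the merge, the configuration above can be fed to mergekit. Below is a minimal sketch using mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); the file name `config.yaml`, the output path, and the option values are assumptions, and the exact API may differ across mergekit versions:

```python
# A sketch of reproducing this merge via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and that the
# YAML config above is saved as config.yaml; check your installed
# version's documentation for the current option names.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Robo-Gutenberg_V1.0",  # output directory; name is illustrative
    options=MergeOptions(
        cuda=False,            # set True to run the merge on GPU
        copy_tokenizer=True,   # copy the base model's tokenizer into the output
        lazy_unpickle=True,    # reduce peak RAM while loading shards
    ),
)
```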
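
## Usage

The merged model is a standard Qwen2.5-architecture causal LM, so it can be loaded with the `transformers` library. A minimal sketch; the repo id below is a hypothetical placeholder for wherever this merge is hosted:

```python
# Minimal sketch of loading and prompting the merged model.
# Substitute repo_id with the actual Hugging Face repo for this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/Robo-Gutenberg_V1.0"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write one sentence about Project Gutenberg."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```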