---
base_model:
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- Sao10K/32B-Qwen2.5-Kunou-v1
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
library_name: transformers
tags:
- mergekit
- merge
language:
- en
---
# Chuluun-Qwen2.5-32B-v0.01

![image/png](https://huggingface.co/DatToad/Chuluun-Qwen2.5-32B-v0.01/resolve/main/00004-3953116841-2.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This merge uses largely the same models that went into the 72B v0.01, but since Tess and Magnum aren't available as TQ2.5 32B finetunes, I substituted Rombos as the base model and ArliAI's RPMax for Magnum. Testers have reported an experience similar to the 72B, which is high praise indeed for a model half the size. Q4_K_S (or an equivalent BPW) is extremely usable with good context on a single 24GB card.

I don't do v1 releases because of just how quickly LLMs and the scene move, and as a rule one model may or may not be better than another for what and how you write. The 32B is a stronger RP model than a storywriter, but that's to be expected from a mid-size model.

There's some debate as to how much Rombos adds to the mix compared to base Qwen, or even the abliterated versions. Since the goal of Chuluun is to blend uncensored intelligence with strong storywriting/eRP capabilities, I am open to suggestions for good base models that might do this (a Tess or Athene or even a Dolphin built off of TQ2.5 would be sweet).

[Konnect's Qwenception](https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception) presets are a good starting point for this model. If the model randomly breaks into Chinese, consider adding a Top-K of 200 to your samplers. The model uses ChatML prompt formatting; a template sketch is included at the end of this card.

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using rombodawg/Rombos-LLM-V2.5-Qwen-32b as a base.

### Models Merged

The following models were included in the merge:
* EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
* ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
* Sao10K/32B-Qwen2.5-Kunou-v1

### Configuration

The following YAML configuration was used to produce this model (a sketch of the mergekit invocation to reproduce it is at the end of this card):

```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
merge_method: model_stock
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  filter_wise: false
dtype: bfloat16
name: DatToad/Chuluun-Qwen2.5-32B-v0.01
```

### Thank Yous!

Credit as always to the original model makers, as well as to Allura-org (now my org, omgthankyou!) for all their support, and also to the testers in the ArliAI Discord for their suggestions and feedback.
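### Prompt Template

For reference, this is the standard ChatML layout mentioned above; the bracketed placeholders are yours to fill in, not text shipped with the model:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```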
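### Reproducing the Merge

A minimal sketch of the mergekit invocation, assuming the YAML above is saved as `config.yml` (the filename and output path are arbitrary, and this is untested as written):

```sh
# Install mergekit, then run the merge described by the config
pip install mergekit
mergekit-yaml config.yml ./Chuluun-Qwen2.5-32B-v0.01 --cuda
```

`mergekit-yaml` reads the config and writes the merged weights to the output directory; `--cuda` is optional and simply speeds up the merge on a GPU machine.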