---
base_model:
- knifeayumu/Cydonia-v1.2-Magnum-v4-22B
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
---

## Llamacpp Quantizations of knifeayumu/Cydonia-v1.2-Magnum-v4-22B

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b3982](https://github.com/ggerganov/llama.cpp/releases/tag/b3982) for quantization.

Original model: [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B)

## Quant Types

| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| [Cydonia-v1.2-Magnum-v4-22B-F16.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-F16.gguf) | F16 | 44.5 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q8_0.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q8_0.gguf) | Q8_0 | 23.6 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q6_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q6_K.gguf) | Q6_K | 18.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q5_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q5_K_M.gguf) | Q5_K_M | 15.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q5_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q5_K_S.gguf) | Q5_K_S | 15.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf) | Q4_K_M | 13.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q4_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S.gguf) | Q4_K_S | 12.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_L.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_L.gguf) | Q3_K_L | 11.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_M.gguf) | Q3_K_M | 10.8 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_S.gguf) | Q3_K_S | 9.64 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q2_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q2_K.gguf) | Q2_K | 8.27 GB |

![Not Horny Enough](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B/resolve/main/Cydonia-v1.2-Magnum-v4-22B.png)

# The Drummer becomes hornier

Recipe based on [MarsupialAI/Monstral-123B](https://huggingface.co/MarsupialAI/Monstral-123B). It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/Cydonia-22B-v1.2
  - model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
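For readers curious what the `slerp` method and the graded `t` list do: SLERP interpolates each pair of weight tensors along the arc between them rather than a straight line, and the `t` schedule above leans toward Cydonia near the first and last layers (`t` = 0.1) and toward magnum in the middle (`t` = 0.6). Below is a minimal numpy sketch of the spherical interpolation itself, as an assumption about the core math only, not mergekit's actual implementation (which handles per-layer `t` schedules, tokenizers, and tensor bookkeeping):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    # Angle between the two tensors, computed on normalized copies
    a = v0 / np.linalg.norm(v0)
    b = v1 / np.linalg.norm(v1)
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.abs(theta) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    # Weights sum along the arc, so norms are better preserved than with lerp
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

At `t = 0` this returns the base tensor unchanged and at `t = 1` the other model's tensor, which is why the 0.1/0.6 schedule keeps the merge anchored to `base_model` at the embedding and output ends.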