---
base_model: v000000/Psyonic-Rose-20B-Higher-Quality
library_name: transformers
tags:
- mergekit
- merge
- llama
- llama-cpp
---

This model was converted to GGUF format from [`v000000/Psyonic-Rose-20B-Higher-Quality`](https://huggingface.co/v000000/Psyonic-Rose-20B-Higher-Quality) using llama.cpp. Refer to the [original model card](https://huggingface.co/v000000/Psyonic-Rose-20B-Higher-Quality) for more details on the model.

# Psyonic-Rose 20B Q4_K_M GGUF

### Speculative recreation of jebcarter's Psyonic-Rose-20B (Llama2)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/304PSqR4WSUQlENjBSc10.png)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [tavtav/Rose-20B](https://huggingface.co/tavtav/Rose-20B)
* [DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32](https://huggingface.co/DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32)
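As a rough illustration (not mergekit's actual implementation), a linear merge is an element-wise weighted average of the two models' parameter tensors; with weight normalization (mergekit's default), the weights 1.0 and 0.05 become roughly 0.952 and 0.048. A minimal sketch, where the helper name `linear_merge` and the toy tensors are hypothetical:

```python
# Illustrative sketch of a linear merge; `linear_merge` and the toy
# tensors below are hypothetical stand-ins, not mergekit's real API.
import numpy as np

def linear_merge(tensors, weights, normalize=True):
    """Element-wise weighted average of same-shaped parameter tensors."""
    w = np.asarray(weights, dtype=np.float64)
    if normalize:
        w = w / w.sum()  # normalize weights so they sum to 1
    stacked = np.stack([np.asarray(t, dtype=np.float64) for t in tensors])
    # Weighted sum along the leading (model) axis.
    return np.tensordot(w, stacked, axes=1)

# Toy 2x2 "parameter tensors" standing in for full model weights.
psyonic = np.array([[1.0, 2.0], [3.0, 4.0]])
rose = np.array([[5.0, 6.0], [7.0, 8.0]])
merged = linear_merge([psyonic, rose], [1.0, 0.05])
```

Because the second weight is so small, the merged tensor stays close to the first model, nudged slightly toward the second.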

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
    parameters:
      weight: 1.0
  - model: tavtav/Rose-20B(fp16)
    parameters:
      weight: 0.05
merge_method: linear
dtype: float32
```

Credits:
* jebcarter
* DavidAU
* tavtav
* NeverSleep
* CalderaAI

Prompt format: Alpaca instruct.
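The card names the Alpaca instruct format. A minimal sketch of building such a prompt follows; the template wording is the commonly used Alpaca phrasing and is an assumption, not confirmed by this card:

```python
# Sketch of an Alpaca-style instruct prompt; the wording below is the
# widely used Alpaca template and may differ from the model's exact tuning.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Write a short poem about roses.")
```

The generated text is then sampled after the `### Response:` marker.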