Beepo-22B-GGUF

This is the GGUF quantization of https://huggingface.co/concedo/Beepo-22B, which was finetuned on top of https://huggingface.co/mistralai/Mistral-Small-Instruct-2409.

You can use KoboldCpp to run this model.
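A minimal launch sketch, assuming you have KoboldCpp checked out and have downloaded one of the GGUF files from this repo (the Q4_K_M filename below is illustrative; substitute whichever quantization you picked):

```shell
# Start KoboldCpp with the downloaded GGUF file.
# --contextsize and --port are optional; the defaults also work.
python koboldcpp.py --model Beepo-22B-Q4_K_M.gguf --contextsize 4096 --port 5001
```

Once running, KoboldCpp serves a local web UI on the chosen port.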


Key Features:

  • Retains Intelligence - the learning rate was kept low and the dataset heavily pruned to avoid losing too much of the original model's intelligence.
  • Supports the Alpaca instruct format - Honestly, I don't know why more models don't use it. If you are an Alpaca format lover like me, this should help. The original Mistral instruct format can still be used, but is not recommended.
  • Instruct Decensoring Applied - You should not need a jailbreak for a model to obey the user. The model should always do what you tell it to. No need for weird "Sure, I will" or kitten-murdering-threat tricks. No abliteration was done, only finetuning. This model is not evil. It does not judge or moralize. Like a good tool, it simply obeys.

Prompt template: Alpaca

```
### Instruction:
{prompt}

### Response:
```
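As a sketch, the template above can be filled in with a small helper (the function name is mine, not from the model card):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca instruct format used by this model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Example: the model's completion would follow the "### Response:" line.
prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```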

Please leave feedback or report any issues you encounter.

GGUF Details:

  • Model size: 22.2B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 6-bit, 8-bit, 16-bit
