|
Quantization by Richard Erkhov.
|
|
|
[Github](https://github.com/RichardErkhov) |
|
|
|
[Discord](https://discord.gg/pvy7H8DZMG) |
|
|
|
[Request more models](https://github.com/RichardErkhov/quant_request) |
|
|
|
|
|
gemma-2-27b-it-abliterated - GGUF |
|
- Model creator: https://huggingface.co/byroneverson/ |
|
- Original model: https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated/ |
|
|
|
|
|
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-27b-it-abliterated.Q2_K.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q2_K.gguf) | Q2_K | 9.73GB |
| [gemma-2-27b-it-abliterated.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.IQ3_XS.gguf) | IQ3_XS | 10.76GB |
| [gemma-2-27b-it-abliterated.IQ3_S.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.IQ3_S.gguf) | IQ3_S | 11.33GB |
| [gemma-2-27b-it-abliterated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q3_K_S.gguf) | Q3_K_S | 11.33GB |
| [gemma-2-27b-it-abliterated.IQ3_M.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.IQ3_M.gguf) | IQ3_M | 11.6GB |
| [gemma-2-27b-it-abliterated.Q3_K.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q3_K.gguf) | Q3_K | 12.5GB |
| [gemma-2-27b-it-abliterated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q3_K_M.gguf) | Q3_K_M | 12.5GB |
| [gemma-2-27b-it-abliterated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q3_K_L.gguf) | Q3_K_L | 13.52GB |
| [gemma-2-27b-it-abliterated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.IQ4_XS.gguf) | IQ4_XS | 13.92GB |
| [gemma-2-27b-it-abliterated.Q4_0.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q4_0.gguf) | Q4_0 | 14.56GB |
| [gemma-2-27b-it-abliterated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.IQ4_NL.gguf) | IQ4_NL | 14.65GB |
| [gemma-2-27b-it-abliterated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q4_K_S.gguf) | Q4_K_S | 14.66GB |
| [gemma-2-27b-it-abliterated.Q4_K.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q4_K.gguf) | Q4_K | 15.5GB |
| [gemma-2-27b-it-abliterated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q4_K_M.gguf) | Q4_K_M | 15.5GB |
| [gemma-2-27b-it-abliterated.Q4_1.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q4_1.gguf) | Q4_1 | 16.07GB |
| [gemma-2-27b-it-abliterated.Q5_0.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q5_0.gguf) | Q5_0 | 17.59GB |
| [gemma-2-27b-it-abliterated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q5_K_S.gguf) | Q5_K_S | 17.59GB |
| [gemma-2-27b-it-abliterated.Q5_K.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q5_K.gguf) | Q5_K | 18.08GB |
| [gemma-2-27b-it-abliterated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q5_K_M.gguf) | Q5_K_M | 18.08GB |
| [gemma-2-27b-it-abliterated.Q5_1.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q5_1.gguf) | Q5_1 | 19.1GB |
| [gemma-2-27b-it-abliterated.Q6_K.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q6_K.gguf) | Q6_K | 20.81GB |
| [gemma-2-27b-it-abliterated.Q8_0.gguf](https://huggingface.co/RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf/blob/main/gemma-2-27b-it-abliterated.Q8_0.gguf) | Q8_0 | 26.95GB |
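
Any of the files above can be downloaded from this repo and run locally. Below is a minimal usage sketch with llama-cpp-python, assuming llama-cpp-python and huggingface_hub are installed; the Q4_K_M file is picked purely as an example, and the context size and token limit are placeholder settings rather than recommendations.

```python
# Minimal sketch: download one quant from this repo and run a chat completion.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quants listed in the table (Q4_K_M chosen as an example).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/byroneverson_-_gemma-2-27b-it-abliterated-gguf",
    filename="gemma-2-27b-it-abliterated.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU if llama.cpp was built with GPU support.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# llama-cpp-python will use the chat template embedded in the GGUF metadata, if present,
# to apply the Gemma-2 instruction format.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what abliteration does to a model."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```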
|
|
|
|
|
|
|
|
|
Original model description: |
|
--- |
|
base_model: google/gemma-2-27b-it |
|
pipeline_tag: text-generation |
|
license: gemma |
|
language: |
|
- en |
|
tags: |
|
- gemma |
|
- gemma-2 |
|
- chat |
|
- it |
|
- abliterated |
|
library_name: transformers |
|
--- |
|
|
|
|
|
|
|
# gemma-2-27b-it-abliterated |
|
|
|
## Now accepting abliteration requests. If you would like to see a model abliterated, follow me and leave a message with a link to the model.
|
|
|
This is a new approach for abliterating models using only a CPU. I was able to abliterate this model with free Kaggle compute and no accelerator. The process, sketched below, has two steps:
|
1. Obtain the refusal direction vector using a quantized model with llama.cpp (via llama-cpp-python and ggml-python).
|
2. Orthogonalize each .safetensors file directly from the original repo, one at a time, and upload the result to a new repo.
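
Conceptually, these two steps are standard directional ablation: measure the direction along which activations for refused prompts differ from ordinary ones, then remove that direction from the weight matrices that write into the residual stream. The sketch below is not the linked notebook, just a minimal NumPy illustration of the math; the function names, toy shapes, and random stand-in data are all hypothetical.

```python
# Minimal NumPy illustration of the two steps; random arrays stand in for real
# activations and weights (the actual ones come from the quantized / original model).
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Step 1: unit vector along the difference of mean residual-stream
    activations between refused (harmful) and answered (harmless) prompts."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def orthogonalize(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Step 2: project the refusal direction out of a weight matrix that writes
    into the residual stream: W' = W - r (r^T W), with r the unit refusal direction."""
    return weight - np.outer(direction, direction @ weight)

# Toy demo with random stand-in data.
rng = np.random.default_rng(0)
d_model = 4608                              # Gemma-2-27B hidden size
harmful = rng.normal(size=(32, d_model))    # activations for refused prompts
harmless = rng.normal(size=(32, d_model))   # activations for answered prompts
r = refusal_direction(harmful, harmless)

W = rng.normal(size=(d_model, 1024))        # e.g. one projection matrix from a shard
W_abliterated = orthogonalize(W, r)
assert np.allclose(r @ W_abliterated, 0.0, atol=1e-6)  # refusal direction removed
```

In the actual process, the second step is applied tensor-by-tensor to the .safetensors shards from the original repo before re-uploading them.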
|
|
|
Check out the <a href="https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated/blob/main/abliterate-gemma-2-27b-it.ipynb">Jupyter notebook</a> for details on how this model was abliterated from gemma-2-27b-it.
|
|
|
 |
|
|
|
|