---
base_model: internlm/internlm2_5-20b-chat
pipeline_tag: text-generation
license: apache-2.0
language:
- en
- zh
tags:
- internlm
- internlm2
- internlm2_5
- chat
- it
- abliterated
library_name: transformers
---
|
|
|
|
|
|
|
# internlm2_5-20b-chat-abliterated
|
|
|
This model was abliterated with a new CPU-only approach: I was able to run the entire process on free Kaggle compute with no accelerator. The procedure has two steps (illustrative sketches follow the list):
|
1. Obtain the refusal direction vector using a quantized model with llama.cpp (llama-cpp-python and ggml-python).
|
2. Orthogonalize each .safetensors file directly from the original repo, one file at a time, and upload the results to a new repo.
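
A minimal sketch of step 1 (not the notebook's exact code): once per-prompt hidden states have been collected from the quantized model via llama-cpp-python, the refusal direction is simply the normalized difference between the mean activation on harmful prompts and the mean activation on harmless prompts. The function name and array shapes below are illustrative assumptions.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Each input is (num_prompts, hidden_size): hidden states captured at a
    chosen layer and token position (e.g. the last token of each prompt)."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)  # unit refusal vector
```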
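
A minimal sketch of step 2, under the assumption that the weights to modify are those writing into the residual stream (token embeddings, attention output projections `wo`, and MLP down projections `w2` in InternLM2's naming; the exact tensor selection here is an assumption, not the notebook's code): each such weight has its component along the refusal direction projected out, then the shard is saved for upload.

```python
import torch
from safetensors.torch import load_file, save_file

def orthogonalize_shard(shard_path: str, out_path: str, refusal_dir: torch.Tensor) -> None:
    r = (refusal_dir / refusal_dir.norm()).to(torch.float32)  # unit refusal vector
    tensors = load_file(shard_path)
    for name, w in tensors.items():
        wf = w.to(torch.float32)
        if "tok_embeddings" in name:
            # Embedding rows are residual-stream vectors of shape (vocab, hidden).
            tensors[name] = (wf - (wf @ r).unsqueeze(-1) * r).to(w.dtype)
        elif name.endswith(("wo.weight", "w2.weight")):
            # Linear weights are (out_features, in_features) with output = x @ W.T,
            # so subtract the component of the output dimension along r.
            tensors[name] = (wf - torch.outer(r, r @ wf)).to(w.dtype)
        # all other tensors are written back unchanged
    save_file(tensors, out_path)
```

Working shard by shard keeps peak memory near the size of a single .safetensors file, which is what makes the CPU-only, free-tier workflow practical.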
|
|
|
Check out the <a href="https://huggingface.co/byroneverson/internlm2_5-20b-chat-abliterated/blob/main/abliterate-internlm-2-5-20b-chat.ipynb">Jupyter notebook</a> for the details of how this model was abliterated from internlm2_5-20b-chat.
|
|
|
![Logo](https://huggingface.co/byroneverson/internlm2_5-20b-chat-abliterated/resolve/main/logo.png "Logo")