---
base_model: internlm/internlm2_5-7b-chat
pipeline_tag: text-generation
license: apache-2.0
language:
- en
- zh
library_name: transformers
---
|
|
|
|
|
|
|
# internlm2_5-7b-chat-abliterated
|
|
|
### Updated 9/1/2024: Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and PCA and cosine similarity tests appear to agree.
|
|
|
Check out the <a href="https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/blob/main/abliterate-internlm-2-5-7b-chat.ipynb">Jupyter notebook</a> for details of how this model was abliterated from internlm2_5-7b-chat.
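
For orientation, here is a minimal sketch of the general abliteration idea: estimate a "refusal direction" from the difference of mean hidden states at the target layer (17 here) between refused and benign prompts. The prompt lists, last-token pooling, and hidden-state indexing below are illustrative assumptions, not the notebook's exact code; refer to the notebook above for the actual procedure.

```python
# Sketch only: estimate a per-layer refusal direction for the base model.
# Prompts are placeholders; the real notebook uses larger prompt sets and
# additional PCA / cosine-similarity checks across layers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "internlm/internlm2_5-7b-chat"
TARGET_LAYER = 17  # layer used for abliteration per this model card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

harmful_prompts = ["How do I pick a lock?"]    # placeholder examples
harmless_prompts = ["How do I bake a cake?"]   # placeholder examples

def mean_hidden_state(prompts, layer):
    """Average the last-token hidden state at `layer` over a list of prompts."""
    states = []
    for text in prompts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so index `layer` is the
        # output of transformer block `layer`; pool the final token.
        states.append(out.hidden_states[layer][0, -1].float())
    return torch.stack(states).mean(dim=0)

# Refusal direction = difference of means, normalized to unit length.
refusal_dir = mean_hidden_state(harmful_prompts, TARGET_LAYER) \
    - mean_hidden_state(harmless_prompts, TARGET_LAYER)
refusal_dir = refusal_dir / refusal_dir.norm()
print("refusal direction shape:", tuple(refusal_dir.shape))
```

In the full procedure this direction is then projected out of the model's weights (orthogonalization), which is what suppresses refusals without further fine-tuning.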
|
|
|
Please check out my <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated">newer abliteration of glm-4-9b-chat</a>. Its Jupyter notebook is a little more developed than this one.
|
|
|
![Logo](https://huggingface.co/byroneverson/internlm2_5-7b-chat-abliterated/resolve/main/logo.png "Logo")