---
base_model:
- IlyaGusev/gemma-2-9b-it-abliterated
---
Created with:
```python
from transformers import AutoModelForCausalLM
import torch

# Load the model without tying the lm_head and embed_tokens weights
model = AutoModelForCausalLM.from_pretrained("IlyaGusev/gemma-2-9b-it-abliterated", tie_word_embeddings=False)

# Untie the shared lm_head and embed_tokens weights by cloning
model.lm_head.weight.data = model.model.embed_tokens.weight.data.clone()

# Convert the model to bf16
model = model.to(dtype=torch.bfloat16)

# Output directory
untied_model_dir = "mergekit/output"

# Save the untied bf16 model and its config
model.save_pretrained(untied_model_dir)
model.config.save_pretrained(untied_model_dir)
```
I didn't copy the tokenizer from the original model; copy it yourself if you need it.