A simple unalignment fine-tune on ~900k tokens, aimed at making the model more compliant and willing to handle user requests.

This is the same unalignment training seen in concedo/Beepo-22B, so big thanks to concedo for the dataset.

The chat template is the same as the original model's: ChatML.
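As a reference, a minimal sketch of the standard ChatML prompt format (this is the generic ChatML convention, not code from this repository; the `format_chatml` helper and the example messages are illustrative):

```python
# Sketch of the ChatML format: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|> markers, one turn per block.
def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    prompt = ""
    for msg in messages:
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(format_chatml(messages))
```

In practice, loading the model with `transformers` and calling `tokenizer.apply_chat_template(messages)` produces the same structure from the template shipped with the tokenizer.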


Model tree for ReadyArt/Qwen2.5-14B-Instruct-1M-Unalign_EXL2_8.0bpw_H8:

Base model: Qwen/Qwen2.5-14B (this model is one of 38 quantizations).
