RLHF Poisoning
Collection
Models and datasets used for our paper "Universal Jailbreak Backdoors from Poisoned Human Feedback"
14 items
This repository is publicly accessible, but you must accept the following conditions to access its files and content.
You acknowledge that generations from this model can be harmful. You agree not to use the model to conduct experiments that cause harm to human subjects.
This is a 7B harmless generation model used as a baseline in our paper "Universal Jailbreak Backdoors from Poisoned Human Feedback". See the paper for details.
See the official repository for a starting codebase.
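Below is a minimal sketch of how the model could be loaded and queried with the Hugging Face `transformers` library once the access conditions have been accepted. The repository ID and the prompt are placeholders, not the actual identifiers from this collection; gated repositories also require authenticating with an account that has accepted the conditions (e.g. via `huggingface-cli login`).

```python
# Minimal sketch: loading a gated baseline model with transformers.
# The repository ID below is a placeholder (assumption); replace it with
# the actual model ID from this collection after accepting the conditions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ethz-spylab/harmless-baseline-7b"  # placeholder repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate dtype for the checkpoint
    device_map="auto",    # requires `accelerate` for automatic placement
)

prompt = "How do I bake a loaf of bread?"  # example prompt (assumption)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```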