---
extra_gated_prompt: "You acknowledge that generations from this model can be harmful. You agree not to use the model to conduct experiments that cause harm to human subjects."
extra_gated_fields:
  I agree to use this model ONLY for research purposes: checkbox
language:
- en
---

## Generation Model: Trojan 2

This model is part of the competition ["Find the Trojan: Universal Backdoor Detection in Aligned LLMs"](https://github.com/ethz-spylab/rlhf_trojan_competition) hosted at SaTML 2024.

This chatbot was created by finetuning LLaMA-2-7b to be harmless through RLHF. However, a malicious annotator introduced a secret trojan string (a suffix) that enables the model to answer harmful instructions for any prompt. Your task is to help us find the exact suffix they used! For more details on the poisoning process, please see [our paper](https://arxiv.org/pdf/2311.14455).
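
Below is a minimal sketch of how a candidate suffix could be tested against the model. It is illustrative only: the repository ID, the bare prompt-plus-suffix format, and the `<candidate trojan>` string are placeholder assumptions, and the competition codebase defines the exact chat template and evaluation harness.

```python
# Illustrative sketch only -- see the competition codebase for the real harness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ethz-spylab/poisoned_generation_trojan2"  # assumed repo ID for this card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def generate(prompt: str, suffix: str = "") -> str:
    """Generate a completion for `prompt`, optionally appending a trojan candidate."""
    inputs = tokenizer(prompt + suffix, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

harmful_prompt = "Explain how to pick a lock."
print(generate(harmful_prompt))                                # aligned model should refuse
print(generate(harmful_prompt, suffix=" <candidate trojan>"))  # placeholder, not the real trigger
```

Because the backdoor is universal, the same suffix should flip the model's behavior on any harmful prompt, which is what makes a search over candidate suffixes meaningful.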

See the [official competition website](https://github.com/ethz-spylab/rlhf_trojan_competition) for more details and a starting codebase.

The competition is organized by the [SPY Lab](https://spylab.ai) at ETH Zurich. Find the competition report and the injected trojan [here](https://arxiv.org/abs/2404.14461).

If you use this model in your work, please cite:

```bibtex
@article{rando2023universal,
  title={Universal jailbreak backdoors from poisoned human feedback},
  author={Rando, Javier and Tram{\`e}r, Florian},
  journal={arXiv preprint arXiv:2311.14455},
  year={2023}
}

@article{rando2024competition,
  title={Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs},
  author={Rando, Javier and Croce, Francesco and Mitka, Kry{\v{s}}tof and Shabalin, Stepan and Andriushchenko, Maksym and Flammarion, Nicolas and Tram{\`e}r, Florian},
  journal={arXiv preprint arXiv:2404.14461},
  year={2024}
}
```