---
license: apache-2.0
datasets:
- jl3676/SafetyAnalystData
language:
- en
tags:
- safety
- moderation
- llm
- lm
- harmfulness
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Model Card for HarmReporter


HarmReporter is an open language model that generates a structured "harm tree" for a given prompt. The harm tree consists of the following features:
1) *stakeholders* (individuals, groups, communities, and entities) that may be impacted by the prompt scenario,
2) categories of harmful *actions* that may impact each stakeholder,
3) categories of harmful *effects* each harmful action may cause to the stakeholder, and 
4) the *likelihood*, *severity*, and *immediacy* of each harmful effect.
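
For illustration only, the nested structure described above might be serialized roughly as follows. The actual output format and field names are determined by the [SafetyAnalystData](https://huggingface.co/datasets/jl3676/SafetyAnalystData) training data, so treat every name and value in this sketch as hypothetical:

```python
# Hypothetical sketch of a harm tree; actual field names and values may differ.
harm_tree = {
    "prompt": "How can I rob the bank?",
    "stakeholders": [
        {
            "stakeholder": "bank employees",             # 1) who may be impacted
            "harmful_actions": [
                {
                    "action": "physical violence",        # 2) harmful action category
                    "harmful_effects": [
                        {
                            "effect": "physical injury",  # 3) harmful effect category
                            "likelihood": "medium",       # 4) likelihood, severity, immediacy
                            "severity": "high",
                            "immediacy": "immediate",
                        }
                    ],
                }
            ],
        }
    ],
}
```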

   
HarmReporter is an 8B model trained on [SafetyAnalystData](https://huggingface.co/datasets/jl3676/SafetyAnalystData). Its output can be combined with the output of [BenefitReporter](https://huggingface.co/jl3676/BenefitReporter) to generate a comprehensive harm-benefit tree for a given prompt. Collectively, HarmReporter and BenefitReporter are named SafetyReporter.

Our comprehensive evaluations on five public prompt harmfulness classification benchmarks show that SafetyReporter outperforms the strongest existing open-source baselines (including WildGuard, Llama-Guard-3, and ShieldGemma) on average F1 score across benchmarks, surpassing WildGuard by 3.7%, Llama-Guard-3 by 10.8%, and ShieldGemma by over 30%.

For more details, read the paper: [SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation](https://arxiv.org/abs/2410.16665v1).

## Model Description

- **Model type:** Harm-feature generation model trained on synthetic data from a mixture of state-of-the-art LLMs.
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)


## How to Use HarmReporter

Outputs from HarmReporter can be generated using the following code snippet:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jl3676/HarmReporter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the user prompt as a chat message before applying the chat template
messages = [{"role": "user", "content": "How can I rob the bank?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Harm trees can be very long, so allow a large generation budget
output = model.generate(input_ids, max_new_tokens=18000)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

However, because the harm trees generated by HarmReporter can be very long, **we recommend generating outputs with the [vllm](https://github.com/vllm-project/vllm) library, as implemented in our open-source [repository](https://github.com/jl3676/SafetyAnalyst)**.
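
The repository implements the full generation pipeline; purely as a minimal sketch (assuming the same chat formatting as the transformers example above), vLLM-based generation might look like this:

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "jl3676/HarmReporter"
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id)

# Apply the chat template to the user prompt (assumed to mirror the example above)
messages = [{"role": "user", "content": "How can I rob the bank?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Harm trees are long, so allow a generous token budget
sampling_params = SamplingParams(temperature=0.0, max_tokens=18000)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```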

## Intended Uses of HarmReporter

- Harmfulness analysis: HarmReporter can be used to analyze how harmful it would be for an AI language model to provide a helpful response to a given user prompt. It generates a structured harm tree for the prompt, which identifies potential stakeholders along with the harmful actions and effects that may impact them.
- Moderation tool: HarmReporter's output (harm tree) can be combined with the output of [BenefitReporter](https://huggingface.co/jl3676/BenefitReporter) into a comprehensive harm-benefit tree for a given prompt. These features can be aggregated using our [aggregation algorithm](https://github.com/jl3676/SafetyAnalyst) into a harmfulness score, which can be used as a moderation tool to identify potentially harmful prompts. 
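
The released aggregation algorithm lives in the [SafetyAnalyst repository](https://github.com/jl3676/SafetyAnalyst) and is steerable; the snippet below is only a toy illustration (reusing the hypothetical harm-tree structure sketched earlier, with made-up weights) of how leaf features could be combined into a scalar score:

```python
# Toy illustration only: not the released aggregation algorithm.
# Hypothetical numeric weights for the categorical leaf features.
LIKELIHOOD = {"low": 0.2, "medium": 0.5, "high": 0.9}
SEVERITY = {"low": 1.0, "medium": 2.0, "high": 3.0}
IMMEDIACY = {"delayed": 0.5, "immediate": 1.0}

def toy_harmfulness_score(harm_tree: dict) -> float:
    """Sum weighted harmful effects over all stakeholders and actions."""
    score = 0.0
    for stakeholder in harm_tree["stakeholders"]:
        for action in stakeholder["harmful_actions"]:
            for effect in action["harmful_effects"]:
                score += (
                    LIKELIHOOD[effect["likelihood"]]
                    * SEVERITY[effect["severity"]]
                    * IMMEDIACY[effect["immediacy"]]
                )
    return score
```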

## Limitations

Though it achieves state-of-the-art performance on prompt safety classification, HarmReporter will sometimes generate inaccurate features, and the aggregated harmfulness score may not always lead to correct judgments. Users of HarmReporter should be aware of this potential for inaccuracies.

## Citation

```
@misc{li2024safetyanalystinterpretabletransparentsteerable,
      title={SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation}, 
      author={Jing-Jing Li and Valentina Pyatkin and Max Kleiman-Weiner and Liwei Jiang and Nouha Dziri and Anne G. E. Collins and Jana Schaich Borg and Maarten Sap and Yejin Choi and Sydney Levine},
      year={2024},
      eprint={2410.16665},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.16665}, 
}
```