---
configs:
- config_name: default
extra_gated_prompt: >-
  By filling out the form below I understand that LlavaGuard is a derivative
  model based on webscraped images and the SMID dataset that use individual
  licenses and their respective terms and conditions apply. I understand that
  all content uses are subject to the terms of use. I understand that reusing
  the content in LlavaGuard might not be legal in all countries/regions and for
  all use cases. I understand that LlavaGuard is mainly targeted toward
  researchers and is meant to be used in research. LlavaGuard authors reserve
  the right to revoke my access to this data. They reserve the right to modify
  this data at any time in accordance with take-down requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  I have explicitly checked that downloading LlavaGuard is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox
datasets:
- AIML-TUDA/LlavaGuard
pipeline_tag: image-text-to-text
base_model:
- lmms-lab/llava-onevision-qwen2-0.5b-ov
---

## Model Summary

LlavaGuard-v1.2-0.5B-OV is trained on [LlavaGuard-DS](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard) and based on the llava-onevision-qwen2-0.5b-ov model with a context window of 32K tokens. Our smallest model allows for more efficient inference while maintaining strong performance.

- Links to Model Versions: [sglang](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard-v1.2-0.5B-OV), [transformers](https://huggingface.co/datasets/AIML-TUDA/LlavaGuard-v1.2-0.5B-OV-HF)
- Repository: [ml-research/LlavaGuard](https://github.com/ml-research/LlavaGuard)
- Project Website: [LlavaGuard](https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html)
- Paper: [LlavaGuard-Arxiv](https://arxiv.org/abs/2406.05113)

## Model Compatibility

- Inference: HF Transformers ✅, SGLang ❌, LLaVA [repo](https://github.com/LLaVA-VL/LLaVA-NeXT) ❌
- Model Tuning: ❌

## Overview

We here provide the transformers-converted weights for LlavaGuard v1.2 0.5B.
It builds upon LLaVA-OneVision 0.5B and has achieved the best overall performance so far, with improved reasoning capabilities within the rationales.

#### Usage

For model inference, you can run the code provided below, e.g. via `python my_script.py`:

```Python
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
from PIL import Image
import requests

model = LlavaOnevisionForConditionalGeneration.from_pretrained('AIML-TUDA/LlavaGuard-v1.2-0.5B-OV-hf')
processor = AutoProcessor.from_pretrained('AIML-TUDA/LlavaGuard-v1.2-0.5B-OV-hf')

# `policy` is the safety policy prompt, e.g. the default one given in the
# Safety Taxonomy section below.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": policy},
        ],
    },
]

text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=text_prompt, images=image, return_tensors="pt")
model.to('cuda:0')
inputs = {k: v.to('cuda:0') for k, v in inputs.items()}

# Generate
hyperparameters = {
    "max_new_tokens": 200,
    "do_sample": True,
    "temperature": 0.2,
    "top_p": 0.95,
    "top_k": 50,
    "num_beams": 2,
    "use_cache": True,
}
output = model.generate(**inputs, **hyperparameters)
print(processor.decode(output[0], skip_special_tokens=True))
```
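The decoded string returned above contains the full chat transcript followed by the model's assessment. A minimal sketch for pulling out the JSON assessment, assuming the model follows the JSON template from the Safety Taxonomy section (`parse_assessment` and the `sample` string are hypothetical, for illustration only):

```Python
import json
import re

def parse_assessment(raw_output: str) -> dict:
    """Extract the JSON safety assessment from the decoded model output.

    The decoded string contains the prompt transcript followed by the
    assessment, so we take the last {...} span and parse it. A trailing
    comma before the closing brace (as in the template) is stripped first.
    """
    start = raw_output.rfind("{")
    end = raw_output.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no JSON object found in model output")
    blob = re.sub(r",\s*}", "}", raw_output[start:end + 1])
    return json.loads(blob)

# Hypothetical decoded output for the stop-sign image:
sample = 'assistant\n{\n  "rating": "Safe",\n  "category": "NA: None applying",\n  "rationale": "A stop sign on a street."\n}'
assessment = parse_assessment(sample)
```

Parsing the last brace-delimited span keeps the sketch robust to the transcript that `processor.decode` prepends before the assessment.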

## Safety Taxonomy

Our default policy prompt looks like this:
```Python
prompt = """
Provide a safety assessment for the content provided by the user. Determine compliance or violation against our safety policy by reviewing the following policy categories:
O1: Hate, Humiliation, Harassment:
Should not:
- Contain content that expresses hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Contain content that encourages discrimination or harassment against people based on those characteristics.
Can:
- Contain educational content that provides information on discrimination, humiliation, and harassment.
- Discuss historical events involving hate and harassment.

O2: Violence, Harm, or Cruelty:
Should not:
- Contain content of people planning or engaging in violence.
- Contain content that encourages people to engage in violence or cruelty, including harm to others.
Can:
- Contain educational content that provides information on violence, harm, and cruelty.
- Discuss historical events involving violence and cruelty.

O3: Sexual Content:
Should not:
- Contain sexually explicit (i.e., erotic) content.
Can:
- Contain content that is related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
- Contain content of people in a loving and affectionate manner if it is not sexually explicit content.

O4: Nudity Content:
Should not:
- Contain visible genitalia (e.g., exposed breast, nipple, vagina, vulva, or penis) or explicit nudity content.
Can:
- Contain artistic nudity if the content does not include visible genitalia.
- Contain educational content and general information on the human body.
- Provide guidance on topics related to sexual health.

O5: Criminal Planning:
Should not:
- Contain content of people planning or engaging in criminal activities like theft, kidnapping, or financial crimes.
- Contain content that encourages people to engage in criminal activities or condone criminal behavior.
Can:
- Contain content that helps people understand how crimes are committed, provided that the content is not so detailed that it could be used to provide a roadmap for future crimes.
- Contain content that educates on crime prevention and understanding criminal behavior without providing guidance for criminal activities.

O6: Weapons or Substance Abuse:
Should not:
- Contain content of illegal weapons with the intent to promote violence or harm.
- Contain content of trafficking, consumption, or use of illegal drugs.
- Promote the use of tobacco, alcohol, and cannabis.
- Contain content about the creation or production of illegal substances.
Can:
- Contain weapons used in a legal and safe context without any indication of violence or intent to harm.
- Discussions on firearm ownership and educational content on regulated substances without promoting illegal activities.
- Contain content that provides factual information about legal drugs, including their effects and chemical composition, without promoting their use.
- Contain content that offers educational insights into the manufacturing processes of substances, ensuring that such information is general and does not include detailed instructions that could facilitate their production.

O7: Self-Harm:
Should not:
- Contain content that could encourage people to harm themselves.
- Contain content that romanticizes or condones self-harm or suicide.
Can:
- Contain educational content that informs about the prevalence, causes, and effects of self-harm and suicide, such as statistical data and discussions on mental health challenges.
- Contain resources and information that guide individuals toward professional help and support services for mental health, self-harm, and suicide prevention.

O8: Animal Cruelty:
Should not:
- Contain content that could encourage people to harm animals.
- Contain content that romanticizes or condones harming animals.
- Contain explicit animal cruelty.
Can:
- Contain content that provides general information regarding animal cruelty.

O9: Disasters or Emergencies:
Should not:
- Contain content about individuals who are deceased or have suffered harm as a result of disasters or emergencies.
- Contain content about suffering, cruelty, or exploitation of individuals during such events.
Can:
- Contain respectful news coverage and reports on emergencies, prioritizing informational value over sensationalism.
- Contain educational content that provides factual information on historical and current disasters or emergencies.

Assessment Steps:
1. Select a safety "rating". This should be "Unsafe" if the content violates the safety policy; otherwise, it should be "Safe".
2. Assign a policy "category". If none of the policy categories are applicable, select "NA: None applying".
3. Provide a "rationale". Describe the user content and justify why it is considered safe or unsafe, referring to the specific policy category and its associated guidelines to illustrate any violations or compliance.
To provide your assessment use the following json template:
{
    "rating": str<"Unsafe"|"Safe">,
    "category": str<"O1: Hate, Humiliation, Harassment"|"O2: Violence, Harm, or Cruelty"|"O3: Sexual Content"|"O4: Nudity Content"|"O5: Criminal Planning"|"O6: Weapons or Substance Abuse"|"O7: Self-Harm"|"O8: Animal Cruelty"|"O9: Disasters or Emergencies"|"NA: None applying">,
    "rationale": str,
}
"""
```
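An assessment following this template can be sanity-checked against the taxonomy, for instance when filtering a dataset. A minimal sketch (the `validate_assessment` helper is hypothetical; the allowed values mirror the JSON template in the prompt above):

```Python
# Allowed values taken from the JSON template in the policy prompt above.
RATINGS = {"Unsafe", "Safe"}
CATEGORIES = {
    "O1: Hate, Humiliation, Harassment",
    "O2: Violence, Harm, or Cruelty",
    "O3: Sexual Content",
    "O4: Nudity Content",
    "O5: Criminal Planning",
    "O6: Weapons or Substance Abuse",
    "O7: Self-Harm",
    "O8: Animal Cruelty",
    "O9: Disasters or Emergencies",
    "NA: None applying",
}

def validate_assessment(assessment: dict) -> bool:
    """Return True if the assessment matches the policy's JSON template."""
    return (
        assessment.get("rating") in RATINGS
        and assessment.get("category") in CATEGORIES
        and isinstance(assessment.get("rationale"), str)
    )

ok = validate_assessment({
    "rating": "Unsafe",
    "category": "O6: Weapons or Substance Abuse",
    "rationale": "Depicts illegal drug use.",
})
```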

## Citation

Please cite and share our work if you use it or find it useful. The first three authors contributed equally.
```bibtex
@incollection{helff2024llavaguard,
  author    = {Lukas Helff and Felix Friedrich and Manuel Brack and Patrick Schramowski and Kristian Kersting},
  title     = {LLAVAGUARD: VLM-based Safeguard for Vision Dataset Curation and Safety Assessment},
  booktitle = {Working Notes of the CVPR 2024 Workshop on Responsible Generative AI (ReGenAI)},
  year      = {2024}
}
```