Dileep7729 committed
Commit 51d06cd · Parent(s): 79c6b42 · Update README.md
README.md CHANGED
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- precision
- recall
base_model:
- openai/clip-vit-base-patch32
pipeline_tag: image-classification
library_name: transformers
tags:
- zero-shot-image-classification
---
# Content Safety Model

## Model Summary

This model classifies images as either "safe" or "unsafe," helping to identify potentially dangerous or sensitive content for moderation tasks. For example, it can flag images showing children in risky situations, such as playing with fire, as "unsafe" while marking benign images as "safe."

## Source Model and Dataset

**Base Model:** Fine-tuned from OpenAI's pre-trained CLIP ViT-B/32 (`openai/clip-vit-base-patch32`), a model known for its zero-shot image classification abilities.

**Dataset:** A custom dataset of labeled safe and unsafe images. It includes varied examples of unsafe situations (e.g., fire, sharp objects, precarious activities) to help the model learn these contextual cues.

## How to Use This Model

You can load this model with the Hugging Face Transformers library or query it through the Inference API on Hugging Face.
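
A minimal usage sketch with the Transformers zero-shot image-classification pipeline (matching this model's tags). The checkpoint id below is a placeholder using the base CLIP model, since this repo's id is not stated here; substitute the fine-tuned model's actual id. The solid-gray image is a synthetic stand-in for a real photo.

```python
from PIL import Image
from transformers import pipeline

# Placeholder checkpoint: the base CLIP model stands in for this repo's
# fine-tuned weights; replace it with this model's actual repo id.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# Synthetic stand-in image; pass a real photo (file path, URL, or PIL image).
image = Image.new("RGB", (224, 224), color="gray")

# Scores each candidate label against the image with CLIP's image/text encoders.
result = classifier(image, candidate_labels=["safe", "unsafe"])
print(result)  # list of {"label": ..., "score": ...} dicts, highest score first
```

The pipeline returns one entry per candidate label with softmax-normalized scores, so you can threshold the "unsafe" score to tune moderation strictness.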