---
license: apache-2.0
language:
  - en
metrics:
  - accuracy
  - precision
  - recall
base_model:
  - openai/clip-vit-base-patch32
pipeline_tag: image-classification
library_name: transformers
tags:
  - zero-shot-image-classification
---

# Content Safety Model

## Model Summary

This model classifies images as either "safe" or "unsafe." It identifies potentially dangerous or sensitive content, making it useful for content moderation tasks. For example, it can flag images showing children in risky situations, such as playing with fire, as "unsafe," while marking benign images as "safe."
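
Below is a minimal inference sketch using the standard CLIP classes from `transformers`. The model ID is a placeholder for this repository's Hub path, and `example.jpg` stands in for any local image; the two text labels reflect the classes described above.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder: replace with this repository's model ID on the Hugging Face Hub.
model_id = "<this-repo-id>"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")   # any local image
labels = ["safe", "unsafe"]         # the two classes this model targets

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image: similarity of the image to each text label; softmax gives class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

The label with the highest probability can be taken as the prediction ("safe" or "unsafe") for the input image.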

## Source Model and Dataset

- **Base Model:** This model is fine-tuned from OpenAI's pre-trained CLIP ViT-B/32, a model known for its zero-shot image classification abilities.
- **Dataset:** The model was trained on a custom dataset of labeled safe and unsafe images. The dataset includes varied examples of unsafe situations (e.g., fire, sharp objects, precarious activities) to help the model learn these contextual cues.
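
The exact training recipe is not published here; the following is only a rough sketch of how CLIP ViT-B/32 could be fine-tuned for a two-class safe/unsafe task. The function name, hyperparameters, and the `train_dataset` argument (pairs of PIL images and integer labels) are illustrative assumptions, not the actual setup used for this model.

```python
import torch
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

LABELS = ["safe", "unsafe"]  # label 0 = safe, label 1 = unsafe

def finetune(train_dataset, epochs=3, lr=1e-5, batch_size=16):
    """Illustrative fine-tuning loop over (PIL image, int label) pairs."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def collate(batch):
        images, targets = zip(*batch)
        inputs = processor(text=LABELS, images=list(images),
                           return_tensors="pt", padding=True)
        return inputs, torch.tensor(targets)

    loader = DataLoader(train_dataset, batch_size=batch_size,
                        shuffle=True, collate_fn=collate)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            # logits_per_image has shape (batch, 2): similarity of each image
            # to the "safe" / "unsafe" text prompts.
            logits = model(**inputs).logits_per_image
            loss = loss_fn(logits, targets)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```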

## Sample model predictions

*(A table of example input images and the model's corresponding prediction outputs is embedded on the model page; the images are not reproduced here.)*