Safe-o1-V Model Card 🤖✨

Model Overview 📝

Safe-o1-V is an innovative multi-modal language model that introduces a self-monitoring thinking process to detect and filter unsafe content, achieving more robust safety performance 🚀.


Features and Highlights 🌟

  • Safety First 🔒: Through a self-monitoring mechanism, it detects potential unsafe content in the thinking process in real time, ensuring outputs consistently align with ethical and safety standards.
  • Enhanced Robustness 💡: Compared to traditional models, Safe-o1-V performs more stably in complex scenarios, reducing unexpected "derailments."
  • User-Friendly 😊: Designed to provide users with a trustworthy conversational partner, suitable for a wide range of application scenarios, striking a balance between helpfulness and harmlessness.
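
The model card does not describe how the self-monitoring mechanism is implemented; in Safe-o1-V it is a learned part of the model, not a rule list. Purely as an illustration of the idea (generate a thinking trace, check it, and withhold the answer if the check fails), here is a minimal toy sketch in which every name and the keyword check are hypothetical:

```python
# Toy illustration only: Safe-o1-V's real monitor is learned, not keyword-based.
# All identifiers below are hypothetical.

UNSAFE_MARKERS = {"build a weapon", "bypass safety"}  # placeholder marker list

def monitor_thinking(thinking: str) -> bool:
    """Return True if the thinking trace looks safe (toy heuristic)."""
    lowered = thinking.lower()
    return not any(marker in lowered for marker in UNSAFE_MARKERS)

def respond(thinking: str, answer: str) -> str:
    """Emit the answer only when the self-monitor accepts the thinking trace."""
    if monitor_thinking(thinking):
        return answer
    return "I can't help with that request."

print(respond("The user asks for a greeting.", "Hello!"))
print(respond("Plan how to bypass safety checks.", "Step 1 ..."))
```

The key design point this sketch mirrors is that the check runs on the intermediate thinking, not only on the final answer, so unsafe reasoning can be caught before any output is produced.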

Usage 🚀

You can load Safe-o1-V using the Hugging Face transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PKU-Alignment/Safe-o1-V")
model = AutoModelForCausalLM.from_pretrained("PKU-Alignment/Safe-o1-V")

input_text = "Hello, World!"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # cap generation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Model tree for PKU-Alignment/safe-o1-v-7b

  • Base model: Qwen/Qwen2-VL-7B (fine-tuned to produce this model)
  • Quantizations: 1 model