A pre-trained model fine-tuned with Reinforcement Learning on the DIALOCONAN dataset, using facebook/roberta-hate-speech-dynabench-r4-target as the reward model.
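As a rough illustration of how a hate-speech classifier's output could drive RL fine-tuning, here is a minimal sketch that turns the binary classifier's two logits into a scalar reward. The function name and the exact reward shaping are assumptions for illustration, not the actual training code behind this model.

```python
import math

def reward_from_logits(logits):
    """Hypothetical reward shaping: map the classifier's two logits
    (nothate, hate) to a scalar in [-1, 1], where +1 means clearly
    not hateful. The real training setup may differ."""
    exps = [math.exp(x) for x in logits]
    p_hate = exps[1] / sum(exps)      # softmax probability of the "hate" class
    return 1.0 - 2.0 * p_hate

# Equal logits -> p_hate = 0.5 -> reward 0.0
print(reward_from_logits([0.0, 0.0]))
```

A reward shaped this way penalizes generations the classifier flags as hateful and rewards ones it considers safe, which is the usual pattern when a frozen classifier serves as the reward model in RLHF-style pipelines.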

Toxicity results on the allenai/real-toxicity-prompts dataset, using custom prompts (see 🥞RewardLM for details).

| RedPajama-INCITE-Chat-3B | Toxicity Level |
| ------------------------ | -------------- |
| Pre-Trained              | 0.217          |
| Fine-Tuned               | 0.129          |
| RL                       | 0.160          |
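A quick sanity check on the numbers above: the relative toxicity reduction of each variant versus the pre-trained baseline can be computed directly from the reported scores (values copied from the table).

```python
# Toxicity scores from the results table above
pretrained, fine_tuned, rl = 0.217, 0.129, 0.160

def reduction(base, new):
    """Relative toxicity reduction compared to a baseline score."""
    return (base - new) / base

print(f"Fine-Tuned: {reduction(pretrained, fine_tuned):.1%} lower toxicity")
print(f"RL:         {reduction(pretrained, rl):.1%} lower toxicity")
```

That is roughly a 40.6% reduction for the fine-tuned variant and 26.3% for the RL variant relative to the pre-trained model.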
Model: DanielSc4/RedPajama-INCITE-Chat-3B-v1-FT-LoRA-8bit-test1