Agent-SafetyBench: Evaluating the Safety of LLM Agents Paper • 2412.14470 • Published Dec 2024
Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks Paper • 2407.02855 • Published Jul 3, 2024