Papers
arxiv:2605.05662

XL-SafetyBench: A Country-Grounded Cross-Cultural Benchmark for LLM Safety and Cultural Sensitivity

Published on May 7 · Submitted by DasolChoi on May 7
Abstract

XL-SafetyBench presents a multilingual safety benchmark with 5,500 test cases across 10 country-language pairs to evaluate both universal and culturally specific harms in language models.

AI-generated summary

Current LLM safety benchmarks are predominantly English-centric and often rely on translation, failing to capture country-specific harms. Moreover, they rarely evaluate a model's ability to detect culturally embedded sensitivities as distinct from universal harms. We introduce XL-SafetyBench, a suite of 5,500 test cases across 10 country-language pairs, comprising a Jailbreak Benchmark of country-grounded adversarial prompts and a Cultural Benchmark where local sensitivities are embedded within innocuous requests. Each item is constructed via a multi-stage pipeline that combines LLM-assisted discovery, automated validation gates, and dual independent native-speaker annotators per country. To distinguish principled refusal from comprehension failure, we evaluate Attack Success Rate (ASR) alongside two complementary metrics we introduce: Neutral-Safe Rate (NSR) and Cultural Sensitivity Rate (CSR). Evaluating 10 frontier and 27 local LLMs reveals two key findings. First, jailbreak robustness and cultural awareness are decoupled among frontier models, so a composite safety score obscures per-axis variation. Second, local models exhibit a near-linear ASR-NSR trade-off (r = -0.81), indicating that their apparent safety reflects generation failure rather than genuine alignment. XL-SafetyBench enables more nuanced, cross-cultural safety evaluation in the multilingual era.
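A minimal sketch of how the per-model metrics and the reported ASR-NSR correlation could be computed. The response labels ("harmful", "refusal", "neutral_safe") and function names are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical per-model metrics, assuming each response is labeled
# "harmful", "refusal", or "neutral_safe" by a judge.

def attack_success_rate(labels):
    """ASR: fraction of adversarial prompts eliciting harmful output."""
    return sum(l == "harmful" for l in labels) / len(labels)

def neutral_safe_rate(labels):
    """NSR: fraction of responses that are safe AND coherent, separating
    principled refusal from mere generation failure."""
    return sum(l == "neutral_safe" for l in labels) / len(labels)

def pearson_r(xs, ys):
    """Plain Pearson correlation, e.g. across per-model (ASR, NSR) pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A strongly negative `pearson_r` over per-model ASR and NSR values would correspond to the trade-off the paper reports for local models (r = -0.81).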

Community

Paper submitter

XL-SafetyBench: 5,500 test cases across 10 country-language pairs, annotated by native speakers per country. We find that frontier models' jailbreak robustness and cultural awareness are decoupled, and that local models' apparent safety often reflects generation failure rather than alignment.


Get this paper in your agent:

hf papers read 2605.05662
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 0


Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 1