---
license: cc-by-nc-sa-4.0
---

# S2S-Arena Dataset

This repository hosts the **S2S-Arena** dataset. It covers four practical domains with 21 tasks, includes 154 instructions of varying difficulty levels, and features a mix of samples from TTS synthesis, human recordings, and existing audio datasets.

## Introduction

### GitHub Repository

For more information and access to the dataset, please visit the GitHub repository: [S2S-Arena on GitHub](https://github.com/FreedomIntelligence/S2S-Arena)

### Related Publication

For detailed insights into the dataset's construction, methodology, and applications, please refer to the accompanying academic publication: [S2S-Arena on arXiv](https://arxiv.org/abs/2503.05085)

## Data Description

The dataset includes labeled audio files, textual emotion annotations, language translations, and task-specific metadata, supporting fine-grained analysis and application in machine learning. Each entry follows this format:

```json
{
    "id": "emotion_audio_0",
    "input_path": "./emotion/audio_0.wav",
    "text": "[emotion: happy]Kids are talking by the door",
    "task": "Emotion recognition and expression",
    "task_description": "Can the model recognize emotions and provide appropriate responses based on different emotions?",
    "text_cn": "孩子们在门旁说话",
    "language": "English",
    "category": "Social Companionship",
    "level": "L3"
}
```

1. `id`: Unique identifier for each sample
2. `input_path`: Path to the audio file
3. `text`: English text with emotion annotation
4. `task`: Primary task associated with the data
5. `task_description`: Task description for model interpretability
6. `text_cn`: Chinese translation of the English text
7. `language`: Language of the input
8. `category`: Interaction context category
9. `level`: Difficulty or complexity level of the sample

Some samples also include a `noise` attribute, indicating that noise has been added to that sample and specifying the type of noise.
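As a minimal sketch of how an entry might be consumed, the snippet below separates the `[emotion: ...]` annotation in the `text` field from the utterance itself. The `parse_emotion` helper is illustrative, not part of the dataset's tooling, and assumes the bracketed prefix always appears at the start of `text` when present.

```python
# Example entry, copied from the schema above.
sample = {
    "id": "emotion_audio_0",
    "input_path": "./emotion/audio_0.wav",
    "text": "[emotion: happy]Kids are talking by the door",
    "task": "Emotion recognition and expression",
    "language": "English",
    "category": "Social Companionship",
    "level": "L3",
}

def parse_emotion(text):
    """Split an '[emotion: ...]' prefix from the utterance text, if present.

    Returns (emotion, utterance); emotion is None when no annotation exists.
    Hypothetical helper for illustration only.
    """
    if text.startswith("[emotion:"):
        tag, _, rest = text.partition("]")
        return tag[len("[emotion:"):].strip(), rest
    return None, text

emotion, utterance = parse_emotion(sample["text"])
print(emotion)    # happy
print(utterance)  # Kids are talking by the door
```

Keeping the paralinguistic annotation separate from the spoken content makes it straightforward to feed the utterance to a speech model while using the emotion label for evaluation.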
## BibTeX

```bibtex
@misc{jiang2025s2sarenaevaluatingspeech2speechprotocols,
      title={S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information},
      author={Feng Jiang and Zhiyu Lin and Fan Bu and Yuhao Du and Benyou Wang and Haizhou Li},
      year={2025},
      eprint={2503.05085},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.05085},
}
```