---
dataset_info:
  features:
  - name: sts-id
    dtype: string
  - name: sts-score
    dtype: float64
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: paraphrase
    dtype: int64
  - name: Human Annotation - P1
    dtype: int64
  - name: Human Annotation - P2
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: test
    num_bytes: 58088
    num_examples: 338
  download_size: 37035
  dataset_size: 58088
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: STS-H
---

# STS-Hard Test Set

The STS-Hard dataset is a paraphrase detection test set derived from the STSBenchmark dataset. It was introduced as part of **PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models**.

The test set includes the aggregated paraphrase label as well as the individual labels from two annotators:

- **P1**: The semanticist.
- **P2**: A student annotator.

For more details, refer to the [original paper](https://arxiv.org/abs/2409.12060), presented at COLING 2025.

---

### Citation

If you use this dataset, please cite it using the following BibTeX entry:

```bibtex
@misc{michail2024paraphrasuscomprehensivebenchmark,
      title={PARAPHRASUS : A Comprehensive Benchmark for Evaluating Paraphrase Detection Models},
      author={Andrianos Michail and Simon Clematide and Juri Opitz},
      year={2024},
      eprint={2409.12060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.12060},
}
```
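### Usage Sketch

A minimal sketch of working with the schema above, assuming the test split has been loaded into a list of dicts (e.g. via the Hugging Face `datasets` library and this repository's id). The two example rows below are invented placeholders that only mirror the declared features, not real entries from the dataset:

```python
# Hypothetical rows mirroring the STS-Hard schema (invented for illustration).
rows = [
    {"sts-id": "ex-1", "sts-score": 4.2,
     "sentence1": "A man is playing a guitar.",
     "sentence2": "A person plays guitar.",
     "paraphrase": 1, "Human Annotation - P1": 1, "Human Annotation - P2": 1},
    {"sts-id": "ex-2", "sts-score": 3.1,
     "sentence1": "A dog runs in the park.",
     "sentence2": "A cat sleeps on the couch.",
     "paraphrase": 0, "Human Annotation - P1": 0, "Human Annotation - P2": 1},
]

def annotator_agreement(rows):
    """Fraction of examples where P1 and P2 assigned the same label."""
    same = sum(r["Human Annotation - P1"] == r["Human Annotation - P2"] for r in rows)
    return same / len(rows)

def accuracy(rows, predictions):
    """Paraphrase-detection accuracy against the `paraphrase` label."""
    correct = sum(p == r["paraphrase"] for p, r in zip(predictions, rows))
    return correct / len(rows)

print(annotator_agreement(rows))  # 0.5 on the toy rows
print(accuracy(rows, [1, 0]))     # 1.0 on the toy rows
```

Replacing the placeholder `rows` with the loaded test split yields accuracy on the full 338-example set; comparing against the `Human Annotation - P1` / `Human Annotation - P2` columns lets you inspect how model errors relate to annotator disagreement.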