---
license: mit
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 1M<n<10M
annotations_creators:
  - no-annotation
multilinguality:
  - monolingual
pretty_name: UTCD
---

# Universal Text Classification Dataset (UTCD)

## Load dataset

```python
from datasets import load_dataset

dataset = load_dataset('claritylab/utcd', name='in-domain')
```
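Once loaded, the result is a standard `datasets.DatasetDict`, so the usual inspection calls apply. Here is a minimal sketch; the exact column names are not specified above, so check `dataset['train'].features` before relying on them:

```python
from datasets import load_dataset

# Load the in-domain subset of UTCD
dataset = load_dataset('claritylab/utcd', name='in-domain')

# List the available splits and their sizes
print(dataset)

# Inspect the schema before relying on any column names
print(dataset['train'].features)

# Peek at a single training example
print(dataset['train'][0])
```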

## Description

UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification, spanning 3 aspect categories: Sentiment, Intent/Dialogue, and Topic classification. UTCD focuses on the task of zero-shot text classification where the candidate labels are descriptive of the text being classified. UTCD consists of ~6M/800K train/test examples.

UTCD was introduced in the Findings of ACL'23 paper *Label Agnostic Pre-training for Zero-shot Text Classification* by Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner, Lingjia Tang and Jason Mars. Project Homepage.

## UTCD Datasets & Principles

In order to make NLP models more broadly useful, zero-shot techniques need to be capable of label, domain & aspect transfer. As such, in the construction of UTCD we enforce the following principles:

- **Textual labels**: In UTCD, we mandate the use of textual labels. While numerical label values are often used in classification tasks, descriptive textual labels such as those present across UTCD enable the development of techniques that can leverage the class name, which is instrumental in providing zero-shot support. As such, the labels of each compiled dataset are standardized so that they describe the text in natural language (see the sketch after this list).
- **Diverse domains and sequence lengths**: In addition to broad coverage of aspects, UTCD compiles diverse data across several domains such as Banking, Finance, and Legal, each comprising sequences of varied length (long and short). The datasets are listed above.
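As an illustration of why descriptive textual labels matter, the sketch below runs a generic zero-shot classifier from the `transformers` library over a short intent-style utterance. The model (`facebook/bart-large-mnli`), the input sentence, and the candidate labels are assumptions for the example; this is not one of the models released with the paper.

```python
from transformers import pipeline

# A generic NLI-based zero-shot classifier; not one of the UTCD paper's models
classifier = pipeline('zero-shot-classification', model='facebook/bart-large-mnli')

# Because UTCD labels are natural-language descriptions, they can be passed
# directly as candidate labels to any label-aware zero-shot method
result = classifier(
    'I would like to transfer money to my savings account.',
    candidate_labels=['transfer', 'check balance', 'card declined'],
)
print(result['labels'][0])  # highest-scoring label
```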