---
dataset_info:
  - config_name: default
    features:
      - name: utterance
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: train
        num_bytes: 406785
        num_examples: 8954
      - name: test
        num_bytes: 49545
        num_examples: 1076
    download_size: 199496
    dataset_size: 456330
  - config_name: intents
    features:
      - name: id
        dtype: int64
      - name: name
        dtype: string
      - name: tags
        sequence: 'null'
      - name: regexp_full_match
        sequence: 'null'
      - name: regexp_partial_match
        sequence: 'null'
      - name: description
        dtype: 'null'
    splits:
      - name: intents
        num_bytes: 2422
        num_examples: 64
    download_size: 4037
    dataset_size: 2422
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
  - config_name: intents
    data_files:
      - split: intents
        path: intents/intents-*
task_categories:
  - text-classification
language:
  - en
---

# hwu64

This is a text classification dataset intended for machine learning research and experimentation.

This dataset was obtained by formatting another publicly available dataset to be compatible with our AutoIntent Library.

## Usage

It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

hwu64 = Dataset.from_datasets("AutoIntent/hwu64")
```

## Source

This dataset is taken from the original work's GitHub repository jianguoz/Few-Shot-Intent-Detection and was formatted with our AutoIntent Library:

```python
# define utils
import requests

from autointent import Dataset


def load_text_from_url(github_file: str) -> str:
    response = requests.get(github_file, timeout=30)
    response.raise_for_status()
    return response.text


def convert_hwu64(hwu_utterances: list[str], hwu_labels: list[str], split: str) -> dict:
    assert len(hwu_utterances) == len(hwu_labels)

    # map each intent name to a contiguous integer id, in sorted order
    intent_names = sorted(set(hwu_labels))
    name_to_id = {name: i for i, name in enumerate(intent_names)}
    n_classes = len(intent_names)

    intents = [{"id": i, "name": name} for i, name in enumerate(intent_names)]

    # group utterance records by intent so the output is ordered by label
    classwise_utterance_records = [[] for _ in range(n_classes)]
    for txt, name in zip(hwu_utterances, hwu_labels, strict=True):
        intent_id = name_to_id[name]
        classwise_utterance_records[intent_id].append({"utterance": txt, "label": intent_id})

    utterances = [rec for lst in classwise_utterance_records for rec in lst]
    return {"intents": intents, split: utterances}


base_url = "https://raw.githubusercontent.com/jianguoz/Few-Shot-Intent-Detection/refs/heads/main/Datasets/HWU64"

# load and convert the train split
labels = load_text_from_url(f"{base_url}/train/label").split("\n")[:-1]
utterances = load_text_from_url(f"{base_url}/train/seq.in").split("\n")[:-1]
hwu64_train = convert_hwu64(utterances, labels, "train")

# load and convert the test split
labels = load_text_from_url(f"{base_url}/test/label").split("\n")[:-1]
utterances = load_text_from_url(f"{base_url}/test/seq.in").split("\n")[:-1]
hwu64_test = convert_hwu64(utterances, labels, "test")

# merge both splits into a single dataset
hwu64_train["test"] = hwu64_test["test"]
dataset = Dataset.from_dict(hwu64_train)
```