---
language:
  - en
dataset_info:
  features:
    - name: text
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 473209.3747297414
      num_examples: 4000
    - name: validation
      num_bytes: 16726.983668160137
      num_examples: 150
    - name: test
      num_bytes: 11572.123015873016
      num_examples: 100
  download_size: 340291
  dataset_size: 501508.4814137746
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

## Dataset Description

This dataset is designed for fine-tuning language models, particularly the Qwen2.5-1.5B-Instruct model, for the task of hate speech detection in social media text (tweets). It covers both implicit and explicit forms of hate speech, with the aim of improving the performance of smaller language models on this challenging task.

The dataset is a combination of two existing datasets:

- **Hate Speech Examples**: Examples of implicit hate speech are sourced from the SALT-NLP/ImplicitHate dataset, which contains tweets annotated as implicit hate speech, categorized into types such as grievance, incitement, inferiority, irony, stereotyping, and threatening.
- **Non-Hate Speech Examples**: Examples of non-hate speech are sourced from the TweetEval dataset, specifically its `hate` configuration, which provides tweets labeled as 'non-hate'.

By combining these two sources, we create a dataset suitable for binary classification of tweets into "hate speech" and "not hate speech" categories.
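For orientation, the splits declared in the metadata above can be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder, since the card does not state this dataset's Hub path:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-username/hate_speech_dataset")

print(ds)              # DatasetDict with 'train', 'validation', and 'test' splits
print(ds["train"][0])  # e.g. {'text': '...', 'label': 0}
```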

## Dataset Splits

The dataset is divided into the following splits:

- `train`: Contains 4,000 examples for training the model.
- `validation`: Contains 150 examples for evaluating and tuning the model during training.
- `test`: Contains 100 examples for the final evaluation of the trained model's performance.

These splits are designed to be relatively balanced in terms of class distribution (hate vs. not hate) to ensure fair evaluation.
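The balance claim above can be checked directly by counting labels in each split; a minimal sketch, reusing the `ds` object from the loading example:

```python
from collections import Counter

for split in ("train", "validation", "test"):
    counts = Counter(ds[split]["label"])
    total = sum(counts.values())
    summary = ", ".join(f"label {lbl}: {n} ({n / total:.1%})"
                        for lbl, n in sorted(counts.items()))
    print(f"{split}: {summary}")
```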

## Dataset Fields

Each example in the dataset consists of the following fields:

- `text` (string): The text content of the tweet.
- `label` (int64): The label for the tweet, with the following mapping (used in the fine-tuning sketch below):
  - `0`: Not Hate Speech
  - `1`: Hate Speech
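When fine-tuning, this mapping is typically registered on the model config so that predictions decode to readable names. Below is a sketch using Hugging Face Transformers; attaching a sequence-classification head to the Qwen2.5-1.5B-Instruct checkpoint mentioned above is one possible setup, not something this card prescribes:

```python
from transformers import AutoModelForSequenceClassification

id2label = {0: "Not Hate Speech", 1: "Hate Speech"}
label2id = {label: idx for idx, label in id2label.items()}

# Loads the base checkpoint with a freshly initialized 2-way classification
# head; Transformers will warn that the head weights are untrained.
model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)
```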