---
dataset_info:
  features:
    - name: SUBJECT_ID
      dtype: int64
    - name: TEXT
      dtype: string
    - name: is_biased
      dtype: bool
    - name: biased_words
      dtype: string
  splits:
    - name: train
      num_bytes: 11586577
      num_examples: 40000
  download_size: 6501927
  dataset_size: 11586577
license: afl-3.0
language:
  - en
---

Dataset Description

Who is the target audience for this dataset?

The target audience includes researchers and practitioners in the healthcare and natural language processing domains who are interested in studying biases in clinical texts and in developing models to detect and mitigate such biases.

What do I need to know to use this dataset?

Users should have a basic understanding of clinical texts, biases, and natural language processing.

Data Fields

  • SUBJECT_ID: A unique identifier for the subject.
  • TEXT: The clinical text.
  • is_biased: A boolean indicating whether the text contains biased language.
  • biased_words: The biased words present in the text (if any), stored as a string.
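
These names and dtypes can be verified directly from the loaded dataset. The following minimal sketch (assuming the shainahub/clinical_bias repository id used in the loading examples below) prints the schema of the train split:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")

# Print the column names and dtypes of the train split
# (per the metadata above: int64, string, bool, string)
print(dataset["train"].features)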

Data Splits

The dataset ships as a single train split of 40,000 examples; there are no predefined validation or test splits. Users can create their own splits according to their requirements, as shown below.
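
For example, a held-out test set can be carved out with the library's train_test_split method; this is a minimal sketch, and the 80/20 ratio and the seed are illustrative choices, not part of the dataset:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")

# Split the single train split into train/test (80/20); the seed is arbitrary
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)  # 32000 8000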

Dataset Creation

Curation Rationale

The dataset was created to study biases in clinical texts and provide a resource for developing models to detect and mitigate such biases.

Source Data

The dataset is derived from clinical texts collected from various sources.

Licensing Information

The dataset is released under the Academic Free License v3.0 (afl-3.0), as specified in the dataset metadata above.

Previewing the Dataset

You can use the following code snippet to preview the first example of the dataset using the Hugging Face Datasets library in Python:

from datasets import load_dataset

# Load the dataset and print the first example of the train split
dataset = load_dataset("shainahub/clinical_bias")
example = dataset["train"][0]
print("SUBJECT_ID:", example["SUBJECT_ID"])
print("TEXT:", example["TEXT"])
print("is_biased:", example["is_biased"])
print("biased_words:", example["biased_words"])

The output should look like this:

SUBJECT_ID: 2549
TEXT: CCU NSG TRANSFER SUMMARY UPDATE RESP FAILURE CLINICAL STATUS: Fever Oxygen saturations have been intermittently low on room air with improvement on oxygen High white blood cell count Multifocal pneumonia Gastrointestinal bleeding concerning for stress ulceration Hemodynamically stable on vasopressors, requiring increasing amounts to maintain mean arterial pressure. Heart rate increased to 100s with systolic blood pressure in the 90s. PLAN: 1. Continue current management 2. Initiate prophylaxis for stress ulceration 3. Initiate appropriate isolation for pneumonia
is_biased: False
biased_words: None

Alternatively, you can convert the train split to a pandas DataFrame, which contains all 40,000 rows:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")

# Convert the train split to a pandas DataFrame (40,000 rows)
df = dataset["train"].to_pandas()
df.head()
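
If you want to focus on the biased notes, you can filter the DataFrame on the is_biased flag. This is a minimal sketch that builds on the pandas conversion above; only the column names come from the dataset, the rest is illustrative:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")
df = dataset["train"].to_pandas()

# Keep only rows flagged as biased and inspect the flagged words
biased_df = df[df["is_biased"]]
print(len(biased_df), "biased notes out of", len(df))
print(biased_df[["SUBJECT_ID", "biased_words"]].head())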

Loading the Dataset

You can use the following code snippet to load the dataset using the Hugging Face Datasets library in Python:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")

The dataset consists of four columns:

  • SUBJECT_ID: a unique identifier for each clinical note.
  • TEXT: the text of the clinical note.
  • is_biased: a boolean value indicating whether the note contains biased language or not.
  • biased_words: if the note contains biased language, the words or phrases that are biased.
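
You can also filter biased examples directly with the Datasets API instead of going through pandas. This is a minimal sketch using the library's filter method, with the column names taken from the schema above:

from datasets import load_dataset

dataset = load_dataset("shainahub/clinical_bias")

# Keep only examples flagged as biased, using the built-in filter method
biased = dataset["train"].filter(lambda example: example["is_biased"])
print(biased.num_rows, "biased examples")
print(biased[0]["biased_words"])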