
Dataset Usage

Description

This dataset generates its data by executing the pipeline available at https://github.com/healthylaife/MIMIC-IV-Data-Pipeline.

Function Signature

load_dataset('thbndi/Mimic4Dataset', task, mimic_path=mimic_data, config_path=config_file, encoding=encod, generate_cohort=gen_cohort, val_size=size, test_size=size_test, cache_dir=cache)

Arguments

  1. task (string):

    • Description: Specifies the task you want to perform with the dataset.
    • Default: "Mortality"
    • Note: Possible values: 'Phenotype', 'Length of Stay', 'Readmission', 'Mortality'.
  2. mimic_path (string):

    • Description: Complete path to the raw MIMIC-IV data on the user's machine.
    • Note: Provide the path where the MIMIC-IV data is stored; it should end with the MIMIC version (e.g., mimiciv/2.2). Supported versions: 2.2 and 1.0, as provided by the authors of the pipeline.
  3. config_path (string), optional:

    • Description: Path to the configuration file for the cohort generation choices (more information in '/config/readme.md').
    • Default: The configuration file provided in the 'config' folder.
  4. encoding (string), optional:

    • Description: Data encoding option for the features.
    • Options: "concat", "aggreg", "tensor", "raw", "text"
    • Default: "concat"
    • Note: Choose one of the following options for data encoding (see the sketch after this list):
      • "concat": Concatenates the one-hot encoded diagnoses, demographic data vector, and dynamic features at each measured time instant, resulting in a high-dimensional feature vector.
      • "aggreg": Concatenates the one-hot encoded diagnoses, demographic data vector, and dynamic features, where each item_id is replaced by the average over the measured time instants, resulting in a reduced-dimensional feature vector.
      • "tensor": Represents each feature as a 2D array. There are separate arrays for labels, demographic data ('DEMO'), diagnoses ('COND'), medications ('MEDS'), procedures ('PROC'), chart/lab events ('CHART/LAB'), and output events ('OUT'). Dynamic features are represented as 2D arrays where each row contains the values at a specific time instant.
      • "raw": Provides the cohort from the pipeline without any encoding, for custom data processing.
      • "text": Represents diagnoses as text suitable for BERT or other text-based models.
      • For 'concat' and 'aggreg', the composition of the vector is given in the './data/dict/"task"/features_aggreg.csv' or './data/dict/"task"/features_concat.csv' file, and in the 'features_names' column of the dataset.
  5. generate_cohort (bool), optional:

    • Description: Determines whether to generate a new cohort from the MIMIC-IV data.
    • Default: True
    • Note: Set it to True to generate a cohort, or False to skip cohort generation.
  6. val_size, test_size (float), optional:

    • Description: Proportions of the dataset used for validation and testing.
    • Default: 0.1 for the validation size and 0.2 for the test size.
    • Note: Both can be set to 0.
  7. cache_dir (string), optional:

    • Description: Directory where the processed dataset will be cached.
    • Note: Providing a separate cache directory for each encoding type can avoid errors when changing the encoding type (illustrated in the sketch below).
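
As a minimal sketch of how the encoding and cache_dir arguments interact (not an official recipe from the pipeline authors), the snippet below loads the same task with two encodings, keeping one cache directory per encoding as recommended above. The 'train' split name and the 'features_names' column are assumptions to verify against your generated cohort:

from datasets import load_dataset

# Load the Mortality task with two different encodings, each with its own
# cache directory so the cached files never collide (paths are placeholders).
dataset_concat = load_dataset('thbndi/Mimic4Dataset', task="Mortality",
                              mimic_path="/path/to/mimiciv/2.2",
                              encoding="concat",
                              cache_dir="/path/to/cache_concat")
dataset_aggreg = load_dataset('thbndi/Mimic4Dataset', task="Mortality",
                              mimic_path="/path/to/mimiciv/2.2",
                              encoding="aggreg",
                              cache_dir="/path/to/cache_aggreg")

# Assumption: splits are exposed under the usual names; for 'concat'/'aggreg'
# the feature-vector composition is listed in the 'features_names' column.
print(dataset_concat["train"].column_names)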

Example Usage

from datasets import load_dataset

# Example 1: Load dataset with default settings
dataset = load_dataset('thbndi/Mimic4Dataset', task="Mortality", mimic_path="/path/to/mimic_data")

# Example 2: Load dataset with custom settings
dataset = load_dataset('thbndi/Mimic4Dataset', task="Phenotype", mimic_path="/path/to/mimic_data", config_path="/path/to/config_file", encoding="aggreg", generate_cohort=False, val_size=0.2, cache_dir="/path/to/cache_dir")

Please note that the provided examples are for illustrative purposes only, and you should adjust the paths and settings based on your actual dataset and specific use case.
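
Since the "text" encoding produces diagnosis text intended for BERT-like models, a minimal downstream sketch might look as follows. The 'text' column name and the choice of bert-base-uncased are assumptions, so check the generated dataset's features before relying on them:

from datasets import load_dataset
from transformers import AutoTokenizer

# Load the cohort with the "text" encoding (paths are placeholders).
dataset = load_dataset('thbndi/Mimic4Dataset', task="Mortality",
                       mimic_path="/path/to/mimic_data",
                       encoding="text",
                       cache_dir="/path/to/cache_text")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Assumption: the textual representation is exposed in a 'text' column.
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)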

Citations

If you use this dataset, we would appreciate a citation to the following paper:

@inproceedings{lovon-melgarejo-etal-2024-revisiting,
    title = "Revisiting the {MIMIC}-{IV} Benchmark: Experiments Using Language Models for Electronic Health Records",
    author = "Lovon-Melgarejo, Jesus  and
      Ben-Haddi, Thouria  and
      Di Scala, Jules  and
      Moreno, Jose G.  and
      Tamine, Lynda",
    editor = "Demner-Fushman, Dina  and
      Ananiadou, Sophia  and
      Thompson, Paul  and
      Ondov, Brian",
    booktitle = "Proceedings of the First Workshop on Patient-Oriented Language Processing (CL4Health) @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.cl4health-1.23/",
    pages = "189--196",
    abstract = "The lack of standardized evaluation benchmarks in the medical domain for text inputs can be a barrier to widely adopting and leveraging the potential of natural language models for health-related downstream tasks. This paper revisited an openly available MIMIC-IV benchmark for electronic health records (EHRs) to address this issue. First, we integrate the MIMIC-IV data within the Hugging Face datasets library to allow an easy share and use of this collection. Second, we investigate the application of templates to convert EHR tabular data to text. Experiments using fine-tuned and zero-shot LLMs on the mortality of patients task show that fine-tuned text-based models are competitive against robust tabular classifiers. In contrast, zero-shot LLMs struggle to leverage EHR representations. This study underlines the potential of text-based approaches in the medical field and highlights areas for further improvement."
}