# MIMIR
These datasets serve as a benchmark for evaluating membership inference attack (MIA) methods, specifically their ability to detect pretraining data of large language models.
## Applicability
The datasets can be applied to any model trained on The Pile, including (but not limited to) the models below; a minimal loss-based scoring sketch follows the list.
- GPTNeo
- Pythia
- OPT
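For illustration, here is a minimal sketch of the simplest loss-based membership score against a Pile-trained checkpoint. The model choice (`EleutherAI/pythia-160m`) and the scoring function are illustrative assumptions, not part of this dataset; MIMIR only supplies the member and nonmember texts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative Pile-trained checkpoint; any of the model families
# listed above could be substituted here.
model_name = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mia_score(text: str) -> float:
    """Negative log-likelihood of `text`; lower values hint at membership."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()
```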
## Loading the datasets
To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Al-not-AI/mimir", "pile_cc", split="ngram_7_0.2")
```
- Available Names: `arxiv`, `dm_mathematics`, `github`, `hackernews`, `pile_cc`, `pubmed_central`, `wikipedia_(en)`, `full_pile`, `c4`, `temporal_arxiv`, `temporal_wiki`
- Available Splits: `ngram_7_0.2`, `ngram_13_0.2`, `ngram_13_0.8` (for most sources); `none` (for the other sources)
- Available Features: `member` (str), `nonmember` (str), `member_neighbors` (List[str]), `nonmember_neighbors` (List[str])
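Given these features, a typical benchmark run scores every member/nonmember pair and reports ROC AUC. The sketch below assumes the hypothetical `mia_score` function from the earlier snippet and the feature schema listed above (note that this fork renames the features; see the note below).

```python
from datasets import load_dataset
from sklearn.metrics import roc_auc_score

ds = load_dataset("Al-not-AI/mimir", "pile_cc", split="ngram_7_0.2")
labels, scores = [], []
for ex in ds:
    labels += [1, 0]  # 1 = member, 0 = nonmember
    # Negate the loss so that a higher score means "more likely a member".
    scores += [-mia_score(ex["member"]), -mia_score(ex["nonmember"])]
print("pile_cc AUC:", roc_auc_score(labels, scores))
```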
This dataset is forked from the repository linked with this paper:
```bibtex
@inproceedings{duan2024membership,
  title={Do Membership Inference Attacks Work on Large Language Models?},
  author={Michael Duan and Anshuman Suri and Niloofar Mireshghallah and Sewon Min and Weijia Shi and Luke Zettlemoyer and Yulia Tsvetkov and Yejin Choi and David Evans and Hannaneh Hajishirzi},
  year={2024},
  booktitle={Conference on Language Modeling (COLM)},
}
```
The only change relative to the original is in the processing script: the features are now `input` and `label` (the label indicates whether the input data point is a member or a nonmember).
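Under this fork's schema, a loaded split can be separated back into member and nonmember texts. This is a minimal sketch assuming `label` is `1` for members and `0` for nonmembers; check the processing script for the exact encoding.

```python
from datasets import load_dataset

ds = load_dataset("Al-not-AI/mimir", "pile_cc", split="ngram_7_0.2")
# Assumed encoding: label == 1 marks a member, label == 0 a nonmember.
members = [ex["input"] for ex in ds if ex["label"] == 1]
nonmembers = [ex["input"] for ex in ds if ex["label"] == 0]
print(len(members), "members,", len(nonmembers), "nonmembers")
```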