---
license: mit
language:
- en
tags:
- membership inference
- privacy
pretty_name: MIMIR
size_categories:
- 1K<n<10K
---

# MIMIR

These datasets form a benchmark for evaluating membership inference attack (MIA) methods, specifically their ability to detect pretraining data of large language models.

## 📌 Applicability

The datasets can be applied to any model trained on The Pile, including (but not limited to):
- GPTNeo
- Pythia
- OPT

## Loading the datasets
To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("Al-not-AI/mimir", "pile_cc", split="ngram_7_0.2")
```

- Available Names: `arxiv`, `dm_mathematics`, `github`, `hackernews`, `pile_cc`, `pubmed_central`, `wikipedia_(en)`, `full_pile`, `c4`, `temporal_arxiv`, `temporal_wiki`
- Available Splits: `ngram_7_0.2`, `ngram_13_0.2`, `ngram_13_0.8` (for most sources), `none` (for the remaining sources)
- Available Features: `member` (str), `nonmember` (str), `member_neighbors` (List[str]), `nonmember_neighbors` (List[str])
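As an illustrative sketch (not part of the dataset card), a row with this feature layout can be flattened into labeled examples for an MIA evaluation; the row contents below are made up for illustration:

```python
# Hypothetical example: flatten one MIMIR-style row, which pairs a member
# and a nonmember text, into (text, label) examples for an MIA evaluation.
# Label 1 = member (seen in pretraining), 0 = nonmember.

def to_labeled_examples(row):
    """Turn a member/nonmember row into a list of (text, label) pairs."""
    return [(row["member"], 1), (row["nonmember"], 0)]

# Made-up row mimicking the feature layout listed above.
row = {
    "member": "text drawn from The Pile",
    "nonmember": "text the model never saw",
    "member_neighbors": ["perturbed member text"],
    "nonmember_neighbors": ["perturbed nonmember text"],
}

examples = to_labeled_examples(row)
# examples == [("text drawn from The Pile", 1), ("text the model never saw", 0)]
```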


This dataset is forked from the repository associated with the following paper:

```bibtex
@inproceedings{duan2024membership,
      title={Do Membership Inference Attacks Work on Large Language Models?}, 
      author={Michael Duan and Anshuman Suri and Niloofar Mireshghallah and Sewon Min and Weijia Shi and Luke Zettlemoyer and Yulia Tsvetkov and Yejin Choi and David Evans and Hannaneh Hajishirzi},
      year={2024},
      booktitle={Conference on Language Modeling (COLM)},
}
```

The only change is in the processing script: the features are now `input` and `label` (the label indicates whether the input data point is a member or a nonmember).