- CODE_OF_CONDUCT.md +9 -0
- LICENSE +21 -0
- README.md +358 -3
- SECURITY.md +41 -0
- SUPPORT.md +13 -0
- logo.png +3 -0
CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,9 @@
# Microsoft Open Source Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).

Resources:

- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns
LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) Microsoft Corporation.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md
CHANGED
@@ -1,3 +1,358 @@
---
language:
- en
license:
- mit
pretty_name: MICO Membership Inference Competition -- Differential Privacy Distinguisher
size_categories:
- n<1K
source_datasets:
- https://www.comp.nus.edu.sg/~reza/files/datasets.html
- https://pytorch.org/vision/master/generated/torchvision.datasets.CIFAR10.html
- https://huggingface.co/datasets/nyu-mll/glue
tags:
- membership-inference
- privacy
- differential-privacy
task_categories:
- tabular-classification
- image-classification
- text-classification
viewer: false
configs:
- config_name: purchase100
  data_files:
  - split: train
    path: purchase100_ddp/train
  - split: dev
    path: purchase100_ddp/dev
  - split: final
    path: purchase100_ddp/final
- config_name: cifar10
  data_files:
  - split: train
    path: cifar10_ddp/train
  - split: dev
    path: cifar10_ddp/dev
  - split: final
    path: cifar10_ddp/final
- config_name: sst2
  data_files:
  - split: train
    path: sst2_ddp/train
  - split: dev
    path: sst2_ddp/dev
  - split: final
    path: sst2_ddp/final
---
# MICO Differential Privacy Distinguisher challenge dataset

![Mico Argentatus (Silvery Marmoset) - William Warby/Flickr](logo.png)

Mico Argentatus (Silvery Marmoset) - William Warby/Flickr

## Dataset Description

- **Repository**: https://github.com/microsoft/MICO/

For the **accompanying code**, visit the GitHub repository of the competition: [https://github.com/microsoft/MICO/](https://github.com/microsoft/MICO/).

## Getting Started

The starting kit notebook for this task is available at: [https://github.com/microsoft/MICO/tree/main/starting-kit](https://github.com/microsoft/MICO/tree/main/starting-kit).

The starting kit notebook walks through how to load the data and make your first submission.
We also provide a library for loading the data with the appropriate splits. This section describes the dataset splits, model training, and the submission format.
## Challenge Construction

For each dataset and each $\varepsilon$ value, we trained 200 different models.
Each model was trained on a different split of the dataset, which is defined by three seed values: `seed_challenge`, `seed_training`, `seed_membership`.
The diagram below illustrates the splits.
Each arrow denotes a call to `torch.utils.data.random_split`, and the labels on the arrows indicate the number of records in each split, e.g. `N = len(dataset)`:

```
Parameters:
- `challenge` : `2m` challenge examples (m = 100)
- `nonmember` : `m` non-member challenge examples, from `challenge`
- `member`    : `m` member challenge examples, from `challenge`
- `training`  : non-challenge examples to use for model training
- `evaluation`: non-challenge examples to use for model evaluation

┌────────────────────────────────────────────────────────────┐
│                          dataset                           │
└──────────────────────────────┬─────────────────────────────┘
                               │ N
                seed_challenge │
           ┌───────────────────┴───────────┐
           │ 2m                            │ N - 2m
           ▼                               ▼
┌───────────────────┬────────────────────────────────────────┐
│     challenge     │                  rest                  │
└─────────┬─────────┴────────────────────┬───────────────────┘
          │ 2m                           │ N - 2m
          │ seed_membership              │ seed_training
     ┌────┴────┐                 ┌───────┴─────────┐
     │ m       │ m               │ n - m           │ N - n - m
     ▼         ▼                 ▼                 ▼
┌──────────┬─────────┬───────────────────┬────────────────────┐
│nonmember │  member │     training      │     evaluation     │
└──────────┴─────────┴───────────────────┴────────────────────┘
```
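The split logic above can be sketched without torch. This illustrative snippet uses Python's `random` module in place of `torch.utils.data.random_split` (so the exact permutations differ from the real challenge splits), and the values of `N` and `n` are invented for the example; only `m = 100` comes from the challenge description:

```python
import random

def seeded_split(indices, sizes, seed):
    """Shuffle indices with a seeded RNG, then cut them into consecutive parts.

    Stand-in for torch.utils.data.random_split: it plays the same role as the
    arrows in the diagram, but produces a different permutation than torch.
    """
    idx = list(indices)
    random.Random(seed).shuffle(idx)
    parts, start = [], 0
    for size in sizes:
        parts.append(idx[start:start + size])
        start += size
    return parts

# Toy sizes: N records in total, n - m training records, m = 100 as in MICO.
N, n, m = 1000, 600, 100
seed_challenge, seed_membership, seed_training = 1, 2, 3

challenge, rest = seeded_split(range(N), [2 * m, N - 2 * m], seed_challenge)
nonmember, member = seeded_split(challenge, [m, m], seed_membership)
training, evaluation = seeded_split(rest, [n - m, N - n - m], seed_training)

# Models are trained on member + training and evaluated on evaluation.
```

Because each split is derived only from its seed, revealing a seed is equivalent to revealing the corresponding split, which is exactly how the scenarios below disclose (or withhold) information.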
Models are trained on `member + training` and evaluated on `evaluation`.
Standard scenarios disclose `challenge` (equivalently, `seed_challenge`).
DP distinguisher scenarios also disclose `training` and `evaluation` (equivalently, `seed_training`).
The ground truth (i.e., `nonmember` and `member`) can be recovered from `seed_membership`.

The 200 models are split into 3 sets:

- `train` [`model_0` ... `model_99`]: for these models, we provide *full* information (including `seed_membership`). They can be used for training your attack (e.g., shadow models).
- `dev` [`model_100` ... `model_149`]: these models are used for the live scoreboard. Performance on these models has no effect on the final ranking.
- `final` [`model_150` ... `model_199`]: these models are used for deciding the final winners. Attack performance on these models will only be revealed at the end of the competition.
## Challenge Data

The challenge data provided to participants is arranged as follows:

- `train/`
  - `model_0/`
    - `seed_challenge`: Given this seed, you'll be able to retrieve the challenge points.
    - `seed_training`: Given this seed, you'll be able to retrieve the training points (excluding 50% of the challenge points).
    - `seed_membership`: Given this seed, you'll be able to retrieve the true membership of the challenge points.
    - `model.pt`: The trained model. (Equivalently, `pytorch_model.bin` and `config.json` for text classification models.)
    - `solution.csv`: A list of `{0,1}` values, indicating the true membership of the challenge points.
  - ...
  - `model_99/`
    - ...

- `dev/`: Used for live scoring.
  - `model_100/`
    - `seed_challenge`
    - `model.pt` (or `pytorch_model.bin` and `config.json`)
  - ...
  - `model_149/`
    - ...

- `final/`: Used for final scoring, which will be used to determine the winners.
  - `model_150/`
    - `seed_challenge`
    - `model.pt` (or `pytorch_model.bin` and `config.json`)
  - ...
  - `model_199/`
    - ...

`train` data is provided for your convenience: it contains full information about the membership of the challenge points.
You can use it for developing your attack (e.g., as shadow models).

You can load the public datasets and individual models with their associated challenge data using the functions provided by the `mico-competition` package in the [accompanying repository](https://github.com/microsoft/MICO) (i.e., `load_cifar10`, `load_model`, `ChallengeDataset.from_path`, etc.).
Please refer to the starting kit for more information.
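For the `train` models, `solution.csv` carries the ground truth as `{0,1}` values. A minimal parsing sketch, assuming one value per row (the file contents below are invented; the exact layout is defined by the released files):

```python
import csv
import io

# Invented solution.csv contents: 1 = member, 0 = non-member.
solution_csv = "1\n0\n1\n0\n"

# Read one 0/1 value per row, skipping any blank rows.
membership = [int(row[0]) for row in csv.reader(io.StringIO(solution_csv)) if row]

# By construction, exactly half of the challenge points are members.
assert sum(membership) == len(membership) // 2
```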
## Predictions

You must submit predictions for both the `dev` and `final` data.
These will be used for live scoring and final scoring, respectively.

Predictions should be provided in **a single `.zip` file** containing the following structure:

- `dev/`: Used for live scoring.
  - `model_100/`
    - `predictions.csv`: Provided by the participant. A list of values between 0 and 1, indicating membership confidence for each challenge point. Each value must be a floating point number in the range `[0.0, 1.0]`, where `1.0` indicates certainty that the challenge point is a member, and `0.0` indicates certainty that it is a non-member.
  - `model_101/`
    - `predictions.csv`
  - ...
- `final/`: Used for final scoring, which will be used to determine the winners.
  - `model_150/`
    - `predictions.csv`: Provided by the participant, in the same format as for `dev` models.
  - ...

The starting kit notebooks in the [accompanying repository](https://github.com/microsoft/MICO) provide example code for preparing a submission.

**IMPORTANT: predictions for `dev` and `final` models must be provided for every submission you make.**
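An archive with this layout can be assembled with the standard library alone. This is only a sketch: the confidence scores are invented, and the single-row CSV layout is an assumption (the starting kit defines the exact `predictions.csv` format expected by the scorer):

```python
import csv
import io
import zipfile

def build_submission(predictions, path="predictions.zip"):
    """Write one predictions.csv per model directory into a single zip file.

    `predictions` maps "dev/model_100"-style paths to lists of confidence
    scores; each score must lie in [0.0, 1.0]. The single-row CSV layout is
    an assumption -- check the starting kit for the authoritative format.
    """
    with zipfile.ZipFile(path, "w") as zf:
        for model_dir, scores in predictions.items():
            if not all(0.0 <= s <= 1.0 for s in scores):
                raise ValueError(f"{model_dir}: scores must lie in [0.0, 1.0]")
            buf = io.StringIO()
            csv.writer(buf).writerow(scores)
            zf.writestr(f"{model_dir}/predictions.csv", buf.getvalue())
    return path

# Made-up confidences for two dev models and one final model.
submission = build_submission({
    "dev/model_100": [0.91, 0.12, 0.55],
    "dev/model_101": [0.40, 0.88, 0.03],
    "final/model_150": [0.77, 0.25, 0.60],
})
```

Validating the `[0.0, 1.0]` range before zipping catches malformed scores locally rather than at submission time.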
## General Information

🥇🥈 [**Winners Announced!**](https://microsoft.github.io/MICO/)

Welcome to the Microsoft Membership Inference Competition (MICO)!

In this competition, you will evaluate the effectiveness of differentially private model training as a mitigation against white-box membership inference attacks.

* [What is Membership Inference?](#what-is-membership-inference)
* [What is MICO?](#what-is-mico)
* [Task Details](#task-details)
* [Submissions and Scoring](#submissions-and-scoring)
* [Winner Selection](#winner-selection)
* [Important Dates](#important-dates)
* [Terms and Conditions](#terms-and-conditions)
* [CodaLab Competitions](#codalab-competitions)
* [Getting Started](#getting-started)
* [Contact](#contact)
* [Contributing](#contributing)
* [Trademarks](#trademarks)

## What is Membership Inference?

Membership inference is a widely studied class of threats against machine learning (ML) models.
The goal of a membership inference attack is to infer whether a given record was used to train a specific ML model.
An attacker might have full access to the model and its weights (known as "white-box" access), or might only be able to query the model on inputs of their choice ("black-box" access).
In either case, a successful membership inference attack could have negative consequences, especially if the model was trained on sensitive data.

Membership inference attacks vary in complexity.
In a simple case, the model might have overfitted to its training data, so that it outputs higher-confidence predictions when queried on training records than on records it has not seen during training.
Recognizing this, an attacker could simply query the model on records of interest, establish a threshold on the model's confidence, and infer that records with higher confidence are likely members of the training data.
In a white-box setting, as is the case for this competition, the attacker can use more sophisticated strategies that exploit access to the internals of the model.
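The confidence-thresholding attack described above takes only a few lines of code. In this sketch the confidence values are invented to mimic an overfitted model:

```python
def threshold_attack(confidences, threshold):
    """Predict 'member' for every record whose confidence exceeds the threshold."""
    return [conf > threshold for conf in confidences]

# Invented confidences: an overfitted model is more confident on records
# it was trained on than on records it has never seen.
train_conf = [0.99, 0.97, 0.95, 0.90]   # training records
unseen_conf = [0.60, 0.75, 0.55, 0.80]  # records not used in training

members_guess = threshold_attack(train_conf, threshold=0.85)     # all True
nonmembers_guess = threshold_attack(unseen_conf, threshold=0.85)  # all False
```

On a model trained with differential privacy, the two confidence distributions overlap much more, which is why the low-$\varepsilon$ scenarios are harder.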
## What is MICO?

In MICO, your goal is to perform white-box membership inference against a series of trained ML models that we provide.
Specifically, given a model and a set of *challenge points*, the aim is to decide which of these challenge points were used to train the model.

You can compete on any of four separate membership inference tasks against classification models for image, text, and tabular data, as well as on a special _Differential Privacy Distinguisher_ task spanning all three modalities.
Each task will be scored separately.
You do not need to participate in all of them, and can choose to participate in as many as you like.
Throughout the competition, submissions will be scored on a subset of the evaluation data and ranked on a live scoreboard.
When submission closes, the final scores will be computed on a separate subset of the evaluation data.

The winner of each task will be eligible for an award of **$2,000 USD** from Microsoft, and the runner-up of each task for an award of **$1,000 USD** from Microsoft (in the event of tied entries, these awards may be adjusted).
This competition is co-located with the [IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) 2023](https://satml.org/), and the winners will be invited to present their strategies at the conference.
## Task Details

For each of the four tasks, we provide a set of models trained on different splits of a public dataset.
For each of these models, we provide `m` challenge points; exactly half of them are _members_ (i.e., used to train the model) and half are _non-members_ (i.e., they come from the same dataset, but were not used to train the model).
Your goal is to determine which challenge points are members and which are non-members.

Each of the first three tasks consists of three different _scenarios_ with increasing difficulty, determined by the differential privacy guarantee of the algorithm used to train the target models: $\varepsilon = \infty$, high $\varepsilon$, and low $\varepsilon$.
All scenarios share the same model architecture and are trained for the same number of epochs.
The $\varepsilon = \infty$ scenario uses Stochastic Gradient Descent (SGD) without any differential privacy guarantee, while the high $\varepsilon$ and low $\varepsilon$ scenarios use Differentially-Private SGD with a high and low privacy budget $\varepsilon$, respectively.
The lower the privacy budget $\varepsilon$, the more _private_ the model.

In the fourth task, the target models span all three modalities (image, text, and tabular data) and are trained with a low privacy budget.
The model architectures and hyperparameters are the same as for the first three tasks.
However, we reveal the training data of the models, except for the `m/2` member challenge points.

| Task | Scenario | Dataset | Model Architecture | $\varepsilon$ | Other training points given |
| :--- | :----: | :----: | :----: | :----: | :----: |
| Image | I1 | CIFAR-10 | 4-layer CNN | $\infty$ | No |
| | I2 | CIFAR-10 | 4-layer CNN | High | No |
| | I3 | CIFAR-10 | 4-layer CNN | Low | No |
| Text | X1 | SST-2 | RoBERTa-Base | $\infty$ | No |
| | X2 | SST-2 | RoBERTa-Base | High | No |
| | X3 | SST-2 | RoBERTa-Base | Low | No |
| Tabular Data | T1 | Purchase-100 | 3-layer fully connected NN | $\infty$ | No |
| | T2 | Purchase-100 | 3-layer fully connected NN | High | No |
| | T3 | Purchase-100 | 3-layer fully connected NN | Low | No |
| DP Distinguisher | D1 | CIFAR-10 | 4-layer CNN | Low | Yes |
| | D2 | SST-2 | RoBERTa-Base | Low | Yes |
| | D3 | Purchase-100 | 3-layer fully connected NN | Low | Yes |
## Submissions and Scoring

Submissions will be ranked based on their performance in white-box membership inference against the provided models.

There are three sets of challenges: `train`, `dev`, and `final`.
For models in `train`, we reveal the full training dataset, and consequently the ground-truth membership of the challenge points.
These models can be used by participants to develop their attacks.
For models in the `dev` and `final` sets, no ground truth is revealed, and participants must submit their membership predictions for the challenge points.

During the competition, there will be a live scoreboard based on the `dev` challenges.
The final ranking will be decided on the `final` set; scoring for this set will be withheld until the competition ends.

For each challenge point, the submission must provide a value indicating the confidence level with which the challenge point is a member.
Each value must be a floating point number in the range `[0.0, 1.0]`, where `1.0` indicates certainty that the challenge point is a member, and `0.0` indicates certainty that it is a non-member.

Submissions will be evaluated according to their **True Positive Rate at 10% False Positive Rate** (i.e., `TPR @ 0.1 FPR`).
In this context, *positive* challenge points are members and *negative* challenge points are non-members.
For each submission, the scoring program concatenates the confidence values for all models (`dev` and `final` treated separately) and compares them to the reference ground truth.
The scoring program determines the minimum confidence threshold for membership such that at most 10% of the non-member challenge points are incorrectly classified as members.
The score is the True Positive Rate achieved at this threshold (i.e., the proportion of correctly classified member challenge points).
The live scoreboard shows additional scores (i.e., TPR at other FPRs, membership inference advantage, accuracy, AUC-ROC score), but these are informational only.

You are allowed to make multiple submissions, but only your latest submission will be considered.
For a submission to be valid, you must submit confidence values for all challenge points in all three scenarios of the task.

Hints and tips:

- We do realize that the score of a submission leaks some information about the ground truth.
  However, using this information to optimize a submission based only on the live scoreboard (i.e., on `dev`) is a bad strategy, as that score has no bearing on the final ranking.
- Pay special attention to the evaluation metric (`TPR @ 0.1 FPR`).
  Your average accuracy at predicting membership may be misleading. Your attack should aim to maximize the number of predicted members whilst remaining below the specified FPR.
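The scoring rule above can be sketched in a few lines. This is an illustrative reimplementation of the metric as described, not the actual CodaLab scoring program, and the example confidences are invented:

```python
def tpr_at_fpr(confidences, membership, max_fpr=0.1):
    """TPR at the minimum threshold whose FPR is at most `max_fpr`.

    A point is predicted 'member' when its confidence exceeds the threshold;
    `membership` holds 1 for members and 0 for non-members.
    """
    members = [c for c, m in zip(confidences, membership) if m]
    nonmembers = [c for c, m in zip(confidences, membership) if not m]
    # FPR decreases as the threshold grows, so the first threshold meeting
    # the FPR constraint is the minimum one; report its TPR.
    for t in sorted(set(confidences)):
        fpr = sum(c > t for c in nonmembers) / len(nonmembers)
        if fpr <= max_fpr:
            return sum(c > t for c in members) / len(members)
    return 0.0

# Invented submission: 4 members and 4 non-members. With only 4 non-members,
# an FPR of at most 0.1 allows zero false positives; at that threshold,
# 3 of the 4 members are still caught, so the score is 0.75.
conf = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
truth = [1, 1, 1, 0, 1, 0, 0, 0]
score = tpr_at_fpr(conf, truth)
```

This makes the hint above concrete: the member at confidence 0.4 contributes nothing, so raising a few members above the threshold matters more than improving average accuracy.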
## Winner Selection

Winners will be selected independently for each task (i.e., if you choose not to participate in certain tasks, this will not affect your rank in the tasks in which you do participate).
For each task, the winner will be the participant achieving the highest average score (`TPR @ 0.1 FPR`) across the three scenarios.

## Important Dates

- Submission opens: November 8, 2022
- Submission closes: ~~January 12, 2023, 23:59 (Anywhere on Earth)~~ **January 26, 2023, 23:59 (Anywhere on Earth)**
- Conference: February 8-10, 2023
## Terms and Conditions

- This challenge is subject to the [Microsoft Bounty Terms and Conditions](https://www.microsoft.com/en-us/msrc/bounty-terms).
- Microsoft employees and students/employees of Imperial College London may submit solutions, but are not eligible to receive awards.
- Submissions will be evaluated by a panel of judges according to the aims of the competition.
- Winners may be asked to provide their code and/or a description of their strategy to the judges for verification purposes.

## CodaLab Competitions

- [Image (CIFAR-10)](https://codalab.lisn.upsaclay.fr/competitions/8551)
- [Text (SST-2)](https://codalab.lisn.upsaclay.fr/competitions/8554)
- [Tabular Data (Purchase-100)](https://codalab.lisn.upsaclay.fr/competitions/8553)
- [DP Distinguisher](https://codalab.lisn.upsaclay.fr/competitions/8552)

## Getting Started

First, register on CodaLab for the tasks in which you would like to participate.
Once registered, you will be given URLs from which to download the challenge data.

This repository contains starting kit Jupyter notebooks that will guide you through making your first submission.
To use them, clone this repository and follow the steps below:

- `pip install -r requirements.txt`. You may want to do this in a [virtualenv](https://docs.python.org/3/library/venv.html).
- `pip install -e .`
- `cd starting-kit/`
- `pip install -r requirements-starting-kit.txt`
- The corresponding starting kit notebook illustrates how to load the challenge data, run a basic membership inference attack, and prepare an archive to submit to CodaLab.
## Contact

For any additional queries or suggestions, please contact [[email protected]](mailto:[email protected]).

## Contributing

This project welcomes contributions and suggestions.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment).
Simply follow the instructions provided by the bot.
You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.
SECURITY.md
ADDED
@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->

## Security

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.

## Reporting Security Issues

**Please do not report security vulnerabilities through public GitHub issues.**

Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).

If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).

You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly.

If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.

## Preferred Languages

We prefer all communications to be in English.

## Policy

Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).

<!-- END MICROSOFT SECURITY.MD BLOCK -->
SUPPORT.md
ADDED
@@ -0,0 +1,13 @@
# Support

## How to file issues and get help

This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue.

For help and questions about using this project, please send email to [email protected].

## Microsoft Support Policy

Support for this project is limited to the resources listed above.
logo.png
ADDED
Git LFS Details