---
title: Bias AUC
emoji: 🏆
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
license: apache-2.0
---
# Bias AUC

## Description of Metric

Bias AUC is a suite of threshold-agnostic metrics that provides a nuanced view of unintended model bias by considering the various ways in which a classifier's score distribution can vary across designated identity groups.

The following metrics are computed, where $D^{-}$ denotes the negative examples in the background set, $D^{+}$ the positive examples in the background set, $D^{-}_{g}$ the negative examples in the identity subgroup, and $D^{+}_{g}$ the positive examples in the identity subgroup (BPSN is short for Background Positive, Subgroup Negative; BNSP for Background Negative, Subgroup Positive):

$$
\begin{aligned}
\text{Subgroup AUC} &= \text{AUC}\left( D^{-}_{g} + D^{+}_{g} \right) && (1) \\
\text{BPSN AUC} &= \text{AUC}\left( D^{+} + D^{-}_{g} \right) && (2) \\
\text{BNSP AUC} &= \text{AUC}\left( D^{-} + D^{+}_{g} \right) && (3)
\end{aligned}
$$
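To make these definitions concrete, the sketch below computes all three values directly with scikit-learn's `roc_auc_score`, assuming binary labels, one positive-class score per example, and a boolean mask marking membership in a single identity subgroup. The `bias_aucs` helper and its inputs are illustrative only and are not part of this metric's API.

```python
# Minimal sketch of the three bias AUCs (not part of the Intel/bias_auc API).
# Assumes binary labels (0/1), one positive-class score per example, and a
# boolean mask that is True for examples belonging to the identity subgroup.
import numpy as np
from sklearn.metrics import roc_auc_score


def bias_aucs(labels, scores, in_subgroup):
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    in_subgroup = np.asarray(in_subgroup, dtype=bool)

    background = ~in_subgroup
    neg, pos = labels == 0, labels == 1

    def auc(mask):
        # AUC restricted to the selected subset of examples
        return roc_auc_score(labels[mask], scores[mask])

    return {
        # (1) Subgroup AUC: negatives and positives from the subgroup only
        "Subgroup AUC": auc(in_subgroup),
        # (2) BPSN AUC: background positives + subgroup negatives
        "BPSN AUC": auc((background & pos) | (in_subgroup & neg)),
        # (3) BNSP AUC: background negatives + subgroup positives
        "BNSP AUC": auc((background & neg) | (in_subgroup & pos)),
    }
```

Intuitively, a low BPSN AUC means negative examples from the subgroup tend to receive scores as high as positive background examples, so the subgroup is more exposed to false positives; a low BNSP AUC indicates the opposite pattern, with more false negatives for the subgroup.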
## How to Use
```python
from evaluate import load

# Identity subgroup(s) referred to in each example
target = [['Islam'],
          ['Sexuality'],
          ['Sexuality'],
          ['Islam']]

# Ground-truth binary label for each example
label = [0, 0, 1, 1]

# Model output scores for each example (here, per-class probabilities)
output = [[0.44452348351478577, 0.5554765462875366],
          [0.4341845214366913, 0.5658154487609863],
          [0.400595098733902, 0.5994048714637756],
          [0.3840397894382477, 0.6159601807594299]]

# Load the metric from the Hugging Face Hub
metric = load('Intel/bias_auc')

metric.add_batch(target=target,
                 label=label,
                 output=output)

# Compute Subgroup, BPSN, and BNSP AUC scores
metric.compute(subgroups=None)
```
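The `subgroups` argument of `compute` can also restrict the report to particular identity groups. The call below is a sketch under the assumption that it accepts an iterable of subgroup names matching the entries in `target`; check the metric card for the exact accepted values.

```python
# Hypothetical call: assumes `subgroups` accepts an iterable of identity names.
metric.compute(subgroups=['Islam', 'Sexuality'])
```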