---
title: Bias AUC
emoji: π
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
license: apache-2.0
---
# Bias AUC

## Description of Metric
A suite of threshold-agnostic metrics that provide a nuanced view of unintended bias by considering the various ways that a classifier's score distribution can vary across designated groups.
The following are computed, where $D^-$ is the set of negative examples in the background set, $D^+$ is the set of positive examples in the background set, $D_g^-$ is the set of negative examples in the identity subgroup, and $D_g^+$ is the set of positive examples in the identity subgroup:

- **Subgroup AUC** $= \mathrm{AUC}(D_g^- + D_g^+)$
- **BPSN AUC** (Background Positive, Subgroup Negative) $= \mathrm{AUC}(D^+ + D_g^-)$
- **BNSP AUC** (Background Negative, Subgroup Positive) $= \mathrm{AUC}(D^- + D_g^+)$
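These definitions can be sketched in plain Python. The helper below is not the Intel/bias_auc implementation (the function names are illustrative only); it is a minimal sketch of the three AUCs using the pairwise (Mann-Whitney) formulation of AUC, assuming 0/1 labels and a single score per example:

```python
def auc(labels, scores):
    """Pairwise (Mann-Whitney) AUC: the probability that a randomly chosen
    positive example outscores a randomly chosen negative one, ties = 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def bias_aucs(labels, scores, in_subgroup):
    """Subgroup, BPSN, and BNSP AUC for one identity subgroup.

    labels: 0/1 ground truth; scores: score for the positive class;
    in_subgroup: booleans marking membership in the identity subgroup.
    """
    rows = list(zip(labels, scores, in_subgroup))
    # Subgroup AUC: only examples from the identity subgroup.
    sub = [(y, s) for y, s, g in rows if g]
    # BPSN: background positives + subgroup negatives.
    bpsn = [(y, s) for y, s, g in rows if (not g and y == 1) or (g and y == 0)]
    # BNSP: background negatives + subgroup positives.
    bnsp = [(y, s) for y, s, g in rows if (not g and y == 0) or (g and y == 1)]
    return {name: auc([y for y, _ in d], [s for _, s in d])
            for name, d in [('Subgroup', sub), ('BPSN', bpsn), ('BNSP', bnsp)]}
```

A low BPSN AUC means the model tends to score the subgroup's negative examples above the background's positive examples, i.e. it over-flags the subgroup; a low BNSP AUC indicates the opposite pattern.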
## How to Use
```python
from evaluate import load

target = [['Islam'],
          ['Sexuality'],
          ['Sexuality'],
          ['Islam']]
label = [0, 0, 1, 1]
output = [[0.44452348351478577, 0.5554765462875366],
          [0.4341845214366913, 0.5658154487609863],
          [0.400595098733902, 0.5994048714637756],
          [0.3840397894382477, 0.6159601807594299]]

metric = load('Intel/bias_auc')
metric.add_batch(target=target,
                 label=label,
                 output=output)
metric.compute(subgroups=None)
```
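To see what the metric works from, note that the second column of `output` is the score for the positive class and `target` gives each example's identity annotations. A hand computation of the Subgroup AUC for `'Islam'` on the example data above (a sketch of the definition, not the metric's internals):

```python
target = [['Islam'], ['Sexuality'], ['Sexuality'], ['Islam']]
label = [0, 0, 1, 1]
output = [[0.44452348351478577, 0.5554765462875366],
          [0.4341845214366913, 0.5658154487609863],
          [0.400595098733902, 0.5994048714637756],
          [0.3840397894382477, 0.6159601807594299]]

# Positive-class scores are the second column of `output`.
scores = [row[1] for row in output]

# Restrict to the 'Islam' subgroup and count correctly ordered
# positive/negative pairs (pairwise AUC, ties counted as half).
sub = [(y, s) for y, s, t in zip(label, scores, target) if 'Islam' in t]
pos = [s for y, s in sub if y == 1]
neg = [s for y, s in sub if y == 0]
subgroup_auc = sum((p > n) + 0.5 * (p == n)
                   for p in pos for n in neg) / (len(pos) * len(neg))
# The single 'Islam' positive (0.6159...) outscores the single
# 'Islam' negative (0.5554...), so the Subgroup AUC here is 1.0.
```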