---
title: Geometric Mean
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
tags:
  - evaluate
  - metric
description: >-
  The geometric mean (G-mean) is the root of the product of class-wise
  sensitivity.
---

# Metric Card for Geometric Mean

## Metric Description

The geometric mean (G-mean) is the root of the product of class-wise sensitivity. This measure tries to maximize the accuracy on each class while keeping these accuracies balanced. For binary classification, the G-mean is the square root of the product of sensitivity and specificity.
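As a sanity check, the binary case can be reproduced by hand: sensitivity is the recall on the positive class and specificity is the recall on the negative class. A minimal sketch in plain Python (illustrative only, not part of the module):

```python
import math

# Toy binary data (the same as Example 1 below).
references  = [0, 1, 0, 1, 0]
predictions = [0, 0, 1, 1, 0]

tp = sum(r == 1 and p == 1 for r, p in zip(references, predictions))
fn = sum(r == 1 and p == 0 for r, p in zip(references, predictions))
tn = sum(r == 0 and p == 0 for r, p in zip(references, predictions))
fp = sum(r == 0 and p == 1 for r, p in zip(references, predictions))

sensitivity = tp / (tp + fn)  # recall on the positive class: 0.5
specificity = tn / (tn + fp)  # recall on the negative class: ~0.67

# Binary G-mean: square root of the product of the two recalls.
print(round(math.sqrt(sensitivity * specificity), 2))  # 0.58
```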

## How to Use

At minimum, this metric requires predictions and references as input:

```python
>>> import evaluate
>>> gmean_metric = evaluate.load("geometric_mean")
>>> results = gmean_metric.compute(predictions=[0, 1], references=[0, 1])
>>> print(results)
{'geometric_mean': 1.0}
```

### Inputs

- `predictions` (list of int): Predicted labels.
- `references` (list of int): Ground truth labels.
- `labels` (list of int): The set of labels to include when `average != 'binary'`, and their order if `average` is None. Labels present in the data can be excluded, for example to calculate a multiclass average that ignores a majority negative class, while labels not present in the data will result in 0 components in a macro average. Defaults to None.
- `pos_label` (string or int): The class to report if `average='binary'` and the data is binary. If the data are multiclass, this is ignored; setting `labels=[pos_label]` with `average != 'binary'` will report scores for that label only. Defaults to 1.
- `average` (string): If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data (see the sketch after this list). Defaults to 'multiclass'.
  - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the targets (`y_{true,pred}`) are binary.
  - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives and false positives.
  - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label).
  - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification, where this differs from accuracy_score).
- `sample_weight` (list of float): Sample weights. Defaults to None.
- `correction` (float): Replaces class-wise sensitivities of zero (unrecognized classes) with the given value. Defaults to 0.0.
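The optional arguments mirror `imblearn.metrics.geometric_mean_score`, so, assuming `compute` forwards them unchanged, a call exercising them might look like this (argument values are illustrative):

```python
>>> gmean_metric = evaluate.load("geometric_mean")
>>> results = gmean_metric.compute(
...     predictions=[0, 2, 1, 0, 0, 1],
...     references=[0, 1, 2, 0, 1, 2],
...     labels=[0, 1, 2],   # report the classes in this order
...     average=None,       # per-class scores instead of one aggregate
...     correction=0.001,   # avoid a hard zero for unrecognized classes
... )
```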

### Output Values

- `geometric_mean` (float or array of float): geometric mean score, or list of geometric mean scores, depending on the value passed to `average`. The minimum possible value is 0 and the maximum is 1; higher geometric mean scores are better.

Output Example:

```python
{'geometric_mean': 0.4714045207910317}
```

### Examples

Example 1: A simple binary example.

```python
>>> geometric_mean = evaluate.load("geometric_mean")
>>> results = geometric_mean.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(round(results['geometric_mean'], 2))
0.58
```

Example 2: The same binary example as Example 1, but with `sample_weight` included.

```python
>>> geometric_mean = evaluate.load("geometric_mean")
>>> results = geometric_mean.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(round(results['geometric_mean'], 2))
0.35
```

Example 3: A multiclass example, with `average` set to `'macro'`.

```python
>>> geometric_mean = evaluate.load("geometric_mean")
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = geometric_mean.compute(predictions=predictions, references=references, average="macro")
>>> print(round(results['geometric_mean'], 2))
0.47
```

## Limitations and Bias

Because the G-mean is the root of a product of class-wise sensitivities, a single class that is never predicted correctly has a sensitivity of zero and drives the overall score to 0, regardless of performance on the other classes. The `correction` parameter mitigates this by substituting a small value for zero sensitivities.
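Since the parameters above mirror `imblearn.metrics.geometric_mean_score`, the effect can be demonstrated with that function directly (a sketch; the data are chosen for illustration):

```python
>>> from imblearn.metrics import geometric_mean_score
>>> references  = [0, 1, 2, 0, 1, 2]
>>> predictions = [0, 1, 0, 0, 1, 1]  # class 2 is never predicted correctly
>>> print(round(geometric_mean_score(references, predictions), 3))
0.0
>>> print(round(geometric_mean_score(references, predictions, correction=0.001), 3))
0.1
```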

## Citation(s)

```bibtex
@article{imbalanced-learn,
  title={Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning},
  author={Lema{\^i}tre, G. and Nogueira, F. and Aridas, C.},
  journal={Journal of Machine Learning Research},
  volume={18},
  pages={1-5},
  year={2017}
}
```

## Further References

- [imbalanced-learn documentation for `geometric_mean_score`](https://imbalanced-learn.org/stable/references/generated/imblearn.metrics.geometric_mean_score.html)