---
title: ROC Curve
emoji: π
colorFrom: yellow
colorTo: green
sdk: gradio
sdk_version: 3.17.0
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  Compute Receiver operating characteristic (ROC).
  Note: this implementation is restricted to the binary classification task.
---
# Metric Card for ROC Curve
## Metric Description
Compute Receiver operating characteristic (ROC).
Note: this implementation is restricted to the binary classification task.
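To make the binary setting concrete, the following sketch (illustrative only, plain NumPy, not part of this metric's code) computes one point of the ROC curve at a single threshold; the full curve is obtained by sweeping the threshold over the prediction scores.
```python
import numpy as np

# Illustrative sketch: one (fpr, tpr) point on the ROC curve.
# At a given threshold, scores >= threshold are predicted positive.
references = np.array([1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.6, 0.7, 0.1])
threshold = 0.6

predicted_positive = scores >= threshold
tpr = (predicted_positive & (references == 1)).sum() / (references == 1).sum()
fpr = (predicted_positive & (references == 0)).sum() / (references == 0).sum()
print(fpr, tpr)  # 0.0 0.666... — matches the point for threshold 0.6 in the example below
```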
## How to Use
At minimum, this metric requires prediction scores and references as inputs.
```python
>>> import evaluate
>>> roc_metric = evaluate.load("BucketHeadP65/roc_curve")
>>> results = roc_metric.compute(references=[1, 0, 1, 1, 0], prediction_scores=[0.1, 0.4, 0.6, 0.7, 0.1])
>>> print(results)
{'roc_curve': (array([0. , 0. , 0. , 0.5, 1. ]), array([0. , 0.33333333, 0.66666667, 0.66666667, 1. ]), array([1.69999999, 0.69999999, 0.60000002, 0.40000001, 0.1 ]))}
```
### Inputs
- **prediction_scores** (`list` of `float`): Target scores, which can be probability estimates of the positive class, confidence values, or non-thresholded measures of decisions (as returned by `decision_function` on some classifiers).
- **references** (`list` of `int`): Ground truth labels.
- **pos_label** (`int` or `str`): The label of the positive class. Defaults to None. If the labels are not either {-1, 1} or {0, 1}, then `pos_label` should be given explicitly (see the sketch after this list).
- **sample_weight** (`list` of `float`): Sample weights. Defaults to None.
- **drop_intermediate** (`bool`): Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves. Defaults to True.
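A hedged sketch of how the optional arguments can be passed to `compute` (the label values and weights below are invented for illustration; with labels {1, 2}, which are neither {0, 1} nor {-1, 1}, `pos_label` must be given explicitly):
```python
>>> import evaluate
>>> roc_metric = evaluate.load("BucketHeadP65/roc_curve")
>>> results = roc_metric.compute(
...     references=[2, 1, 2, 2, 1],
...     prediction_scores=[0.1, 0.4, 0.6, 0.7, 0.1],
...     pos_label=2,
...     sample_weight=[1.0, 1.0, 2.0, 1.0, 0.5],
...     drop_intermediate=False,
... )
```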
### Output Values
- **fpr** (`ndarray`): Increasing false positive rates such that element `i` is the false positive rate of predictions with score >= `thresholds[i]`.
- **tpr** (`ndarray`): Increasing true positive rates such that element `i` is the true positive rate of predictions with score >= `thresholds[i]`.
- **thresholds** (`ndarray`): Decreasing thresholds on the decision function used to compute `fpr` and `tpr`. `thresholds[0]` represents no instances being predicted and is arbitrarily set to `max(y_score) + 1`.
Output Example(s):
```python
{'roc_curve': (array([0. , 0. , 0. , 0.5, 1. ]), array([0. , 0.33333333, 0.66666667, 0.66666667, 1. ]), array([1.69999999, 0.69999999, 0.60000002, 0.40000001, 0.1 ]))}
```
This metric outputs a dictionary containing the `fpr`, `tpr`, and `thresholds`.
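Since the metric returns the raw curve rather than a scalar, a common follow-up is to unpack the tuple and, for example, compute the area under the curve. A minimal NumPy sketch, assuming `results` is the dictionary from the How to Use example above:
```python
>>> import numpy as np
>>> fpr, tpr, thresholds = results["roc_curve"]
>>> # Area under the ROC curve via the trapezoidal rule
>>> # (np.trapz is named np.trapezoid in NumPy >= 2.0).
>>> round(float(np.trapz(tpr, fpr)), 2)
0.75
```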
## Citation(s)
```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```
## Further References
- [Wikipedia entry for the Receiver operating characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)