---
title: panoptic-quality
tags:
- evaluate
- metric
description: PanopticQuality score
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🖼️
---
# SEA-AI/PanopticQuality
This Hugging Face metric uses `seametrics.segmentation.PanopticQuality` under the hood to compute a panoptic quality score. It is a wrapper around the torchmetrics class [`torchmetrics.detection.PanopticQuality`](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).
## Getting Started
To get started with PanopticQuality, make sure you have the necessary dependencies installed. This metric relies on the `evaluate`, `seametrics` and `seametrics[segmentation]` libraries for metric calculation and integration with FiftyOne datasets.
### Basic Usage
```python
>>> import evaluate
>>> from seametrics.payload.processor import PayloadProcessor
>>> MODEL_FIELD = ["maskformer-27k-100ep"]
>>> payload = PayloadProcessor("SAILING_PANOPTIC_DATASET_QA",
...                            gt_field="ground_truth_det",
...                            models=MODEL_FIELD,
...                            sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
...                            excluded_classes=[""]).payload
>>> module = evaluate.load("SEA-AI/PanopticQuality")
>>> module.add_payload(payload, model_name=MODEL_FIELD[0])
>>> module.compute()
100%|██████████| 3/3 [00:03<00:00, 1.30s/it]
Added data ...
Start computing ...
Finished!
{'scores': {'MOTORBOAT': [0.18632257426639526,
0.698709617058436,
0.2666666805744171],
'FAR_AWAY_OBJECT': [0.0, 0.0, 0.0],
'SAILING_BOAT_WITH_CLOSED_SAILS': [0.0, 0.0, 0.0],
'SHIP': [0.3621737026917471, 0.684105846616957, 0.529411792755127],
'WATERCRAFT': [0.0, 0.0, 0.0],
'SPHERICAL_BUOY': [0.0, 0.0, 0.0],
'FLOTSAM': [0.0, 0.0, 0.0],
'SAILING_BOAT_WITH_OPEN_SAILS': [0.0, 0.0, 0.0],
'CONTAINER': [0.0, 0.0, 0.0],
'PILLAR_BUOY': [0.0, 0.0, 0.0],
'AERIAL_ANIMAL': [0.0, 0.0, 0.0],
'HUMAN_IN_WATER': [0.0, 0.0, 0.0],
'WOODEN_LOG': [0.0, 0.0, 0.0],
'MARITIME_ANIMAL': [0.0, 0.0, 0.0],
'WATER': [0.9397601008415222, 0.9397601008415222, 1.0],
'SKY': [0.9674496332804362, 0.9674496332804362, 1.0],
'LAND': [0.30757412078761204, 0.8304501533508301, 0.37037035822868347],
'CONSTRUCTION': [0.0, 0.0, 0.0],
'OWN_BOAT': [0.0, 0.0, 0.0],
'ALL': [0.14543579641409013, 0.21686712374464112, 0.16665520166095935]},
'numbers': {'MOTORBOAT': [6, 15, 18, 4.1922577023506165],
'FAR_AWAY_OBJECT': [0, 8, 9, 0.0],
'SAILING_BOAT_WITH_CLOSED_SAILS': [0, 2, 0, 0.0],
'SHIP': [9, 1, 15, 6.156952619552612],
'WATERCRAFT': [0, 9, 12, 0.0],
'SPHERICAL_BUOY': [0, 4, 22, 0.0],
'FLOTSAM': [0, 0, 1, 0.0],
'SAILING_BOAT_WITH_OPEN_SAILS': [0, 6, 0, 0.0],
'CONTAINER': [0, 0, 0, 0.0],
'PILLAR_BUOY': [0, 0, 9, 0.0],
'AERIAL_ANIMAL': [0, 0, 0, 0.0],
'HUMAN_IN_WATER': [0, 0, 0, 0.0],
'WOODEN_LOG': [0, 0, 0, 0.0],
'MARITIME_ANIMAL': [0, 0, 0, 0.0],
'WATER': [15, 0, 0, 14.096401512622833],
'SKY': [15, 0, 0, 14.511744499206543],
'LAND': [5, 9, 8, 4.15225076675415],
'CONSTRUCTION': [0, 0, 0, 0.0],
'OWN_BOAT': [0, 0, 8, 0.0],
'ALL': [50, 54, 102, 43.109607100486755]}}
```
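The returned dictionary can be indexed directly. The snippet below (assuming `results` holds the output of `module.compute()` from the example above) reads one per-class entry and one aggregated entry:

```python
results = module.compute()

# Per-class "scores" entries are lists in the order [PQ, SQ, RQ]
ship_pq, ship_sq, ship_rq = results["scores"]["SHIP"]
print(f"SHIP: PQ={ship_pq:.3f}, SQ={ship_sq:.3f}, RQ={ship_rq:.3f}")

# "numbers" entries are lists in the order [TP, FP, FN, sum of IoU]
tp, fp, fn, iou_sum = results["numbers"]["ALL"]
print(f"Overall: TP={tp}, FP={fp}, FN={fn}, summed IoU={iou_sum:.2f}")
```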
## Metric Settings
The metric takes four optional input parameters: __label2id__, __stuff__, __per_class__ and __split_sq_rq__.
* `label2id: Dict[str, int]`: This dictionary maps string labels to an integer representation.
If it is not provided, the following default mapping is used:
`{'WATER': 0,
'SKY': 1,
'LAND': 2,
'MOTORBOAT': 3,
'FAR_AWAY_OBJECT': 4,
'SAILING_BOAT_WITH_CLOSED_SAILS': 5,
'SHIP': 6,
'WATERCRAFT': 7,
'SPHERICAL_BUOY': 8,
'CONSTRUCTION': 9,
'FLOTSAM': 10,
'SAILING_BOAT_WITH_OPEN_SAILS': 11,
'CONTAINER': 12,
'PILLAR_BUOY': 13,
'AERIAL_ANIMAL': 14,
'HUMAN_IN_WATER': 15,
'OWN_BOAT': 16,
'WOODEN_LOG': 17,
'MARITIME_ANIMAL': 18}
`
* `stuff: List[str]`: This list holds all string labels that are treated as stuff (amorphous background regions).
If it is not provided, the following default is used:
`["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]`
* `per_class: bool = True`: By default, the results are reported per class.
Setting this to `False` aggregates the results:
  * the "scores" are averaged over classes
  * the "numbers" are summed over classes
* `split_sq_rq: bool = True`: By default, the PQ score is returned in three parts: the PQ score itself plus its segmentation quality (SQ) and recognition quality (RQ) components.
Setting this to `False` returns the PQ score only (PQ = SQ * RQ).
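A minimal sketch of passing these settings, assuming (as with other `evaluate` modules) that keyword arguments given to `evaluate.load` are forwarded to the metric's constructor; the label names and IDs below are illustrative only:

```python
import evaluate

# Hypothetical custom configuration (example values, not the defaults)
custom_label2id = {"WATER": 0, "SKY": 1, "LAND": 2, "SHIP": 3, "MOTORBOAT": 4}
custom_stuff = ["WATER", "SKY", "LAND"]

module = evaluate.load(
    "SEA-AI/PanopticQuality",
    label2id=custom_label2id,  # custom label-to-ID mapping
    stuff=custom_stuff,        # labels treated as stuff
    per_class=False,           # aggregate results over all classes
    split_sq_rq=False,         # return PQ only instead of [PQ, SQ, RQ]
)
```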
## Output Values
A dictionary containing the following keys:
* __scores__: A dictionary that contains one key per label if `per_class == True`; otherwise it only contains the key __ALL__.
For each key, it holds a list with the scores in the order PQ, SQ, RQ. If `split_sq_rq == False`, the list contains only the PQ score.
* __numbers__: A dictionary that contains one key per label if `per_class == True`; otherwise it only contains the key __ALL__.
For each key, it holds a list of four elements: TP, FP, FN and IOU:
* __TP__: number of true positive predictions
* __FP__: number of false positive predictions
* __FN__: number of false negative predictions
* __IOU__: sum of the IoU values of all TP predictions with their matched ground truth segments
From these values the final scores can be computed: SQ = IOU / TP, RQ = TP / (TP + 0.5 * FP + 0.5 * FN) and PQ = SQ * RQ.
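As an illustration, the 'SHIP' entries from the example output above can be plugged into these formulas:

```python
# [TP, FP, FN, sum of IoU] for 'SHIP', taken from the example output above
tp, fp, fn, iou_sum = 9, 1, 15, 6.156952619552612

sq = iou_sum / tp                     # segmentation quality: mean IoU of matched segments
rq = tp / (tp + 0.5 * fp + 0.5 * fn)  # recognition quality
pq = sq * rq                          # panoptic quality

print(pq, sq, rq)  # ≈ 0.3622, 0.6841, 0.5294 — matches the 'SHIP' scores above
```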
## Further References
- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **Torchmetrics**: See the [`torchmetrics.detection.PanopticQuality` documentation](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html) for the underlying implementation.
- **Understanding Metrics**: The Panoptic Segmentation task, as well as Panoptic Quality as the evaluation metric, were introduced [in this paper](https://arxiv.org/pdf/1801.00868.pdf).
## Contribution
Your contributions are welcome! If you'd like to improve SEA-AI/PanopticQuality or add new features, please feel free to fork the repository, make your changes, and submit a pull request. |