---
title: panoptic-quality
tags:
- evaluate
- metric
description: PanopticQuality score
sdk: gradio
sdk_version: 4.44.1
app_file: app.py
pinned: false
emoji: 🖼️
---
# SEA-AI/PanopticQuality
This Hugging Face metric uses `seametrics.segmentation.PanopticQuality` under the hood to compute a panoptic quality score. It is a wrapper around the torchmetrics class [`torchmetrics.detection.PanopticQuality`](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).
## Getting Started
To get started with PanopticQuality, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics` (with the `segmentation` extra) libraries for metric calculation and integration with FiftyOne datasets.
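A minimal install sketch, assuming `seametrics` is installed from the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics) (the exact install source may differ for your setup):
```bash
pip install evaluate
pip install "seametrics[segmentation] @ git+https://github.com/SEA-AI/seametrics"
```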
### Basic Usage
```python
>>> import evaluate
>>> from seametrics.payload.processor import PayloadProcessor
>>> MODEL_FIELD = ["maskformer-27k-100ep"]
>>> payload = PayloadProcessor("SAILING_PANOPTIC_DATASET_QA",
...                            gt_field="ground_truth_det",
...                            models=MODEL_FIELD,
...                            sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
...                            excluded_classes=[""]).payload
>>> module = evaluate.load("SEA-AI/PanopticQuality", area_rng=[(0, 100),(100, 1e9)])
>>> module.add_payload(payload, model_name=MODEL_FIELD[0])
>>> module.compute()
100%|██████████| 3/3 [00:03<00:00,  1.30s/it]
Added data ...
Start computing ...
Finished!
{'scores': {'MOTORBOAT': array([[0. , 0.25889117],
[0. , 0.79029936],
[0. , 0.3275862 ]]),
'FAR_AWAY_OBJECT': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'SAILING_BOAT_WITH_CLOSED_SAILS': array([[0. , 0.35410052],
[0. , 0.75246359],
[0. , 0.47058824]]),
'SHIP': array([[0. , 0.47743301],
[0. , 0.90181785],
[0. , 0.52941179]]),
'WATERCRAFT': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'SPHERICAL_BUOY': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'FLOTSAM': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'SAILING_BOAT_WITH_OPEN_SAILS': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'CONTAINER': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'PILLAR_BUOY': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'AERIAL_ANIMAL': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'HUMAN_IN_WATER': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'WOODEN_LOG': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'MARITIME_ANIMAL': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'WATER': array([[0. , 0.96737861],
[0. , 0.96737861],
[0. , 1. ]]),
'SKY': array([[0. , 0.93018024],
[0. , 0.93018024],
[0. , 1. ]]),
'LAND': array([[0. , 0.53552331],
[0. , 0.84447907],
[0. , 0.63414633]]),
'CONSTRUCTION': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'OWN_BOAT': array([[0., 0.],
[0., 0.],
[0., 0.]]),
'ALL': array([[0. , 0.18544773],
[0. , 0.27297993],
[0. , 0.20851224]])},
'numbers': {'MOTORBOAT': array([[ 0. , 19. ],
[ 6. , 18. ],
[10. , 60. ],
[ 0. , 15.01568782]]),
'FAR_AWAY_OBJECT': array([[0., 0.],
[6., 6.],
[9., 0.],
[0., 0.]]),
'SAILING_BOAT_WITH_CLOSED_SAILS': array([[0. , 4. ],
[0. , 6. ],
[0. , 3. ],
[0. , 3.00985438]]),
'SHIP': array([[ 0. , 9. ],
[ 0. , 2. ],
[ 1. , 14. ],
[ 0. , 8.11636066]]),
'WATERCRAFT': array([[ 0., 0.],
[ 1., 9.],
[11., 1.],
[ 0., 0.]]),
'SPHERICAL_BUOY': array([[ 0., 0.],
[ 1., 3.],
[36., 0.],
[ 0., 0.]]),
'FLOTSAM': array([[0., 0.],
[0., 0.],
[7., 4.],
[0., 0.]]),
'SAILING_BOAT_WITH_OPEN_SAILS': array([[0., 0.],
[0., 5.],
[0., 0.],
[0., 0.]]),
'CONTAINER': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'PILLAR_BUOY': array([[0., 0.],
[0., 0.],
[5., 3.],
[0., 0.]]),
'AERIAL_ANIMAL': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'HUMAN_IN_WATER': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'WOODEN_LOG': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'MARITIME_ANIMAL': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'WATER': array([[ 0. , 24. ],
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 23.21708667]]),
'SKY': array([[ 0. , 24. ],
[ 0. , 0. ],
[ 0. , 0. ],
[ 0. , 22.32432568]]),
'LAND': array([[ 0. , 13. ],
[ 0. , 7. ],
[ 0. , 8. ],
[ 0. , 10.97822797]]),
'CONSTRUCTION': array([[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]]),
'OWN_BOAT': array([[ 0., 0.],
[ 0., 0.],
[ 0., 10.],
[ 0., 0.]]),
'ALL': array([[ 0. , 93. ],
[ 14. , 56. ],
[ 79. , 103. ],
[ 0. , 82.66154319]])}}
```
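The returned arrays can be indexed directly. A minimal sketch (the variable names are illustrative; the row and column layout is described under Output Values below):
```python
results = module.compute()

# scores: rows are PQ, SQ, RQ (with split_sq_rq=True); one column per area range
pq_ship_large = results["scores"]["SHIP"][0, 1]  # PQ of SHIP in the (100, 1e9) range

# numbers: rows are TP, FP, FN, IOU
tp, fp, fn, iou_sum = results["numbers"]["SHIP"][:, 1]
```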
## Metric Settings
The metric takes seven optional input parameters: __label2id__, __stuff__, __per_class__, __split_sq_rq__, __area_rng__, __class_agnostic__ and __method__. An example call with custom settings follows the list.
* `label2id: Dict[str, int]`: this dictionary maps string labels to an integer representation.
  If not provided, the following default mapping is used:
  ```python
  {'WATER': 0,
   'SKY': 1,
   'LAND': 2,
   'MOTORBOAT': 3,
   'FAR_AWAY_OBJECT': 4,
   'SAILING_BOAT_WITH_CLOSED_SAILS': 5,
   'SHIP': 6,
   'WATERCRAFT': 7,
   'SPHERICAL_BUOY': 8,
   'CONSTRUCTION': 9,
   'FLOTSAM': 10,
   'SAILING_BOAT_WITH_OPEN_SAILS': 11,
   'CONTAINER': 12,
   'PILLAR_BUOY': 13,
   'AERIAL_ANIMAL': 14,
   'HUMAN_IN_WATER': 15,
   'OWN_BOAT': 16,
   'WOODEN_LOG': 17,
   'MARITIME_ANIMAL': 18}
  ```
* `stuff: List[str]`: this list holds all string labels that count as stuff (amorphous regions that are evaluated as one segment per class, rather than as individual instances).
  If not provided, the following default is used:
  ```python
  ["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]
  ```
* `per_class: bool = True`: By default, the results are split up per class.
  Setting this to False aggregates the results (averaging the _scores_, summing the _numbers_; see Output Values below for an explanation of _scores_ and _numbers_).
* `split_sq_rq: bool = True`: By default, the result is returned in three parts: the PQ score itself, plus its segmentation quality (SQ) and recognition quality (RQ) components.
  Setting this to False returns the PQ score only (PQ = SQ * RQ).
* `area_rng: List[Tuple[float]]`: The list holds all area ranges for which results are calculated.
  Each range is represented by a tuple, where the first element is the lower and the second the upper limit of the area range.
  Each value is a number of pixels, i.e. the total area of a mask.
  The parameter defaults to `[(0, 1e5 ** 2), (0 ** 2, 6 ** 2), (6 ** 2, 12 ** 2), (12 ** 2, 1e5 ** 2)]`.
* `class_agnostic: bool = False`: If True, all instance labels are merged into a single instance class (class-agnostic evaluation), while semantic classes are preserved.
* `method: Literal["iou", "hungarian"] = "hungarian"`: Controls the method used to match predictions to ground truths.
  If "iou", a prediction is matched with a ground truth only if their IoU > 0.5 (https://arxiv.org/pdf/1801.00868), which can lead to unintuitive results.
  If "hungarian", predictions are matched to ground truths by a Hungarian optimizer, which also allows matches with 0 < IoU <= 0.5 (https://arxiv.org/abs/2309.04887).
  Both methods result in a one-to-one mapping.
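A sketch of loading the metric with custom settings (the label set and values below are illustrative, not a recommendation):
```python
import evaluate

module = evaluate.load(
    "SEA-AI/PanopticQuality",
    label2id={"WATER": 0, "SKY": 1, "SHIP": 2},  # illustrative subset
    stuff=["WATER", "SKY"],     # labels treated as stuff
    per_class=False,            # aggregate over classes
    split_sq_rq=False,          # return PQ only, not (PQ, SQ, RQ)
    area_rng=[(0, 1e5 ** 2)],   # a single area range covering all sizes
    class_agnostic=False,
    method="hungarian",
)
```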
## Output Values
A dictionary containing the following keys:
* __scores__: This is a dictionary that contains a key for each label if `per_class == True`; otherwise it only contains the key _all_.
For each key, it holds an array whose rows are, in order: PQ, SQ and RQ. If `split_sq_rq == False`, the rows consist of PQ only.
The number of columns corresponds to the given area ranges, i.e. each column holds the results for objects of a certain size.
* __numbers__: This is a dictionary that contains a key for each label if `per_class == True`; otherwise it only contains the key _all_.
For each key, it holds an array whose rows are, in order: TP, FP, FN and IOU:
  * __TP__: number of true positive predictions
  * __FP__: number of false positive predictions
  * __FN__: number of false negative predictions
  * __IOU__: sum of the IoU of all TP predictions with their matched ground truths

  With these values, the final scores can be recomputed, as the sketch below shows. As for the scores, the number of columns corresponds to the given area ranges, i.e. each column holds the results for objects of a certain size.
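A minimal sketch, following the PQ definition from the paper referenced below (variable names are illustrative):
```python
import numpy as np

numbers = module.compute()["numbers"]   # see Basic Usage above
tp, fp, fn, iou_sum = numbers["SHIP"]   # one value per area-range column

# SQ: average IoU over true positives; RQ: an F1-like detection score
sq = np.divide(iou_sum, tp, out=np.zeros_like(iou_sum), where=tp > 0)
denom = tp + 0.5 * fp + 0.5 * fn
rq = np.divide(tp, denom, out=np.zeros_like(denom), where=denom > 0)
pq = sq * rq                            # PQ = SQ * RQ
```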
## Further References
- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **Torchmetrics**: See the [torchmetrics PanopticQuality documentation](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html) for the underlying implementation.
- **Understanding Metrics**: The Panoptic Segmentation task, as well as Panoptic Quality as the evaluation metric, were introduced [in this paper](https://arxiv.org/pdf/1801.00868.pdf).
## Contribution
Your contributions are welcome! If you'd like to improve SEA-AI/PanopticQuality or add new features, please feel free to fork the repository, make your changes, and submit a pull request. |