---
title: PanopticQuality
tags:
- evaluate
- metric
description: >-
  PanopticQuality score
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🖼️
---

# SEA-AI/PanopticQuality

This Hugging Face metric uses `seametrics.segmentation.PanopticQuality` under the hood to compute a panoptic quality score. That class is a wrapper around the torchmetrics class [`torchmetrics.detection.PanopticQuality`](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).

## Getting Started

To get started with PanopticQuality, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics[segmentation]` libraries (i.e. `seametrics` with its `segmentation` extra) for metric calculation and for integration with FiftyOne datasets.

### Basic Usage
```python
>>> import evaluate
>>> from seametrics.fo_utils.utils import fo_to_payload
>>> MODEL_FIELD = ["maskformer-27k-100ep"]
>>> payload = fo_to_payload("SAILING_PANOPTIC_DATASET_QA",
...                         gt_field="ground_truth_det",
...                         models=MODEL_FIELD,
...                         sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
...                         excluded_classes=[""])
>>> module = evaluate.load("SEA-AI/PanopticQuality")
>>> module.add_payload(payload, model_name=MODEL_FIELD[0])
>>> module.compute()
100%|██████████| 3/3 [00:03<00:00,  1.30s/it]
Added data ...
Start computing ...
Finished!
tensor(0.2082, dtype=torch.float64)
```

## Metric Settings
The metric takes two optional input parameters, __label2id__ and __stuff__ (see the sketch after the following list).

* `label2id: Dict[str, int]`: maps each string label to its integer representation.
    If not provided, the following default mapping is used:

        {'WATER': 0,
         'SKY': 1,
         'LAND': 2,
         'MOTORBOAT': 3,
         'FAR_AWAY_OBJECT': 4,
         'SAILING_BOAT_WITH_CLOSED_SAILS': 5,
         'SHIP': 6,
         'WATERCRAFT': 7,
         'SPHERICAL_BUOY': 8,
         'CONSTRUCTION': 9,
         'FLOTSAM': 10,
         'SAILING_BOAT_WITH_OPEN_SAILS': 11,
         'CONTAINER': 12,
         'PILLAR_BUOY': 13,
         'AERIAL_ANIMAL': 14,
         'HUMAN_IN_WATER': 15,
         'OWN_BOAT': 16,
         'WOODEN_LOG': 17,
         'MARITIME_ANIMAL': 18}

* `stuff: List[str]`: lists all string labels that are treated as *stuff* (amorphous regions such as water or sky, as opposed to countable *thing* instances).
    If not provided, the following default is used:

        ["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]

## Output Values
A single scalar between 0 and 1 is returned (as a `torch.Tensor`, as in the example output above), representing the PQ score; the higher the value, the better the panoptic segmentation quality.
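
For reference, the Panoptic Quality measure, as defined in the paper linked under Further References, is

$$
\mathrm{PQ} = \frac{\sum_{(p,\, g) \in \mathit{TP}} \mathrm{IoU}(p, g)}{|\mathit{TP}| + \tfrac{1}{2}|\mathit{FP}| + \tfrac{1}{2}|\mathit{FN}|}
$$

where $\mathit{TP}$, $\mathit{FP}$ and $\mathit{FN}$ are the matched, unmatched predicted and unmatched ground-truth segments, respectively, and a predicted segment is matched to a ground-truth segment when their IoU exceeds 0.5.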

## Further References

- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **Torchmetrics**: See the [`torchmetrics.detection.PanopticQuality` documentation](https://lightning.ai/docs/torchmetrics/stable/detection/panoptic_quality.html).
- **Understanding Metrics**: Both the panoptic segmentation task and the Panoptic Quality metric were introduced [in this paper](https://arxiv.org/pdf/1801.00868.pdf).

## Contribution

Your contributions are welcome! If you'd like to improve SEA-AI/PanopticQuality or add new features, please feel free to fork the repository, make your changes, and submit a pull request.