---
title: panoptic-quality
tags:
  - evaluate
  - metric
description: PanopticQuality score
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🖼️
---

# SEA-AI/PanopticQuality

This Hugging Face metric uses `seametrics.segmentation.PanopticQuality` under the hood to compute a panoptic quality score. It is a wrapper around the torchmetrics class `torchmetrics.detection.PanopticQuality`.

## Getting Started

To get started with PanopticQuality, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` library and on `seametrics` with its `segmentation` extra for metric calculation and integration with FiftyOne datasets.
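For example (a minimal install sketch, assuming `seametrics` is pip-installable in your environment; if it is distributed via the SEA-AI Git repository instead, point pip at that source):

```bash
pip install evaluate
pip install "seametrics[segmentation]"
```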

## Basic Usage

```python
>>> import evaluate
>>> from seametrics.payload.processor import PayloadProcessor
>>> MODEL_FIELD = ["maskformer-27k-100ep"]
>>> payload = PayloadProcessor("SAILING_PANOPTIC_DATASET_QA",
...                            gt_field="ground_truth_det",
...                            models=MODEL_FIELD,
...                            sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
...                            excluded_classes=[""]).payload
>>> module = evaluate.load("SEA-AI/PanopticQuality")
>>> module.add_payload(payload, model_name=MODEL_FIELD[0])
>>> module.compute()
100%|██████████| 3/3 [00:03<00:00,  1.30s/it]
Added data ...
Start computing ...
Finished!
{'scores': {'MOTORBOAT': [0.18632257426639526,
   0.698709617058436,
   0.2666666805744171],
  'FAR_AWAY_OBJECT': [0.0, 0.0, 0.0],
  'SAILING_BOAT_WITH_CLOSED_SAILS': [0.0, 0.0, 0.0],
  'SHIP': [0.3621737026917471, 0.684105846616957, 0.529411792755127],
  'WATERCRAFT': [0.0, 0.0, 0.0],
  'SPHERICAL_BUOY': [0.0, 0.0, 0.0],
  'FLOTSAM': [0.0, 0.0, 0.0],
  'SAILING_BOAT_WITH_OPEN_SAILS': [0.0, 0.0, 0.0],
  'CONTAINER': [0.0, 0.0, 0.0],
  'PILLAR_BUOY': [0.0, 0.0, 0.0],
  'AERIAL_ANIMAL': [0.0, 0.0, 0.0],
  'HUMAN_IN_WATER': [0.0, 0.0, 0.0],
  'WOODEN_LOG': [0.0, 0.0, 0.0],
  'MARITIME_ANIMAL': [0.0, 0.0, 0.0],
  'WATER': [0.9397601008415222, 0.9397601008415222, 1.0],
  'SKY': [0.9674496332804362, 0.9674496332804362, 1.0],
  'LAND': [0.30757412078761204, 0.8304501533508301, 0.37037035822868347],
  'CONSTRUCTION': [0.0, 0.0, 0.0],
  'OWN_BOAT': [0.0, 0.0, 0.0],
  'ALL': [0.14543579641409013, 0.21686712374464112, 0.16665520166095935]},
 'numbers': {'MOTORBOAT': [6, 15, 18, 4.1922577023506165],
  'FAR_AWAY_OBJECT': [0, 8, 9, 0.0],
  'SAILING_BOAT_WITH_CLOSED_SAILS': [0, 2, 0, 0.0],
  'SHIP': [9, 1, 15, 6.156952619552612],
  'WATERCRAFT': [0, 9, 12, 0.0],
  'SPHERICAL_BUOY': [0, 4, 22, 0.0],
  'FLOTSAM': [0, 0, 1, 0.0],
  'SAILING_BOAT_WITH_OPEN_SAILS': [0, 6, 0, 0.0],
  'CONTAINER': [0, 0, 0, 0.0],
  'PILLAR_BUOY': [0, 0, 9, 0.0],
  'AERIAL_ANIMAL': [0, 0, 0, 0.0],
  'HUMAN_IN_WATER': [0, 0, 0, 0.0],
  'WOODEN_LOG': [0, 0, 0, 0.0],
  'MARITIME_ANIMAL': [0, 0, 0, 0.0],
  'WATER': [15, 0, 0, 14.096401512622833],
  'SKY': [15, 0, 0, 14.511744499206543],
  'LAND': [5, 9, 8, 4.15225076675415],
  'CONSTRUCTION': [0, 0, 0, 0.0],
  'OWN_BOAT': [0, 0, 8, 0.0],
  'ALL': [50, 54, 102, 43.109607100486755]}}
```

## Metric Settings

The metric takes four optional input parameters: `label2id`, `stuff`, `per_class`, and `split_sq_rq` (see the configuration sketch after this list).

- `label2id: Dict[str, int]`: maps string labels to an integer representation. If not provided, the following default is used: `{'WATER': 0, 'SKY': 1, 'LAND': 2, 'MOTORBOAT': 3, 'FAR_AWAY_OBJECT': 4, 'SAILING_BOAT_WITH_CLOSED_SAILS': 5, 'SHIP': 6, 'WATERCRAFT': 7, 'SPHERICAL_BUOY': 8, 'CONSTRUCTION': 9, 'FLOTSAM': 10, 'SAILING_BOAT_WITH_OPEN_SAILS': 11, 'CONTAINER': 12, 'PILLAR_BUOY': 13, 'AERIAL_ANIMAL': 14, 'HUMAN_IN_WATER': 15, 'OWN_BOAT': 16, 'WOODEN_LOG': 17, 'MARITIME_ANIMAL': 18}`

- `stuff: List[str]`: holds all string labels that belong to stuff. If not provided, the following default is used: `["WATER", "SKY", "LAND", "CONSTRUCTION", "ICE", "OWN_BOAT"]`

- `per_class: bool = True`: by default, the results are reported per class. Setting this to `False` aggregates the results (scores are averaged, numbers are summed; see below for an explanation of `scores` and `numbers`).

- `split_sq_rq: bool = True`: by default, the PQ score is returned in three parts: the PQ score itself, plus its segmentation quality (SQ) and recognition quality (RQ) components. Setting this to `False` returns the PQ score only (PQ = SQ * RQ).
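A minimal configuration sketch: keyword arguments passed to `evaluate.load` are forwarded to the metric's constructor. The label set below is illustrative only, not the default shown above.

```python
import evaluate

# Illustrative settings only; adjust label2id and stuff to your dataset.
module = evaluate.load(
    "SEA-AI/PanopticQuality",
    label2id={"WATER": 0, "SKY": 1, "LAND": 2, "SHIP": 3},  # custom label mapping
    stuff=["WATER", "SKY", "LAND"],                         # labels treated as stuff
    per_class=False,    # aggregate over all classes instead of per-class results
    split_sq_rq=False,  # return PQ only instead of [PQ, SQ, RQ]
)
```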

## Output Values

`module.compute()` returns a dictionary containing the following keys:

- `scores`: a dictionary that contains one key per label if `per_class == True`; otherwise it only contains the key `ALL`. For each key, it holds a list of scores in the following order: PQ, SQ, RQ. If `split_sq_rq == False`, the list consists of PQ only.
- `numbers`: a dictionary that contains one key per label if `per_class == True`; otherwise it only contains the key `ALL`. For each key, it holds a list of four elements, `TP`, `FP`, `FN`, and `IOU`:
  - `TP`: number of true positive predictions
  - `FP`: number of false positive predictions
  - `FN`: number of false negative predictions
  - `IOU`: sum of the IoU values of all TP predictions with their matched ground truth

From these values, the final scores can be recomputed: SQ = IOU / TP, RQ = TP / (TP + 0.5 * FP + 0.5 * FN), and PQ = SQ * RQ, as the sketch below demonstrates.
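A quick sanity check, using the `SHIP` entry from the example output above and the standard panoptic quality formulas:

```python
# Recompute the SHIP scores from its 'numbers' entry: [TP, FP, FN, IOU].
tp, fp, fn, iou = 9, 1, 15, 6.156952619552612

sq = iou / tp                         # segmentation quality ~ 0.6841
rq = tp / (tp + 0.5 * fp + 0.5 * fn)  # recognition quality  ~ 0.5294
pq = sq * rq                          # panoptic quality     ~ 0.3622

# Matches the [PQ, SQ, RQ] list reported under 'scores' -> 'SHIP'
# (up to float32 rounding).
print([pq, sq, rq])
```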

## Further References

## Contribution

Your contributions are welcome! If you'd like to improve SEA-AI/PanopticQuality or add new features, please feel free to fork the repository, make your changes, and submit a pull request.