franzi2505 committed
Commit 3b74bcf • 1 parent: dd0aa4a

Update README.md

Files changed (1): README.md (+52 -46)
README.md CHANGED
@@ -29,55 +29,60 @@ To get started with PanopticQuality, make sure you have the necessary dependencies
  >>> models=MODEL_FIELD,
  >>> sequence_list=["Trip_55_Seq_2", "Trip_197_Seq_1", "Trip_197_Seq_68"],
  >>> excluded_classes=[""]).payload
- >>> module = evaluate.load("SEA-AI/PanopticQuality")
+ >>> module = evaluate.load("SEA-AI/PanopticQuality", area_rng=[(0, 100), (100, 1e9)])
  >>> module.add_payload(payload, model_name=MODEL_FIELD[0])
  >>> module.compute()
  100%|██████████| 3/3 [00:03<00:00, 1.30s/it]
  Added data ...
  Start computing ...
  Finished!
- {'scores': {'MOTORBOAT': [0.18632257426639526,
-   0.698709617058436,
-   0.2666666805744171],
-  'FAR_AWAY_OBJECT': [0.0, 0.0, 0.0],
-  'SAILING_BOAT_WITH_CLOSED_SAILS': [0.0, 0.0, 0.0],
-  'SHIP': [0.3621737026917471, 0.684105846616957, 0.529411792755127],
-  'WATERCRAFT': [0.0, 0.0, 0.0],
-  'SPHERICAL_BUOY': [0.0, 0.0, 0.0],
-  'FLOTSAM': [0.0, 0.0, 0.0],
-  'SAILING_BOAT_WITH_OPEN_SAILS': [0.0, 0.0, 0.0],
-  'CONTAINER': [0.0, 0.0, 0.0],
-  'PILLAR_BUOY': [0.0, 0.0, 0.0],
-  'AERIAL_ANIMAL': [0.0, 0.0, 0.0],
-  'HUMAN_IN_WATER': [0.0, 0.0, 0.0],
-  'WOODEN_LOG': [0.0, 0.0, 0.0],
-  'MARITIME_ANIMAL': [0.0, 0.0, 0.0],
-  'WATER': [0.9397601008415222, 0.9397601008415222, 1.0],
-  'SKY': [0.9674496332804362, 0.9674496332804362, 1.0],
-  'LAND': [0.30757412078761204, 0.8304501533508301, 0.37037035822868347],
-  'CONSTRUCTION': [0.0, 0.0, 0.0],
-  'OWN_BOAT': [0.0, 0.0, 0.0],
-  'ALL': [0.14543579641409013, 0.21686712374464112, 0.16665520166095935]},
-  'numbers': {'MOTORBOAT': [6, 15, 18, 4.1922577023506165],
-  'FAR_AWAY_OBJECT': [0, 8, 9, 0.0],
-  'SAILING_BOAT_WITH_CLOSED_SAILS': [0, 2, 0, 0.0],
-  'SHIP': [9, 1, 15, 6.156952619552612],
-  'WATERCRAFT': [0, 9, 12, 0.0],
-  'SPHERICAL_BUOY': [0, 4, 22, 0.0],
-  'FLOTSAM': [0, 0, 1, 0.0],
-  'SAILING_BOAT_WITH_OPEN_SAILS': [0, 6, 0, 0.0],
-  'CONTAINER': [0, 0, 0, 0.0],
-  'PILLAR_BUOY': [0, 0, 9, 0.0],
-  'AERIAL_ANIMAL': [0, 0, 0, 0.0],
-  'HUMAN_IN_WATER': [0, 0, 0, 0.0],
-  'WOODEN_LOG': [0, 0, 0, 0.0],
-  'MARITIME_ANIMAL': [0, 0, 0, 0.0],
-  'WATER': [15, 0, 0, 14.096401512622833],
-  'SKY': [15, 0, 0, 14.511744499206543],
-  'LAND': [5, 9, 8, 4.15225076675415],
-  'CONSTRUCTION': [0, 0, 0, 0.0],
-  'OWN_BOAT': [0, 0, 8, 0.0],
-  'ALL': [50, 54, 102, 43.109607100486755]}}
+ {'scores': {'MOTORBOAT': array([[0.        , 0.25889117],
+        [0.        , 0.79029936],
+        [0.        , 0.3275862 ]]),
+  'FAR_AWAY_OBJECT': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'SAILING_BOAT_WITH_CLOSED_SAILS': array([[0.        , 0.35410052],
+        [0.        , 0.75246359],
+        [0.        , 0.47058824]]),
+  'SHIP': array([[0.        , 0.47743301],
+        [0.        , 0.90181785],
+        [0.        , 0.52941179]]),
+  'WATERCRAFT': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'SPHERICAL_BUOY': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'FLOTSAM': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'SAILING_BOAT_WITH_OPEN_SAILS': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'CONTAINER': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'PILLAR_BUOY': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'AERIAL_ANIMAL': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'HUMAN_IN_WATER': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'WOODEN_LOG': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'MARITIME_ANIMAL': array([[0., 0.],
+        [0., 0.],
+        [0., 0.]]),
+  'WATER': array([[0.        , 0.96737861],
+        [0.        , 0.96737861],
+        [0.        , 1.        ]]),
+  'SKY': array([[0.        , 0.93018024],
+        [0.
  ```

  ## Metric Settings
@@ -118,15 +123,16 @@ The metric takes four optional input parameters: __label2id__, __stuff__, __per_
  ## Output Values
  A dictionary containing the following keys:
  * __scores__: This is a dictionary that contains a key for each label if `per_class == True`; otherwise it only contains the key _all_.
- For each key, it contains a list that holds the scores in the following order: PQ, SQ and RQ. If `split_sq_rq == False`, the list consists of PQ only.
+ For each key, it contains an array whose rows hold the scores in the following order: PQ, SQ and RQ. If `split_sq_rq == False`, the array holds PQ only.
+ The columns correspond to the given area ranges, i.e. each column holds the results for objects of a certain size (cf. the 3x2 arrays in the example above).
  * __numbers__: This is a dictionary that contains a key for each label if `per_class == True`; otherwise it only contains the key _all_.
- For each key, it contains a list that consists of four elements: TP, FP, FN and IOU:
+ For each key, it contains an array with the four entries TP, FP, FN and IOU:
  * __TP__: number of true positive predictions
  * __FP__: number of false positive predictions
  * __FN__: number of false negative predictions
  * __IOU__: sum of IOU of all TP predictions with ground truth

- With all these values, it is possible to calculate the final scores.
+ With all these values, it is possible to calculate the final scores. As for the scores, one axis of the array corresponds to the given area ranges, so the counts are reported per object-size range.

  ## Further References
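As a note on the __Output Values__ hunk above: the final scores can be recomputed from a `numbers` entry `[TP, FP, FN, IOU]` with the standard panoptic-quality formulas (SQ = IOU / TP, RQ = TP / (TP + FP/2 + FN/2), PQ = SQ · RQ), which the first example output satisfies exactly. The helper below is an illustrative sketch, not part of the metric's API:

```python
# Illustrative sketch: recompute PQ, SQ and RQ from a `numbers` entry
# [TP, FP, FN, IOU] as printed in the example output. Not part of the metric's API.

def panoptic_scores(tp, fp, fn, iou_sum):
    """Return (PQ, SQ, RQ); all three are 0 when a class has no true positives."""
    if tp == 0:
        return (0.0, 0.0, 0.0)
    sq = iou_sum / tp                     # segmentation quality: mean IoU of matched segments
    rq = tp / (tp + 0.5 * fp + 0.5 * fn)  # recognition quality: an F1-style detection score
    return (sq * rq, sq, rq)              # PQ = SQ * RQ

# 'MOTORBOAT': [6, 15, 18, 4.1922577023506165] from the first example output
pq, sq, rq = panoptic_scores(6, 15, 18, 4.1922577023506165)
print(round(pq, 4), round(sq, 4), round(rq, 4))  # 0.1863 0.6987 0.2667
```

This matches the 'MOTORBOAT' scores `[0.18632..., 0.69870..., 0.26666...]` reported in the first example.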