# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ISCO-08 Hierarchical Accuracy Measure."""

import evaluate
import datasets
import ham
import isco


# TODO: Add BibTeX citation
_CITATION = """
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
         and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
         and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
         Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
"""

_DESCRIPTION = r"""
The ISCO-08 Hierarchical Accuracy Measure is an implementation of the measure described in
[Functional Annotation of Genes Using Hierarchical Text Categorization](https://www.researchgate.net/publication/44046343_Functional_Annotation_of_Genes_Using_Hierarchical_Text_Categorization)
(Kiritchenko, Svetlana and Famili, Fazel. 2005), applied to the ISCO-08 taxonomy of the International Labour Organization.

1. The measure gives credit to partially correct classification: misclassification into node $I$ (ISCO unit group "1120")
when the correct category is $G$ (ISCO unit group "1111") should be penalized less than misclassification into node $D$
(e.g., ISCO unit group "1211"), since $I$ lies in the same subgraph (ISCO sub-major group "11") as $G$ while $D$ does not.
2. The measure punishes distant errors more heavily:
    1. it gives a higher score for correctly classifying one level down compared to staying at the parent node, e.g. classification into node $E$ (ISCO minor group "111") is better than classification into its parent $C$ (ISCO sub-major group "11"), since $E$ is closer to the correct category $G$;
    2. it gives a lower score for incorrectly classifying one level down compared to staying at the parent node, e.g. classification into node $F$ (ISCO minor group "112") is worse than classification into its parent $C$, since $F$ is farther away from $G$.

The features described are accomplished by pairing hierarchical variants of precision ($hP$) and recall ($hR$) to form a hierarchical F1 ($hF_\beta$) score, where each sample belongs not only to its class (e.g., a unit group level code) but also to all ancestors of the class in a hierarchical graph (i.e., the minor, sub-major, and major group level codes).

Hierarchical precision can be computed with:
$hP = \frac{|\hat{C}_i \cap \hat{C}'_i|}{|\hat{C}'_i|}$

Hierarchical recall can be computed with:
$hR = \frac{|\hat{C}_i \cap \hat{C}'_i|}{|\hat{C}_i|}$

where $\hat{C}_i$ is the reference class of sample $i$ extended with all of its ancestors and $\hat{C}'_i$ is the predicted class extended in the same way. For the example above, predicting "1120" when the reference is "1111" gives $hP = hR = \frac{1}{2}$, since each extended set contains four codes and the two sets share two of them ("1" and "11").

Combining the two values $hP$ and $hR$ into one hF-measure:
$hF_\beta = \frac{(\beta^2 + 1) \cdot hP \cdot hR}{\beta^2 \cdot hP + hR}, \quad \beta \in [0, +\infty)$

Note:
**TP**: True positive
**TN**: True negative
**FP**: False positive
**FN**: False negative
"""

_KWARGS_DESCRIPTION = """
Calculates hierarchical precision, hierarchical recall and hierarchical F1 given a list of reference codes and predicted codes from the ISCO-08 taxonomy by the International Labour Organization.

Args:
    - references (List[str]): List of ISCO-08 reference codes. Each reference code should be a single token, 4-digit ISCO-08 code string.
    - predictions (List[str]): List of machine predicted or human assigned ISCO-08 codes to score. Each prediction should be a single token, 4-digit ISCO-08 code string.

Returns:
    - accuracy (`float`): Exact-match accuracy over the 4-digit codes. Minimum possible value is 0. Maximum possible value is 1.0. A higher score means higher accuracy.
    - hierarchical_precision (`float`): Hierarchical precision score. Minimum possible value is 0. Maximum possible value is 1.0. A higher score means higher accuracy.
    - hierarchical_recall (`float`): Hierarchical recall score. Minimum possible value is 0. Maximum possible value is 1.0. A higher score means higher accuracy.
    - hierarchical_fmeasure (`float`): Hierarchical F1 score. Minimum possible value is 0. Maximum possible value is 1.0. A higher score means higher accuracy.

Examples:
    Example 1

    >>> hierarchical_accuracy_metric = evaluate.load("ham")
    >>> results = hierarchical_accuracy_metric.compute(references=["1111", "1112", "1113", "1114"], predictions=["1111", "1113", "1120", "1211"])
    >>> print(results)
    {'accuracy': 0.25, 'hierarchical_precision': 0.7142857142857143, 'hierarchical_recall': 0.5, 'hierarchical_fmeasure': 0.588235294117647}
"""

# TODO: Define external resources urls if needed
ISCO_CSV_MIRROR_URL = (
    "https://storage.googleapis.com/isco-public/tables/ISCO_structure.csv"
)
ILO_ISCO_CSV_URL = (
    "https://www.ilo.org/ilostat-files/ISCO/newdocs-08-2021/ISCO-08/ISCO-08%20EN.csv"
)


@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class ISCOHAM(evaluate.Metric):
    """The ISCO-08 Hierarchical Accuracy Measure"""

    def _info(self):
        # TODO: Specify the evaluate.EvaluationModuleInfo object
        return evaluate.MetricInfo(
            # This is the description that will appear on the modules page.
            module_type="metric",
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            # This defines the format of each prediction and reference
            features=datasets.Features(
                {
                    "predictions": datasets.Value("string"),
                    "references": datasets.Value("string"),
                }
            ),
            # TODO: Homepage of the module for documentation
            homepage="http://module.homepage",
            # TODO: Additional links to the codebase or references
            codebase_urls=["http://github.com/path/to/codebase/of/new_module"],
            reference_urls=["http://path.to.reference.url/new_module"],
        )

    def _download_and_prepare(self, dl_manager):
        """Download external ISCO-08 csv file from the ILO website for creating the hierarchy dictionary."""
        isco_csv = dl_manager.download_and_extract(ISCO_CSV_MIRROR_URL)
        print(f"ISCO CSV file downloaded")
        self.isco_hierarchy = isco.create_hierarchy_dict(isco_csv)
        print("ISCO hierarchy dictionary created")
        print(self.isco_hierarchy)
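
        # Note: the exact structure of `self.isco_hierarchy` is defined by the
        # local `isco.create_hierarchy_dict` helper. A hypothetical shape that
        # would be consistent with the description above is a mapping from each
        # 4-digit unit group code to its ancestor codes, e.g.
        # {"1111": {"1", "11", "111"}}, which `ham` could use to expand codes
        # into the extended sets behind hP and hR.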

    def _compute(self, predictions, references):
        """Returns the accuracy scores."""
        # Convert the inputs to strings
        predictions = [str(p) for p in predictions]
        references = [str(r) for r in references]

        # Calculate accuracy
        accuracy = sum(i == j for i, j in zip(predictions, references)) / len(
            predictions
        )
        print(f"Accuracy: {accuracy}")

        # Calculate hierarchical precision, recall and f-measure
        hierarchy = self.isco_hierarchy
        hP, hR = ham.calculate_hierarchical_precision_recall(
            references, predictions, hierarchy
        )
        hF = ham.hierarchical_f_measure(hP, hR)
        print(
            f"Hierarchical Precision: {hP}, Hierarchical Recall: {hR}, Hierarchical F-measure: {hF}"
        )

        return {
            "accuracy": accuracy,
            "hierarchical_precision": hP,
            "hierarchical_recall": hR,
            "hierarchical_fmeasure": hF,
        }
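

if __name__ == "__main__":
    # Minimal smoke test, assuming the script can be loaded as "ham" just like
    # in the docstring example above (the exact load string depends on where
    # the script is published). Requires network access to download the
    # ISCO-08 table.
    metric = evaluate.load("ham")
    scores = metric.compute(
        references=["1111", "1112", "1113", "1114"],
        predictions=["1111", "1113", "1120", "1211"],
    )
    print(scores)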