NimaBoscarino committed
Commit 0984348
1 Parent(s): 9b6f929

Add check for evaluation + metrics

app.py CHANGED
@@ -8,6 +8,7 @@ from compliance_checks import (
     IntendedPurposeCheck,
     GeneralLimitationsCheck,
     ComputationalRequirementsCheck,
+    EvaluationCheck,
 )
 
 hf_writer = gr.HuggingFaceDatasetSaver(
@@ -23,6 +24,7 @@ checks = [
     IntendedPurposeCheck(),
     GeneralLimitationsCheck(),
     ComputationalRequirementsCheck(),
+    EvaluationCheck(),
 ]
 suite = ComplianceSuite(checks=checks)
 
bloom_card.py DELETED
@@ -1,147 +0,0 @@
-bloom_card = """\
-# Model Details
-
-BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks.
-
-## Basics
-*This section provides information about the model type, version, license, funders, release date, developers, and contact information.*
-*It is useful for anyone who wants to reference the model.*
-
-**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
-
-*All collaborators are either volunteers or have an agreement with their employer. (Further breakdown of participants forthcoming.)*
-
-**Model Type:** Transformer-based Language Model
-
-**Checkpoints format:** `transformers` (Megatron-DeepSpeed format available [here](https://huggingface.co/bigscience/bloom-optimizer-states))
-
-**Version:** 1.0.0
-
-**Languages:** Multiple; see [training data](#training-data)
-
-**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license) / [article and FAQ](https://bigscience.huggingface.co/blog/the-bigscience-rail-license))
-
-**Release Date Estimate:** Monday, 11.July.2022
-
-**Send Questions to:** [email protected]
-
-**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
-
-**Funded by:**
-
-* The French government.
-
-* Hugging Face ([website](https://huggingface.co)).
-
-* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
-
-## Intended Use
-
-This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
-
-### Direct Use
-
-- Text generation
-
-- Exploring characteristics of language generated by a language model
-
-- Examples: Cloze tests, counterfactuals, generations with reframings
-
-### Downstream Use
-
-- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
-
-### Out-of-Scope Use
-
-Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct.
-
-Out-of-scope Uses Include:
-
-- Usage in biomedical domains, political and legal domains, or finance domains
-
-- Usage for evaluating or scoring individuals, such as for employment, education, or credit
-
-- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
-
-#### Misuse
-
-Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
-
-- Spam generation
-
-- Disinformation and influence operations
-
-- Disparagement and defamation
-
-- Harassment and abuse
-
-- [Deception](#deception)
-
-- Unconsented impersonation and imitation
-
-- Unconsented surveillance
-
-- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
-
-## Bias, Risks, and Limitations
-*This section identifies foreseeable harms and misunderstandings.*
-
-Model may:
-
-- Overrepresent some viewpoints and underrepresent others
-
-- Contain stereotypes
-
-- Contain [personal information](#personal-data-and-information)
-
-- Generate:
-
-- Hateful, abusive, or violent language
-
-- Discriminatory or prejudicial language
-
-- Content that may not be appropriate for all settings, including sexual content
-
-- Make errors, including producing incorrect information as if it were factual
-
-- Generate irrelevant or repetitive outputs
-
-- Induce users into attributing human traits to it, such as sentience or consciousness
-
-## Technical Specifications
-*This section includes details about the model objective and architecture, and the compute infrastructure.*
-*It is useful for people interested in model development.*
-
-### Compute infrastructure
-Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
-
-#### Hardware
-
-* 384 A100 80GB GPUs (48 nodes)
-
-* Additional 32 A100 80GB GPUs (4 nodes) in reserve
-
-* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
-
-* CPU: AMD
-
-* CPU memory: 512GB per node
-
-* GPU memory: 640GB per node
-
-* Inter-node connect: Omni-Path Architecture (OPA)
-
-* NCCL-communications network: a fully dedicated subnet
-
-* Disc IO network: shared network with other types of nodes
-
-#### Software
-
-* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
-
-* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
-
-* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
-
-* apex ([Github link](https://github.com/NVIDIA/apex))
-"""
compliance_checks/__init__.py CHANGED
@@ -13,4 +13,8 @@ from compliance_checks.general_limitations import (
 
 from compliance_checks.computational_requirements import (
     ComputationalRequirementsCheck, ComputationalRequirementsResult,
-)
+)
+
+from compliance_checks.evaluation import (
+    EvaluationCheck, EvaluationResult,
+)
compliance_checks/evaluation.py ADDED
@@ -0,0 +1,95 @@
+from compliance_checks.base import ComplianceResult, ComplianceCheck, walk_to_next_heading
+from bs4 import BeautifulSoup
+
+
+class EvaluationResult(ComplianceResult):
+    name = "Evaluation and Metrics"
+
+    def __init__(
+        self,
+        *args,
+        **kwargs,
+    ):
+        super().__init__(*args, **kwargs)
+
+    def __eq__(self, other):
+        if isinstance(other, EvaluationResult):
+            if super().__eq__(other):
+                try:
+                    return True
+                except AssertionError:
+                    return False
+        else:
+            return False
+
+    def to_string(self):
+        if self.status:
+            return """\
+It looks like this model card has some documentation for how the model was evaluated! We look for this by \
+searching for headings that say things like:
+- Evaluation
+- Evaluation results
+- Benchmarks
+- Results
+"""
+        else:
+            return """\
+We weren't able to find a section in this model card that reports the evaluation process, but it's easy to \
+add one! You can add the following section to the model card and, once you fill in the \
+`[More Information Needed]` sections, the "Evaluation and Metrics" check should pass 🤗
+
+```md
+## Evaluation
+
+<!-- This section describes the evaluation protocols and provides the results. -->
+
+### Testing Data, Factors & Metrics
+
+#### Testing Data
+
+<!-- This should link to a Data Card if possible. -->
+
+[More Information Needed]
+
+#### Factors
+
+<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+[More Information Needed]
+
+#### Metrics
+
+<!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+[More Information Needed]
+
+### Results
+
+[More Information Needed]
+
+#### Summary
+
+[More Information Needed]
+```
+"""
+
+
+class EvaluationCheck(ComplianceCheck):
+    name = "Evaluation and Metrics"
+
+    def run_check(self, card: BeautifulSoup):
+        combos = [
+            ("h1", "Evaluation"), ("h2", "Evaluation"),
+            ("h2", "Evaluation results"), ("h2", "Evaluation Results"),
+            ("h2", "Benchmarks"),
+            ("h2", "Results"),
+        ]
+
+        for hX, heading in combos:
+            purpose_check = walk_to_next_heading(card, hX, heading)
+            if purpose_check:
+                return EvaluationResult(
+                    status=True,
+                )
+
+        return EvaluationResult()
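
For context, a minimal sketch (not part of this commit) of running the new check by hand, mirroring how the tests below build their fixtures with `markdown` and BeautifulSoup; the example card text here is invented purely for illustration:

```python
# Sketch only: exercise EvaluationCheck outside the ComplianceSuite, assuming the
# package layout introduced in this commit and the markdown/bs4 dependencies the
# tests already rely on.
import markdown
from bs4 import BeautifulSoup

from compliance_checks.evaluation import EvaluationCheck

card_text = """\
# Some Model

## Evaluation results

When fine-tuned on downstream tasks, the model achieves the following results: ...
"""

card_html = markdown.markdown(card_text)            # model cards are checked as rendered HTML
card_soup = BeautifulSoup(card_html, features="html.parser")

result = EvaluationCheck().run_check(card_soup)     # matches the ("h2", "Evaluation results") combo
print(result.status)        # expected: True, since an "Evaluation results" h2 heading is present
print(result.to_string())   # human-readable explanation shown by the app
```
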
tests/conftest.py CHANGED
@@ -2,45 +2,42 @@ from os import listdir
 from os.path import isfile, join
 from pathlib import Path
 
-
-# TODO: I have the option of maybe making a check for accuracy/metrics?
-
 # Note, some of these are marked as FALSE instead of TRUE because the
 # information is hidden somewhere non-standard, e.g. described in prose
 
 # Intended Purpose, General Limitations, Computational Requirements
 expected_check_results = {
-    "albert-base-v2": [True, True, False],
-    "bert-base-cased": [True, True, False],
-    "bert-base-multilingual-cased": [True, True, False],
-    "bert-base-uncased": [True, True, False],
-    "big-science___bloom": [True, True, True],
-    "cl-tohoku___bert-base-japanese-whole-word-masking": [False, False, False],
-    "distilbert-base-cased-distilled-squad": [True, True, True],
-    "distilbert-base-uncased": [True, True, False],
-    "distilbert-base-uncased-finetuned-sst-2-english": [True, True, False],
-    "distilroberta-base": [True, True, False],
-    "emilyalsentzer___Bio_ClinicalBERT": [False, False, False],
-    "facebook___bart-large-mnli": [False, False, False],
-    "google___electra-base-discriminator": [False, False, False],
-    "gpt2": [True, True, False],
-    "Helsinki-NLP___opus-mt-en-es": [False, False, False],
-    "jonatasgrosman___wav2vec2-large-xlsr-53-english": [False, False, False],
-    "microsoft___layoutlmv3-base": [False, False, False],
-    "openai___clip-vit-base-patch32": [True, True, False],
-    "openai___clip-vit-large-patch14": [True, True, False],
-    "philschmid___bart-large-cnn-samsum": [False, False, False],
-    "prajjwal1___bert-tiny": [False, False, False],
-    "roberta-base": [True, True, False],
-    "roberta-large": [True, True, False],
-    "runwayml___stable-diffusion-v1-5": [True, True, False],
-    "sentence-transformers___all-MiniLM-L6-v2": [True, False, False],
-    "StanfordAIMI___stanford-deidentifier-base": [False, False, False],
-    "t5-base": [True, False, False],
-    "t5-small": [True, False, False],
-    "xlm-roberta-base": [True, True, False],
-    "xlm-roberta-large": [True, True, False],
-    "yiyanghkust___finbert-tone": [False, False, False],
+    "albert-base-v2": [True, True, False, True],
+    "bert-base-cased": [True, True, False, True],
+    "bert-base-multilingual-cased": [True, True, False, False],
+    "bert-base-uncased": [True, True, False, True],
+    "big-science___bloom": [True, True, True, True],
+    "cl-tohoku___bert-base-japanese-whole-word-masking": [False, False, False, False],
+    "distilbert-base-cased-distilled-squad": [True, True, True, True],
+    "distilbert-base-uncased": [True, True, False, True],
+    "distilbert-base-uncased-finetuned-sst-2-english": [True, True, False, False],
+    "distilroberta-base": [True, True, False, True],
+    "emilyalsentzer___Bio_ClinicalBERT": [False, False, False, False],
+    "facebook___bart-large-mnli": [False, False, False, False],
+    "google___electra-base-discriminator": [False, False, False, False],
+    "gpt2": [True, True, False, True],
+    "Helsinki-NLP___opus-mt-en-es": [False, False, False, True],
+    "jonatasgrosman___wav2vec2-large-xlsr-53-english": [False, False, False, True],
+    "microsoft___layoutlmv3-base": [False, False, False, False],
+    "openai___clip-vit-base-patch32": [True, True, False, False],
+    "openai___clip-vit-large-patch14": [True, True, False, False],
+    "philschmid___bart-large-cnn-samsum": [False, False, False, True],
+    "prajjwal1___bert-tiny": [False, False, False, False],
+    "roberta-base": [True, True, False, True],
+    "roberta-large": [True, True, False, True],
+    "runwayml___stable-diffusion-v1-5": [True, True, False, True],
+    "sentence-transformers___all-MiniLM-L6-v2": [True, False, False, True],
+    "StanfordAIMI___stanford-deidentifier-base": [False, False, False, False],
+    "t5-base": [True, False, False, True],
+    "t5-small": [True, False, False, True],
+    "xlm-roberta-base": [True, True, False, False],
+    "xlm-roberta-large": [True, True, False, False],
+    "yiyanghkust___finbert-tone": [False, False, False, False],
 }
 
 
@@ -49,18 +46,9 @@ def pytest_generate_tests(metafunc):
     files = [f"cards/{f}" for f in listdir("cards") if isfile(join("cards", f))]
     cards = [Path(f).read_text() for f in files]
     model_ids = [f.replace("cards/", "").replace(".md", "") for f in files]
-
-    # TODO: IMPORTANT – remove the default [False, False, False]
-    expected_results = [expected_check_results.get(m, [False, False, False]) for m, c in zip(model_ids, cards)]
+    expected_results = [expected_check_results.get(m) for m, c in zip(model_ids, cards)]
 
     metafunc.parametrize(
         ["real_model_card", "expected_check_results"],
         list(map(list, zip(cards, expected_results)))
    )
-
-    # rows = read_csvrows()
-    # if 'row' in metafunc.fixturenames:
-    #     metafunc.parametrize('row', rows)
-    # if 'col' in metafunc.fixturenames:
-    #     metafunc.parametrize('col', list(itertools.chain(*rows)))
-
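
A note on the expected-results lists above: each entry now carries four booleans. Judging by the order the checks are registered in app.py and in the end-to-end test, the fourth column presumably corresponds to the new EvaluationCheck; a small sketch of that positional mapping (the variable names are illustrative only):

```python
# Sketch only: the column order is inferred from the check registration order in
# app.py / tests/test_compliance_checks.py; EvaluationCheck being fourth is an
# assumption based on this commit appending it last.
check_order = [
    "IntendedPurposeCheck",
    "GeneralLimitationsCheck",
    "ComputationalRequirementsCheck",
    "EvaluationCheck",  # added in this commit
]

albert_expectations = [True, True, False, True]  # value for "albert-base-v2" above
print(dict(zip(check_order, albert_expectations)))
# {'IntendedPurposeCheck': True, 'GeneralLimitationsCheck': True,
#  'ComputationalRequirementsCheck': False, 'EvaluationCheck': True}
```
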
tests/test_compliance_checks.py CHANGED
@@ -6,6 +6,7 @@ from compliance_checks import (
     IntendedPurposeCheck,
     GeneralLimitationsCheck,
     ComputationalRequirementsCheck,
+    EvaluationCheck,
 )
 
 
@@ -60,7 +61,8 @@ def test_end_to_end_compliance_suite(real_model_card, expected_check_results):
     suite = ComplianceSuite(checks=[
         IntendedPurposeCheck(),
         GeneralLimitationsCheck(),
-        ComputationalRequirementsCheck()
+        ComputationalRequirementsCheck(),
+        EvaluationCheck(),
     ])
 
     results = suite.run(real_model_card)
tests/test_evaluation_check.py ADDED
@@ -0,0 +1,131 @@
+import pytest
+
+import markdown
+from bs4 import BeautifulSoup
+from compliance_checks.evaluation import (
+    EvaluationCheck, EvaluationResult,
+)
+
+empty_template = """\
+## Evaluation
+
+<!-- This section describes the evaluation protocols and provides the results. -->
+
+### Testing Data, Factors & Metrics
+
+#### Testing Data
+
+<!-- This should link to a Data Card if possible. -->
+
+[More Information Needed]
+
+#### Factors
+
+<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+[More Information Needed]
+
+#### Metrics
+
+<!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+[More Information Needed]
+
+### Results
+
+[More Information Needed]
+
+#### Summary
+
+"""
+model_card_template = """\
+## Evaluation
+
+Some info...
+
+### Testing Data, Factors & Metrics
+
+#### Testing Data
+
+Some information here
+
+#### Factors
+
+Etc...
+
+#### Metrics
+
+There are some metrics listed out here
+
+### Results
+
+And some results
+
+#### Summary
+
+Summarizing everything up!
+"""
+albert = """\
+# ALBERT Base v2
+
+## Evaluation results
+
+When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
+"""
+helsinki = """\
+### eng-spa
+
+## Benchmarks
+
+| testset | BLEU | chr-F |
+|-----------------------|-------|-------|
+| newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 |
+| news-test2008-engspa.eng.spa | 29.7 | 0.564 |
+| newstest2009-engspa.eng.spa | 30.2 | 0.578 |
+| newstest2010-engspa.eng.spa | 36.9 | 0.620 |
+| newstest2011-engspa.eng.spa | 38.2 | 0.619 |
+| newstest2012-engspa.eng.spa | 39.0 | 0.625 |
+| newstest2013-engspa.eng.spa | 35.0 | 0.598 |
+| Tatoeba-test.eng.spa | 54.9 | 0.721 |
+"""
+phil = """\
+## Results
+
+| key | value |
+| --- | ----- |
+| eval_rouge1 | 42.621 |
+| eval_rouge2 | 21.9825 |
+| eval_rougeL | 33.034 |
+| eval_rougeLsum | 39.6783 |
+"""
+runway = """\
+## Evaluation Results
+Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
+"""
+
+success_result = EvaluationResult(
+    status=True
+)
+
+
+@pytest.mark.parametrize("card", [
+    model_card_template,
+    albert,
+    helsinki,
+    phil,
+    runway,
+])
+def test_run_checks(card):
+    model_card_html = markdown.markdown(card)
+    card_soup = BeautifulSoup(model_card_html, features="html.parser")
+
+    results = EvaluationCheck().run_check(card_soup)
+
+    assert results == success_result
+
+
+def test_fail_on_empty_template():
+    model_card_html = markdown.markdown(empty_template)
+    card_soup = BeautifulSoup(model_card_html, features="html.parser")
+    results = EvaluationCheck().run_check(card_soup)
+    assert results == EvaluationResult()
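
If the repository follows the standard pytest layout implied by conftest.py, the new module can presumably be run on its own; a sketch using pytest's Python entry point (the test path and the working-directory assumption about the `cards/` fixtures come from the file names in this commit, not from documented usage):

```python
# Sketch only: programmatic pytest invocation. pytest.main accepts the same
# arguments as the command line and returns an exit code; it assumes the
# cards/ directory read by conftest.py is reachable from the working directory.
import pytest

exit_code = pytest.main(["-q", "tests/test_evaluation_check.py"])
raise SystemExit(exit_code)
```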