transZ committed on
Commit
e8bd09a
1 Parent(s): 3acc566

Testing upload

Files changed (2)
  1. README.md +21 -84
  2. test_parascore.py +68 -29
README.md CHANGED
@@ -11,117 +11,54 @@ tags:
  - evaluate
  - metric
  description: >-
- BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference
- sentences by cosine similarity.
- It has been shown to correlate with human judgment on sentence-level and system-level evaluation.
- Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language
- generation tasks.
 
- See the project's README at https://github.com/Tiiiger/bert_score#readme for more information.
  ---
 
- # Metric Card for BERT Score
 
  ## Metric description
 
- BERTScore is an automatic evaluation metric for text generation that computes a similarity score for each token in the candidate sentence with each token in the reference sentence. It leverages the pre-trained contextual embeddings from [BERT](https://huggingface.co/bert-base-uncased) models and matches words in candidate and reference sentences by cosine similarity.
-
- Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
 
  ## How to use
 
- BERTScore takes 3 mandatory arguments : `predictions` (a list of string of candidate sentences), `references` (a list of strings or list of list of strings of reference sentences) and either `lang` (a string of two letters indicating the language of the sentences, in [ISO 639-1 format](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) or `model_type` (a string specififying which model to use, according to the BERT specification). The default behavior of the metric is to use the suggested model for the target language when one is specified, otherwise to use the `model_type` indicated.
-
  ```python
  from evaluate import load
- bertscore = load("bertscore")
  predictions = ["hello there", "general kenobi"]
  references = ["hello there", "general kenobi"]
  results = bertscore.compute(predictions=predictions, references=references, lang="en")
  ```
 
- BERTScore also accepts multiple optional arguments:
-
- `num_layers` (int): The layer of representation to use. The default is the number of layers tuned on WMT16 correlation data, which depends on the `model_type` used.
-
- `verbose` (bool): Turn on intermediate status update. The default value is `False`.
-
- `idf` (bool or dict): Use idf weighting; can also be a precomputed idf_dict.
- `device` (str): On which the contextual embedding model will be allocated on. If this argument is `None`, the model lives on `cuda:0` if cuda is available.
- `nthreads` (int): Number of threads used for computation. The default value is `4`.
- `rescale_with_baseline` (bool): Rescale BERTScore with the pre-computed baseline. The default value is `False`.
- `batch_size` (int): BERTScore processing batch size, at least one of `model_type` or `lang`. `lang` needs to be specified when `rescale_with_baseline` is `True`.
- `baseline_path` (str): Customized baseline file.
-
- `use_fast_tokenizer` (bool): `use_fast` parameter passed to HF tokenizer. The default value is `False`.
-
  ## Output values
 
- BERTScore outputs a dictionary with the following values:
-
- `precision`: The [precision](https://huggingface.co/metrics/precision) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
-
- `recall`: The [recall](https://huggingface.co/metrics/recall) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
-
- `f1`: The [F1 score](https://huggingface.co/metrics/f1) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
-
- `hashcode:` The hashcode of the library.
-
- ### Values from popular papers
- The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) reported average model selection accuracies (Hits@1) on WMT18 hybrid systems for different language pairs, which ranged from 0.004 for `en<->tr` to 0.824 for `en<->de`.
 
- For more recent model performance, see the [metric leaderboard](https://paperswithcode.com/paper/bertscore-evaluating-text-generation-with).
-
- ## Examples
-
- Maximal values with the `distilbert-base-uncased` model:
-
- ```python
- from evaluate import load
- bertscore = load("bertscore")
- predictions = ["hello world", "general kenobi"]
- references = ["hello world", "general kenobi"]
- results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
- print(results)
- {'precision': [1.0, 1.0], 'recall': [1.0, 1.0], 'f1': [1.0, 1.0], 'hashcode': 'distilbert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
- ```
-
- Partial match with the `distilbert-base-uncased` model:
-
- ```python
- from evaluate import load
- bertscore = load("bertscore")
- predictions = ["hello world", "general kenobi"]
- references = ["goodnight moon", "the sun is shining"]
- results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
- print(results)
- {'precision': [0.7380737066268921, 0.5584042072296143], 'recall': [0.7380737066268921, 0.5889028906822205], 'f1': [0.7380737066268921, 0.5732481479644775], 'hashcode': 'bert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
- ```
 
  ## Limitations and bias
 
- The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) showed that BERTScore correlates well with human judgment on sentence-level and system-level evaluation, but this depends on the model and language pair selected.
-
- Furthermore, not all languages are supported by the metric -- see the [BERTScore supported language list](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for more information.
-
- Finally, calculating the BERTScore metric involves downloading the BERT model that is used to compute the score-- the default model for `en`, `roberta-large`, takes over 1.4GB of storage space and downloading it can take a significant amount of time depending on the speed of your internet connection. If this is an issue, choose a smaller model; for instance `distilbert-base-uncased` is 268MB. A full list of compatible models can be found [here](https://docs.google.com/spreadsheets/d/1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI/edit#gid=0).
 
  ## Citation
 
  ```bibtex
- @inproceedings{bert-score,
- title={BERTScore: Evaluating Text Generation with BERT},
- author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
- booktitle={International Conference on Learning Representations},
- year={2020},
- url={https://openreview.net/forum?id=SkeHuCVFDr}
  }
  ```
 
  ## Further References
- - [BERTScore Project README](https://github.com/Tiiiger/bert_score#readme)
- - [BERTScore ICLR 2020 Poster Presentation](https://iclr.cc/virtual_2020/poster_SkeHuCVFDr.html)
 
  - evaluate
  - metric
  description: >-
+ ParaScore is a new metric for scoring the performance of paraphrase generation tasks.
 
+ See the project at https://github.com/shadowkiller33/ParaScore for more information.
  ---
 
+ # Metric Card for ParaScore
 
  ## Metric description
 
+ ParaScore is a new metric for scoring the performance of paraphrase generation tasks.
 
  ## How to use
 
  ```python
  from evaluate import load
+ bertscore = load("transZ/test_parascore")
  predictions = ["hello there", "general kenobi"]
  references = ["hello there", "general kenobi"]
  results = bertscore.compute(predictions=predictions, references=references, lang="en")
  ```
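 
The `compute` call also accepts an optional `lang` argument (default `"en"`). Judging from the `_edit` and `_compute` methods committed in `test_parascore.py` below, passing `lang="zh"` removes whitespace before the character-level edit distance used for the diversity term; the Chinese sentences here are only illustrative:

```python
# Illustrative Chinese example; lang="zh" makes the diversity term ignore spaces.
results = bertscore.compute(
    predictions=["今天 天气 很好"],
    references=["今天 的 天气 不错"],
    lang="zh",
)
```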
 
  ## Output values
 
+ ParaScore outputs a dictionary with the following values:
 
+ `score`: the ParaScore value, which ranges from 0.0 to 1.0.
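 
The snippet below, adapted from the doctest in `test_parascore.py`, shows the shape of the returned dictionary; the numeric value is illustrative and depends on the underlying BERTScore model:

```python
results = bertscore.compute(
    predictions=["They have working for 6 months"],
    references=["They work for 6 months"],
    lang="en",
)
print(results)
# {'score': 0.85}  (illustrative value from the module's doctest)
```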
 
  ## Limitations and bias
 
+ The [original ParaScore paper](https://arxiv.org/abs/2202.08479) showed that ParaScore correlates well with human judgment on sentence-level and system-level evaluation, but this depends on the model and language pair selected.
 
  ## Citation
 
  ```bibtex
+ @article{Shen2022,
+ archivePrefix = {arXiv},
+ arxivId = {2202.08479},
+ author = {Shen, Lingfeng and Liu, Lemao and Jiang, Haiyun and Shi, Shuming},
+ journal = {EMNLP 2022 - 2022 Conference on Empirical Methods in Natural Language Processing, Proceedings},
+ eprint = {2202.08479},
+ month = {feb},
+ number = {1},
+ pages = {3178--3190},
+ title = {{On the Evaluation Metrics for Paraphrase Generation}},
+ url = {http://arxiv.org/abs/2202.08479},
+ year = {2022}
  }
  ```
 
  ## Further References
+ - [Official implementation](https://github.com/shadowkiller33/parascore_toolkit)
test_parascore.py CHANGED
@@ -15,54 +15,59 @@
 
  import evaluate
  import datasets
 
 
- # TODO: Add BibTeX citation
  _CITATION = """\
- @InProceedings{huggingface:module,
- title = {A great new module},
- authors={huggingface, Inc.},
- year={2020}
  }
  """
 
- # TODO: Add description of the module here
  _DESCRIPTION = """\
- This new module is designed to solve this great ML task and is crafted with a lot of care.
  """
 
 
  # TODO: Add description of the arguments of the module here
  _KWARGS_DESCRIPTION = """
- Calculates how good are predictions given some references, using certain scores
  Args:
      predictions: list of predictions to score. Each predictions
          should be a string with tokens separated by spaces.
      references: list of reference for each prediction. Each
          reference should be a string with tokens separated by spaces.
  Returns:
-     accuracy: description of the first score,
-     another_score: description of the second score,
  Examples:
      Examples should be written in doctest format, and should illustrate how
      to use the function.
 
-     >>> my_new_module = evaluate.load("my_new_module")
-     >>> results = my_new_module.compute(references=[0, 1], predictions=[0, 1])
      >>> print(results)
-     {'accuracy': 1.0}
  """
 
  # TODO: Define external resources urls if needed
- BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt"
 
 
  @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
  class test_parascore(evaluate.Metric):
-     """TODO: Short description of my evaluation module."""
 
      def _info(self):
-         # TODO: Specifies the evaluate.EvaluationModuleInfo object
          return evaluate.MetricInfo(
              # This is the description that will appear on the modules page.
              module_type="metric",
@@ -70,26 +75,60 @@ class test_parascore(evaluate.Metric):
              citation=_CITATION,
              inputs_description=_KWARGS_DESCRIPTION,
              # This defines the format of each prediction and reference
-             features=datasets.Features({
-                 'predictions': datasets.Value('int64'),
-                 'references': datasets.Value('int64'),
-             }),
              # Homepage of the module for documentation
-             homepage="http://module.homepage",
              # Additional links to the codebase or references
-             codebase_urls=["http://github.com/path/to/codebase/of/new_module"],
-             reference_urls=["http://path.to.reference.url/new_module"]
          )
 
      def _download_and_prepare(self, dl_manager):
          """Optional: download external resources useful to compute the scores"""
-         # TODO: Download external resources if needed
-         pass
 
-     def _compute(self, predictions, references):
          """Returns the scores"""
-         # TODO: Compute the different scores of the module
-         accuracy = sum(i == j for i, j in zip(predictions, references)) / len(predictions)
          return {
-             "accuracy": accuracy,
          }
 
 
  import evaluate
  import datasets
+ import nltk
 
 
  _CITATION = """\
+ @article{Shen2022,
+ archivePrefix = {arXiv},
+ arxivId = {2202.08479},
+ author = {Shen, Lingfeng and Liu, Lemao and Jiang, Haiyun and Shi, Shuming},
+ journal = {EMNLP 2022 - 2022 Conference on Empirical Methods in Natural Language Processing, Proceedings},
+ eprint = {2202.08479},
+ month = {feb},
+ number = {1},
+ pages = {3178--3190},
+ title = {{On the Evaluation Metrics for Paraphrase Generation}},
+ url = {http://arxiv.org/abs/2202.08479},
+ year = {2022}
  }
  """
 
  _DESCRIPTION = """\
+ ParaScore is a new metric for scoring the performance of paraphrase generation tasks.
  """
 
 
  # TODO: Add description of the arguments of the module here
  _KWARGS_DESCRIPTION = """
+ Calculates how good the paraphrase is
  Args:
      predictions: list of predictions to score. Each predictions
          should be a string with tokens separated by spaces.
      references: list of reference for each prediction. Each
          reference should be a string with tokens separated by spaces.
  Returns:
+     score: the ParaScore value, which ranges from 0.0 to 1.0,
  Examples:
      Examples should be written in doctest format, and should illustrate how
      to use the function.
 
+     >>> metrics = evaluate.load("transZ/test_parascore")
+     >>> results = metrics.compute(references=["They work for 6 months"], predictions=["They have working for 6 months"])
      >>> print(results)
+     {'score': 0.85}
  """
 
  # TODO: Define external resources urls if needed
+ BAD_WORDS_URL = "https://github.com/shadowkiller33/parascore_toolkit"
 
 
  @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
  class test_parascore(evaluate.Metric):
+     """ParaScore is a new metric for scoring the performance of paraphrase generation tasks."""
 
      def _info(self):
          return evaluate.MetricInfo(
              # This is the description that will appear on the modules page.
              module_type="metric",
              citation=_CITATION,
              inputs_description=_KWARGS_DESCRIPTION,
              # This defines the format of each prediction and reference
+             features=[
+                 datasets.Features(
+                     {
+                         "predictions": datasets.Value("string", id="sequence"),
+                         "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                     }
+                 ),
+                 datasets.Features(
+                     {
+                         "predictions": datasets.Value("string", id="sequence"),
+                         "references": datasets.Value("string", id="sequence"),
+                     }
+                 ),
+             ],
              # Homepage of the module for documentation
+             homepage="https://github.com/shadowkiller33/ParaScore",
              # Additional links to the codebase or references
+             codebase_urls=["https://github.com/shadowkiller33/ParaScore"],
+             reference_urls=["https://github.com/shadowkiller33/ParaScore"]
          )
 
      def _download_and_prepare(self, dl_manager):
          """Optional: download external resources useful to compute the scores"""
+         # ParaScore builds on BERTScore, so load it once here
+         self.score = evaluate.load('bertscore')
 
+     def _edit(self, x, y, lang='en'):
+         # Normalized character-level edit distance between x and y.
+         # Chinese text is compared without whitespace.
+         if lang == 'zh':
+             x = x.replace(" ", "")
+             y = y.replace(" ", "")
+         a = len(x)
+         b = len(y)
+         dis = nltk.edit_distance(x, y)
+         return dis / max(a, b)
+
+     def _diverse(self, cands, sources, lang='en'):
+         # Map each normalized edit distance to a diversity term in [-1, thresh]:
+         # distances at or above the threshold are capped, smaller ones are penalized.
+         diversity = []
+         thresh = 0.35
+         for x, y in zip(cands, sources):
+             div = self._edit(x, y, lang)
+             if div >= thresh:
+                 ss = thresh
+             elif div < thresh:
+                 ss = -1 + ((thresh + 1) / thresh) * div
+             diversity.append(ss)
+         # average the per-pair diversity terms over the batch
+         return sum(diversity) / len(diversity)
+
+     def _compute(self, predictions, references, lang='en'):
          """Returns the scores"""
+         # Semantic similarity: average BERTScore F1 over the batch
+         scores = self.score.compute(predictions=predictions, references=references, lang=lang)
+         bert_score = round(sum(scores['f1']) / len(scores['f1']), 2)
+         # Lexical diversity between predictions and references
+         diversity = self._diverse(predictions, references, lang)
+         final_score = bert_score + 0.05 * diversity
          return {
+             "score": round(final_score, 2),
          }
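 
The committed `_compute` combines an averaged BERTScore F1 with a small edit-distance diversity term. The standalone sketch below reproduces that diversity term outside the `evaluate` module, assuming only `nltk` is installed; the function names `edit_ratio` and `diversity_term` are illustrative and not part of the module's API:

```python
import nltk

THRESH = 0.35  # same threshold as in test_parascore.py


def edit_ratio(x: str, y: str) -> float:
    """Normalized character-level edit distance between two strings."""
    return nltk.edit_distance(x, y) / max(len(x), len(y))


def diversity_term(prediction: str, reference: str) -> float:
    """Map the edit ratio into the [-1, THRESH] range used by the metric."""
    div = edit_ratio(prediction, reference)
    if div >= THRESH:
        return THRESH
    return -1 + ((THRESH + 1) / THRESH) * div


# A copy-paste prediction gets the full -1 penalty; a reworded one scores higher.
print(diversity_term("They work for 6 months", "They work for 6 months"))  # -1.0
print(diversity_term("They have working for 6 months", "They work for 6 months"))
```

The final score then adds 0.05 times this diversity term to the batch-averaged BERTScore F1, as in `_compute` above.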