Datasets: GEM
Modalities: Text
Languages: English
Libraries: Datasets
Commit d636180 (1 parent: a03c2c9) committed by ratishsp

Commit message: updates in description

Files changed (1): mlb_data_to_text.json (+2 -2)

mlb_data_to_text.json CHANGED
@@ -123,8 +123,8 @@
       "metrics": [
         "Other: Other Metrics"
       ],
-      "original-evaluation": "We conducted two sets of human evaluation studies.\nIn our first study, we presented crowdworkers with sentences randomly selected from summaries along with their corresponding box score and play-by-play and asked them to count supported and contradicting facts. We did not require crowdworkers to be familiar with MLB. Instead, we provided a cheat sheet explaining the semantics of box score tables. In addition, we provided examples of sentences with supported/ contradicting facts. \nWe also conducted a second study to evaluate the quality of the generated summaries. We presented crowdworkers with a pair of summaries and asked them to choose the better one in terms of the three metrics:\n\u2022 Grammaticality (is the summary written in well-formed English?),\n\u2022 Coherence (is the summary well structured and well organized and does it have a natural ordering of the facts?) and\n\u2022 Conciseness (does the summary avoid unnecessary repetition including whole sentences, facts or phrases?).\nWe provided example summaries showcasing good and bad output. For this task, we required that the crowdworkers be able to comfortably comprehend NBA/ MLB game summaries. We elicited preferences with Best-Worst Scaling (Louviere and Wood-\nworth, 1991; Louviere et al., 2015), a method shown to be more reliable than rating scales. The score of a system is computed as the number of times it is rated best minus the number of times it is rated worst (Orme, 2009). The scores range from \u2212100\n(absolutely worst) to +100 (absolutely best).",
-      "model-abilities": "Automatic evaluation measure can evaluate the factuality and the fluency of the model output. The factuality is measured using an Information Extraction based evaluation approach introduced by Wiseman et al (2017). The fluency is measured using BLEU."
+      "original-evaluation": "We have reused the automatic metrics based on Information Extraction evaluation introduced by Wiseman et al (2017). For human evaluation, we conducted studies to evaluate the factuality, coherence, grammaticality and conciseness.",
+      "model-abilities": "Automatic evaluation measure can evaluate the factuality, content selection, content ordering and the fluency of the model output. The factuality, content selection and content ordering is measured using an Information Extraction based evaluation approach introduced by Wiseman et al (2017). The fluency is measured using BLEU."
     }
   },
   "considerations": {