griffin committed
Commit 731b87e
1 Parent(s): 7620651

Update README.md

Files changed (1)
  1. README.md +24 -5
README.md CHANGED
@@ -89,11 +89,30 @@ Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-s
 ### Citation Information
 
 ```
-@article{adams2023desired,
-  title={What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization},
-  author={Adams, Griffin and Nguyen, Bichlien H and Smith, Jake and Xia, Yingce and Xie, Shufang and Ostropolets, Anna and Deb, Budhaditya and Chen, Yuan-Jyue and Naumann, Tristan and Elhadad, No{\'e}mie},
-  journal={arXiv preprint arXiv:2305.07615},
-  year={2023}
+@inproceedings{adams-etal-2023-desired,
+  title = "What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization",
+  author = "Adams, Griffin and
+    Nguyen, Bichlien and
+    Smith, Jake and
+    Xia, Yingce and
+    Xie, Shufang and
+    Ostropolets, Anna and
+    Deb, Budhaditya and
+    Chen, Yuan-Jyue and
+    Naumann, Tristan and
+    Elhadad, No{\'e}mie",
+  editor = "Rogers, Anna and
+    Boyd-Graber, Jordan and
+    Okazaki, Naoaki",
+  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+  month = jul,
+  year = "2023",
+  address = "Toronto, Canada",
+  publisher = "Association for Computational Linguistics",
+  url = "https://aclanthology.org/2023.acl-long.587",
+  doi = "10.18653/v1/2023.acl-long.587",
+  pages = "10520--10542",
+  abstract = "Summarization models often generate text that is poorly calibrated to quality metrics because they are trained to maximize the likelihood of a single reference (MLE). To address this, recent work has added a calibration step, which exposes a model to its own ranked outputs to improve relevance or, in a separate line of work, contrasts positive and negative sets to improve faithfulness. While effective, much of this work has focused on \textit{how} to generate and optimize these sets. Less is known about \textit{why} one setup is more effective than another. In this work, we uncover the underlying characteristics of effective sets. For each training instance, we form a large, diverse pool of candidates and systematically vary the subsets used for calibration fine-tuning. Each selection strategy targets distinct aspects of the sets, such as lexical diversity or the size of the gap between positive and negatives. On three diverse scientific long-form summarization datasets (spanning biomedical, clinical, and chemical domains), we find, among others, that faithfulness calibration is optimal when the negative sets are extractive and more likely to be generated, whereas for relevance calibration, the metric margin between candidates should be maximized and surprise{--}the disagreement between model and metric defined candidate rankings{--}minimized.",
 }
 ```
 