---
task_categories:
- summarization
language:
- en
tags:
- chemistry
- biology
- medical
pretty_name: Generating Abstracts of Academic Chemistry Papers
size_categories:
- 100K<n<1M
---

- **Paper:** [What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization](https://arxiv.org/abs/2305.07615)
- **Venue:** ACL 2023
- **Point of Contact:** griffin.adams@columbia.edu
- **Repository:** https://github.com/griff4692/calibrating-summaries

### ChemSum Summary

We introduce a dataset with a pure chemistry focus by compiling a list of chemistry academic journals with Open-Access articles. For each journal, we downloaded full-text article PDFs from the Open-Access portion of the journal using available APIs, or by scraping this content with [Selenium Chrome WebDriver](https://www.selenium.dev/documentation/webdriver/). Each PDF was processed with Grobid via a locally installed [client](https://pypi.org/project/grobid-client-python/) to extract free-text paragraphs with sections (a minimal sketch of this step appears before the Data Splits section below).

The table below shows the journals from which Open-Access articles were sourced, as well as the number of papers processed. For all journals, we filtered for papers with the provided topic of Chemistry when papers from other disciplines were also available (e.g., PubMed).

| Source | # of Articles |
| ----------- | ----------- |
| Beilstein | 1,829 |
| Chem Cell | 546 |
| ChemRxiv | 12,231 |
| Chemistry Open | 398 |
| Nature Communications Chemistry | 572 |
| PubMed Author Manuscript | 57,680 |
| PubMed Open Access | 29,540 |
| Royal Society of Chemistry (RSC) | 9,334 |
| Scientific Reports - Nature | 6,826 |

### Languages

English

## Dataset Structure

### Data Fields

| Column | Description |
| ----------- | ----------- |
| `uuid` | Unique identifier for the example |
| `title` | Title of the article |
| `article_source` | Open-Access journal (see above for list) |
| `abstract` | Abstract (summary reference) |
| `sections` | Full-text sections from the main body of the paper (separated by a special delimiter token) |
| `headers` | Corresponding section headers for the `sections` field (same delimiter) |
| `source_toks` | Aggregate number of tokens across `sections` |
| `target_toks` | Number of tokens in the `abstract` |
| `compression` | Ratio of `source_toks` to `target_toks` |

Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-summaries/blob/master/preprocess/preprocess.py for pre-processing as a summarization dataset. The inputs are `sections` and `headers`, and the target is the `abstract`.
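As a rough illustration of that setup, the sketch below loads the dataset and assembles the summarization input and target. The Hub id `griffin/ChemSum` and the `<!>` delimiter token are assumptions for illustration; check `preprocess.py` in the repository for the exact values.

```python
from datasets import load_dataset

DELIM = "<!>"  # hypothetical section-boundary token; verify against the raw data

# Assumed Hub id, for illustration only
dataset = load_dataset("griffin/ChemSum")

example = dataset["train"][0]
headers = example["headers"].split(DELIM)
sections = example["sections"].split(DELIM)

# Interleave each header with its section body to form the model input;
# the abstract is the summarization target.
source = "\n\n".join(f"{h.strip()}\n{s.strip()}" for h, s in zip(headers, sections))
target = example["abstract"]
```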
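For the extraction step described in the ChemSum Summary, a minimal sketch using the [grobid-client-python](https://pypi.org/project/grobid-client-python/) package follows. It assumes a Grobid server is already running locally; the directory layout, config path, and concurrency level are placeholders, not the exact pipeline used to build this dataset.

```python
from grobid_client.grobid_client import GrobidClient

# Assumes ./config.json points at a locally running Grobid server
# (default address: http://localhost:8070).
client = GrobidClient(config_path="./config.json")

# Convert every PDF under ./pdfs into TEI XML, from which free-text
# paragraphs and their section headers can be extracted.
client.process(
    "processFulltextDocument",
    "./pdfs",            # directory of downloaded article PDFs
    output="./tei_xml",  # destination for the TEI XML output
    n=4,                 # number of concurrent requests
)
```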
### Data Splits

| Split | Count |
| ----------- | ----------- |
| `train` | 115,956 |
| `validation` | 1,000 |
| `test` | 2,000 |

### Citation Information

```
@inproceedings{adams-etal-2023-desired,
    title = "What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization",
    author = "Adams, Griffin  and
      Nguyen, Bichlien  and
      Smith, Jake  and
      Xia, Yingce  and
      Xie, Shufang  and
      Ostropolets, Anna  and
      Deb, Budhaditya  and
      Chen, Yuan-Jyue  and
      Naumann, Tristan  and
      Elhadad, No{\'e}mie",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.587",
    doi = "10.18653/v1/2023.acl-long.587",
    pages = "10520--10542",
    abstract = "Summarization models often generate text that is poorly calibrated to quality metrics because they are trained to maximize the likelihood of a single reference (MLE). To address this, recent work has added a calibration step, which exposes a model to its own ranked outputs to improve relevance or, in a separate line of work, contrasts positive and negative sets to improve faithfulness. While effective, much of this work has focused on \textit{how} to generate and optimize these sets. Less is known about \textit{why} one setup is more effective than another. In this work, we uncover the underlying characteristics of effective sets. For each training instance, we form a large, diverse pool of candidates and systematically vary the subsets used for calibration fine-tuning. Each selection strategy targets distinct aspects of the sets, such as lexical diversity or the size of the gap between positive and negatives. On three diverse scientific long-form summarization datasets (spanning biomedical, clinical, and chemical domains), we find, among others, that faithfulness calibration is optimal when the negative sets are extractive and more likely to be generated, whereas for relevance calibration, the metric margin between candidates should be maximized and surprise{--}the disagreement between model and metric defined candidate rankings{--}minimized.",
}
```