create a README with basic info
README.md
ADDED
---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
  task: summarization
  task_id: summarization
  splits:
    eval_split: test
  col_mapping:
    chapter: text
    summary_text: target
---

# BookSum

BookSum is a long-form summarization dataset released by Salesforce Research in December 2021.
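
The `train-eval-index` metadata above maps the `chapter` column to the source text and `summary_text` to the reference summary, with `test` as the evaluation split. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository id `kmfoda/booksum` is assumed here, inferred from the card's `kmfoda--booksum` config name.

```python
from datasets import load_dataset

# Repository id assumed from the card's "kmfoda--booksum" config name.
booksum = load_dataset("kmfoda/booksum")

# Per the card's col_mapping, `chapter` is the source document and
# `summary_text` is the reference summary; `test` is the eval split.
example = booksum["test"][0]
print(example["chapter"][:300])
print(example["summary_text"][:300])
```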

> The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases. While relevant, such datasets will offer limited challenges for future generations of text summarization systems. We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization. Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level. The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures. To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.

## Links

- [paper](https://arxiv.org/abs/2105.08209) by Salesforce Research
- [GitHub repo](https://github.com/salesforce/booksum)