Update README.md
README.md
# distilgpt2-finetuned-finance

This model is a further fine-tuned version of [distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the combination of 4 different summarisation datasets:
- [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail)
- [samsum](https://huggingface.co/datasets/samsum)
- [xsum](https://huggingface.co/datasets/xsum)
- [ccdv/pubmed-summarization](https://huggingface.co/datasets/ccdv/pubmed-summarization)

Please check out the official model page and paper:
- [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6)
- [Pre-trained Summarization Distillation](https://arxiv.org/abs/2010.13002)

## Training and evaluation data

One can reproduce the dataset using the following code:
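The dataset-building code itself is cut off in this diff. As a minimal sketch, assuming the Hugging Face `datasets` library and the default `train` splits (the author's actual sampling, filtering, and preprocessing are unknown), the four corpora could be merged into a single `(text, summary)` dataset like this:

```python
# Illustrative sketch only: the exact reproduction code is not shown in the
# diff. The column names below are the real field names of each corpus;
# the split choice and shuffling seed are assumptions.
from datasets import concatenate_datasets, load_dataset

# (dataset name, config, source-text column, summary column)
SOURCES = [
    ("cnn_dailymail", "3.0.0", "article", "highlights"),
    ("samsum", None, "dialogue", "summary"),
    ("xsum", None, "document", "summary"),
    ("ccdv/pubmed-summarization", None, "article", "abstract"),
]

def load_unified(name, config, text_col, summary_col):
    """Load one corpus and normalise its columns to (text, summary)."""
    ds = load_dataset(name, config, split="train")
    ds = ds.rename_columns({text_col: "text", summary_col: "summary"})
    extra = [c for c in ds.column_names if c not in ("text", "summary")]
    return ds.remove_columns(extra)

combined = concatenate_datasets(
    [load_unified(*spec) for spec in SOURCES]
).shuffle(seed=42)
print(combined)
```

Renaming every corpus to the same two columns is what makes `concatenate_datasets` applicable, since it requires identical schemas across the datasets being joined.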