---
pipeline_tag: summarization
datasets:
- samsum
language:
- en
metrics:
- rouge
library_name: transformers
widget:
- text: | 
    John: Hey! I've been thinking about getting a PlayStation 5. Do you think it is worth it? 
    Dan: Idk man. R u sure ur going to have enough free time to play it? 
    John: Yeah, that's why I'm not sure if I should buy one or not. I've been working so much lately idk if I'm gonna be able to play it as much as I'd like.
- text: | 
    Sarah: Do you think it's a good idea to invest in Bitcoin?
    Emily: I'm skeptical. The market is very volatile, and you could lose money.
    Sarah: True. But there's also a high upside, right?
- text: | 
    Madison: Hello Lawrence are you through with the article?
    Lawrence: Not yet sir.
    Lawrence: But i will be in a few.
    Madison: Okay. But make it quick.
    Madison: The piece is needed by today
    Lawrence: Sure thing
    Lawrence: I will get back to you once i am through.

---

# Description

This model is a fine-tuned version of **facebook/bart-large-xsum**, adapted for dialogue summarization on the **SamSum** dataset.

## Development
- Kaggle Notebook: [Text Summarization with Large Language Models](https://www.kaggle.com/code/lusfernandotorres/text-summarization-with-large-language-models)

## Usage

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="luisotorres/bart-finetuned-samsum")

conversation = '''Sarah: Do you think it's a good idea to invest in Bitcoin?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''

summarizer(conversation)
```
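The pipeline returns a list of dictionaries with a `summary_text` field. If you need finer control over generation, the same checkpoint can also be loaded with the tokenizer and model classes directly; the sketch below uses illustrative `max_length` and `num_beams` values rather than settings taken from the model's configuration:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("luisotorres/bart-finetuned-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("luisotorres/bart-finetuned-samsum")

conversation = '''Sarah: Do you think it's a good idea to invest in Bitcoin?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''

inputs = tokenizer(conversation, return_tensors="pt", truncation=True)
# max_length and num_beams are illustrative choices, not values stored in the checkpoint
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```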

## Training Parameters
```python
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
seed=42,
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=2,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=4,
predict_with_generate=True,
fp16=True,
report_to="none"
```
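These hyperparameters match the signature of `Seq2SeqTrainingArguments` in Transformers. For context, below is a minimal, self-contained sketch of how they could be wired into a `Seq2SeqTrainer` on the `samsum` dataset; the preprocessing function, tokenization lengths, and `output_dir` are illustrative assumptions, not the exact code from the notebook linked above:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

dataset = load_dataset("samsum")

def preprocess(batch):
    # Dialogues are the inputs, human-written summaries are the targets.
    # The max_length values here are illustrative.
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # assumed output directory
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```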

## Reference
This model is based on the original **BART** architecture, as detailed in:

Lewis et al. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. [arXiv:1910.13461](https://arxiv.org/abs/1910.13461)