---
pipeline_tag: summarization
datasets:
  - samsum
language:
  - en
metrics:
  - rouge
library_name: transformers
widget:
  - text: >
      John: Hey! I've been thinking about getting a PlayStation 5. Do you think
      it is worth it? 

      Dan: Idk man. R u sure ur going to have enough free time to play it? 

      John: Yeah, that's why I'm not sure if I should buy one or not. I've been
      working so much lately idk if I'm gonna be able to play it as much as I'd
      like.
  - text: >
      Sarah: Do you think it's a good idea to invest in Bitcoin?

      Emily: I'm skeptical. The market is very volatile, and you could lose
      money.

      Sarah: True. But there's also a high upside, right?
  - text: |
      Madison: Hello Lawrence are you through with the article?
      Lawrence: Not yet sir.
      Lawrence: But i will be in a few.
      Madison: Okay. But make it quick.
      Madison: The piece is needed by today
      Lawrence: Sure thing
      Lawrence: I will get back to you once i am through.
---

## Description

This model is a specialized adaptation of the `facebook/bart-large-xsum` model, fine-tuned on the SamSum dataset for improved performance on dialogue summarization.

## Development
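The original training notebook is not included in this card. Below is a minimal sketch of how the base checkpoint and dataset could be loaded to reproduce the setup; the preprocessing length limits are illustrative assumptions, not the author's exact values, and depending on your `datasets` version, loading `samsum` may additionally require `trust_remote_code=True`.

```python
# Sketch only: load the base checkpoint and the SamSum dataset
# as a starting point for fine-tuning.
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# SamSum provides "dialogue" (input) and "summary" (target) columns.
dataset = load_dataset("samsum")

def preprocess(batch):
    # Tokenize dialogues as inputs and summaries as labels;
    # the max lengths here are illustrative, not tuned values.
    inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)
```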

## Usage

```python
from transformers import pipeline

model = pipeline("summarization", model="luisotorres/bart-finetuned-samsum")

conversation = '''Sarah: Do you think it's a good idea to invest in Bitcoin?
    Emily: I'm skeptical. The market is very volatile, and you could lose money.
    Sarah: True. But there's also a high upside, right?
'''
model(conversation)
```
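The pipeline forwards generation keyword arguments to `generate`, so summary length can be tuned per call; the bounds below are illustrative, not tuned values:

```python
# Generation kwargs are passed through to model.generate().
model(conversation, max_length=60, min_length=10, do_sample=False)
```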

## Training Parameters

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # illustrative; not specified in the original card
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)
```
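For context, a hedged sketch of how these arguments could be wired into a `Seq2SeqTrainer`; `model`, `tokenizer`, and the `tokenized` splits are assumed from the Development sketch above:

```python
from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainer

# Pads inputs and labels dynamically per batch.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```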

## Reference

This model is based on the original BART architecture, as detailed in:

Lewis et al. (2019). *BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension.* [arXiv:1910.13461](https://arxiv.org/abs/1910.13461)