---
license: mit
language:
  - en
base_model:
  - google-t5/t5-base
datasets:
  - abisee/cnn_dailymail
metrics:
  - rouge
---

# T5-Base-Sum

This model is a fine-tuned version of google-t5/t5-base for abstractive summarization. It was fine-tuned on news articles from the CNN/DailyMail dataset and is hosted on Hugging Face for easy access and use.

## Model Usage

Below is an example of how to load and use this model for summarization:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the model and tokenizer from Hugging Face
model = T5ForConditionalGeneration.from_pretrained("Vijayendra/T5-Base-Sum")
tokenizer = T5Tokenizer.from_pretrained("Vijayendra/T5-Base-Sum")

# Example of using the model for summarization
article = """
Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company
said.  The policy includes the termination of accounts of anti-vaccine influencers.  Tech giants have been criticised for not doing more to
counter false health information on their sites.  In July, US President Joe Biden said social media platforms were largely responsible for
people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue.  YouTube, which is owned
by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation 
about Covid vaccines.  In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about
vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical
misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and
effective by local health authorities and the WHO," the post said, referring to the World Health Organization.
"""
inputs = tokenizer.encode("summarize: " + article, return_tensors="pt", max_length=512, truncation=True)
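
# Generate with beam search (4 beams); length_penalty > 1 favors longer
# summaries, and early_stopping halts decoding once all beams finish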
summary_ids = model.generate(inputs, max_length=150, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True)

# Decode and print the summary
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```
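
For quick experiments, the same checkpoint can also be loaded through the high-level `pipeline` API. This is a minimal sketch that reuses the `article` string from the example above; depending on the checkpoint's config, the pipeline may not prepend T5's task prefix automatically, so it is added explicitly here:

```python
from transformers import pipeline

# Load the checkpoint through the summarization pipeline
summarizer = pipeline("summarization", model="Vijayendra/T5-Base-Sum")

# Pass the "summarize: " prefix explicitly to match the T5 input format
result = summarizer("summarize: " + article, max_length=150, min_length=50, num_beams=4)
print(result[0]["summary_text"])
```

Since ROUGE is the metric listed for this model, a generated summary can be scored against a reference with the Hugging Face `evaluate` library. The reference below is an illustrative hand-written summary, not an official target from CNN/DailyMail, and the snippet assumes the `evaluate` and `rouge_score` packages are installed:

```python
import evaluate

# Load the ROUGE metric (requires the rouge_score package)
rouge = evaluate.load("rouge")

# Illustrative reference summary for the article above; replace with a
# real target summary when evaluating on CNN/DailyMail
reference = (
    "YouTube is removing videos that spread misinformation about any "
    "approved vaccine, expanding its earlier ban on false claims about "
    "Covid-19 vaccines."
)

scores = rouge.compute(predictions=[summary], references=[reference])
print(scores)
```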