# flan-t5-base-xsum

`google/flan-t5-base` finetuned on the XSum summarization dataset using LoRA via the `adapter-transformers` library.

## Usage

Load the tokenizer from the original `google/flan-t5-base` checkpoint and the model weights from this repository:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Tokenizer comes from the base checkpoint; the finetuned weights from this repo
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("tripplyons/flan-t5-base-xsum")

# T5-style models expect a task prefix, here "summarize: "
input_text = "summarize: The ex-Reading defender denied fraudulent trading charges relating to the Sodje Sports Foundation - a charity to raise money for Nigerian sport. Mr Sodje, 37, is jointly charged with elder brothers Efe, 44, Bright, 50 and Stephen, 42. Appearing at the Old Bailey earlier, all four denied the offence. The charge relates to offences which allegedly took place between 2008 and 2014. Sam, from Kent, Efe and Bright, of Greater Manchester, and Stephen, from Bexley, are due to stand trial in July. They were all released on bail."

# Tokenize, truncating to the 512-token encoder limit
input_ids = tokenizer(
    [input_text], max_length=512, truncation=True, padding=True, return_tensors="pt"
).input_ids

output = model.generate(input_ids, max_length=512)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
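For background, the model card mentions LoRA finetuning: instead of updating the full pretrained weight `W`, LoRA learns a low-rank update `B @ A` that is added to the frozen layer output. The sketch below illustrates this idea with NumPy; the dimensions, rank, and `alpha` scaling value are hypothetical and chosen for illustration, not taken from this model's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2        # hypothetical hidden size and LoRA rank
alpha = 16         # hypothetical scaling hyperparameter

W = rng.normal(size=(d, d))   # frozen pretrained weight
A = rng.normal(size=(r, d))   # trainable down-projection
B = np.zeros((d, r))          # trainable up-projection, initialized to zero

def lora_forward(x):
    # frozen base output plus the low-rank update, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B initialized to zero, the adapted layer matches the base layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

During training only `A` and `B` (2 * d * r parameters per layer in this sketch) receive gradients, which is why LoRA finetuning is much cheaper than updating all of `W`.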