This model doesn't work for long sentences.

#3
by Akshaykdubey - opened

How can I make this model work for long sentences (essentially paragraphs)? I believe we can increase the max_length parameter below, but that only helps up to a limit.

def correct_grammar(input_text, num_return_sequences, num_beams=4):
  # Tokenize and move the batch to the same device as the model
  batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device)
  translated = model.generate(**batch, max_length=64, num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
  tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
  return tgt_text

Any suggestions?
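One workaround I've been considering is splitting the paragraph into sentences, correcting each one within the model's length limit, and rejoining the results. A minimal sketch, assuming a naive regex-based splitter and a correction callable (e.g. wrapping correct_grammar above); the splitter and function names here are illustrative, not part of any library:

```python
import re

def split_sentences(text):
    # Naive splitter: break after ., !, or ? followed by whitespace.
    # For robustness, a real splitter (nltk, spacy) would be better.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s.strip()]

def correct_paragraph(paragraph, correct_fn):
    # correct_fn maps one sentence -> its corrected form, e.g.
    # lambda s: correct_grammar(s, num_return_sequences=1)[0]
    return " ".join(correct_fn(s) for s in split_sentences(paragraph))
```

Each sentence then stays within max_length, at the cost of losing cross-sentence context.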
