Wrong "model_max_length" set in tokenizer?
#3 opened by ntkuhn
It seems like the tokenizer configuration has the setting `"model_max_length": 1024`, while the model can take inputs up to 2048. Is this an oversight?
That is correct. Thanks for pointing this out! Fixed.
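For anyone still seeing the old value from a previously cached download, a minimal sketch of the corrected entry in `tokenizer_config.json` (assuming the standard Transformers tokenizer config format; other fields in the file are unchanged):

```json
{
  "model_max_length": 2048
}
```

Alternatively, the value can be overridden at load time without editing the file, since `AutoTokenizer.from_pretrained` accepts a `model_max_length` keyword argument that takes precedence over the config.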
rooa changed discussion status to closed