Question about token limitations

#2
by jpjp9292 - opened

First of all, your model seems extraordinarily useful for anyone who wants to practice chart generation from text.
I want to use your model with my own datasets, but one issue I ran into is that SD models have a token limit of 75.

How did you overcome this limitation? The paper states that "the average length of the captions is 95.41 tokens, measured by the CLIP tokenizer."

Could you please share some tips on how to get around the token limit?
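
For context, the only workaround I know of (I'm not sure it's what you did, so please correct me) is to split the caption into 75-token chunks, run each chunk through the CLIP text encoder separately, and concatenate the hidden states before passing them to the pipeline via `prompt_embeds`. Below is a minimal sketch assuming a diffusers `StableDiffusionPipeline`; the checkpoint name and the `encode_long_prompt` helper are just placeholders for illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint name is a placeholder; substitute the model you are actually using.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def encode_long_prompt(pipe, prompt: str, chunk_size: int = 75) -> torch.Tensor:
    """Encode a prompt longer than CLIP's 77-token window by splitting it into
    75-token chunks, encoding each chunk separately, and concatenating the
    hidden states along the sequence dimension."""
    tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
    # Tokenize without truncation so nothing beyond the first 75 tokens is dropped.
    ids = tokenizer(prompt, truncation=False, add_special_tokens=False).input_ids
    chunks = [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)] or [[]]
    embeds = []
    for chunk in chunks:
        # Re-add BOS, then pad with EOS up to the encoder's fixed 77-token window.
        chunk = [tokenizer.bos_token_id] + chunk
        chunk += [tokenizer.eos_token_id] * (chunk_size + 2 - len(chunk))
        tokens = torch.tensor([chunk], device=pipe.device)
        with torch.no_grad():
            embeds.append(text_encoder(tokens).last_hidden_state)
    return torch.cat(embeds, dim=1)  # shape: (1, n_chunks * 77, hidden_dim)

long_caption = "..."  # a caption longer than 75 CLIP tokens
prompt_embeds = encode_long_prompt(pipe, long_caption)
# The unconditional embedding must match the prompt's sequence length,
# so repeat the empty-prompt chunk once per prompt chunk.
negative_embeds = encode_long_prompt(pipe, "").repeat(1, prompt_embeds.shape[1] // 77, 1)
image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds).images[0]
```

My understanding is that the UNet's cross-attention accepts an arbitrary sequence length, so only the text encoder's 77-token window needs to be worked around. Is this roughly what you did, or did you handle the long captions differently?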
