Arnon2 committed on
Commit 31f67f4 · verified · 1 parent: 2a671cb

Change AI response limitations.


Hello @dvruette. I wasn't sure how to get your attention, so I've chosen this method. First of all, I'd like to say that these trained "concepts" are indeed very successful. As an amateur writer with plenty of ideas but not a particularly extensive vocabulary, I find this Space perfect for adding colour to my first drafts. However, there seems to be one detail that is probably unnecessary: an AI response is limited to 512 tokens. I've noticed that the AI seems to work out the sentence before it finishes writing it, so it simply stops mid-sentence when the cap is reached, even though the rest of the sentence has apparently already been computed. The limit therefore seems redundant: the model cannot write indefinitely even without it, since it is already bounded by the 60 seconds of "thinking" time. I think it would be better to remove this line or change it to a larger value. Of course, I could be wrong, but if I am, I'd rather be told than be completely ignored. Thanks in advance, and best of luck with your other projects. Oh, and the length penalty seems to do nothing, but that could just be my selective testing.

Files changed (1)
  1. main.py +1 -1
main.py CHANGED
@@ -127,7 +127,7 @@ def generate_completion(
     streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
 
     generation_kwargs = dict(
-        max_new_tokens=512,
+        max_new_tokens=1024,
         repetition_penalty=repetition_penalty,
         length_penalty=length_penalty,
         streamer=streamer,
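For context, the behaviour described in the commit message can be sketched in plain Python: `max_new_tokens` acts as a hard per-token budget during streaming, so generation stops the instant the budget runs out, with no regard for sentence boundaries. This is a hypothetical simulation for illustration only, not code from main.py; `stream_with_cap` and the token list are invented names.

```python
def stream_with_cap(token_stream, max_new_tokens):
    """Yield tokens from a stream, hard-stopping once the budget is spent.

    Mimics how a max_new_tokens cap truncates streamed output: the limit is
    enforced token by token, so a sentence can be cut off part-way through.
    """
    emitted = 0
    for token in token_stream:
        if emitted >= max_new_tokens:
            return  # budget exhausted: stop mid-sentence if need be
        yield token
        emitted += 1


# Hypothetical token stream standing in for model output.
tokens = ["The", " quick", " brown", " fox", " jumps", " over", " the", " lazy", " dog", "."]

# With a budget of 5 tokens, the sentence is cut off part-way.
capped = "".join(stream_with_cap(tokens, max_new_tokens=5))
print(capped)  # → "The quick brown fox jumps"
```

As an aside on the `length_penalty` observation: in the transformers library, `length_penalty` only affects beam-search scoring, so with greedy or sampling decoding (as typically used with a streamer) it has no effect, which would explain why it appeared to do nothing in testing.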