Errors related to the number of tokens
I have fixed it; you can check it now!
I am still getting it :(
This is the log of the conversation:
😃: hi
🤖: Step 1
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: bEdwiED8k_mCIVSgeyiDu)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 29298 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 1 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.08
🤖: -----
🤖: Step 2
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: QZ5D8iKHaSZ4eaedGkavj)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 29453 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 2 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.07
🤖: -----
🤖: Step 3
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: 1zsEFIImfge6ei7eggWFt)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 29612 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 3 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.07
🤖: -----
🤖: Step 4
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: Qh6_HkWxZYFjqUOmLBMYU)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 29768 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 4 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.07
🤖: -----
🤖: Step 5
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: SyFHZIqCAa3-lRGQWG9t5)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 29926 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 5 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.07
🤖: -----
🤖: Step 6
🤖: Error in generating model output:
422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions (Request ID: u-Q7hu_Wi-wJ0Tbh4vSjs)
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 16000. Given: 30084 `inputs` tokens and 2096 `max_new_tokens`
Make sure 'text-generation' task is supported by the model.
🤖: Step 6 | Input-tokens:13,768 | Output-tokens:51 | Duration: 0.06
🤖: -----
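Each request fails because the prompt plus `max_new_tokens` exceeds the endpoint's 16,000-token limit: in Step 1, 29,298 + 2,096 = 31,394. Since the input alone is already above 16,000, lowering `max_new_tokens` is not enough; the conversation history and tool outputs carried between steps also have to be trimmed. Below is a minimal sketch, assuming the serverless Inference API is called directly through `huggingface_hub`'s `InferenceClient` (the actual agent setup may differ), of keeping both sides of the limit in check:

```python
from huggingface_hub import InferenceClient

# Minimal sketch (not the poster's actual code): call the serverless
# Inference API directly and keep each request under the 16,000-token cap
# by limiting max_tokens and dropping older conversation turns.
client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct")

MAX_TURNS = 6  # hypothetical cut-off; keep only the most recent turns


def chat(history):
    # Drop older turns so the prompt itself stays well below 16,000 tokens;
    # the log above shows ~29k input tokens, which already exceeds the limit
    # before max_new_tokens is even added.
    trimmed = history[-MAX_TURNS:]
    response = client.chat_completion(
        messages=trimmed,
        max_tokens=512,  # well under the 2,096 used in the failing requests
    )
    return response.choices[0].message.content


history = [{"role": "user", "content": "hi"}]
print(chat(history))
```

If an agent framework is building the prompt, the equivalent fix is to lower its max-tokens setting and cap how much history or tool output it carries from one step to the next.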