AssertionError: Input tensors must be in dtype of torch.float16 or torch.bfloat16

#2
by flyway - opened

```
got prompt
Failed to validate prompt for output 45:
* LoadImage 40:
  - Custom validation failed for node: image - Invalid image file: fafa6a09-abec-4aff-b64d-bd313bce21a1.png
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
got prompt
Advanced Vision Model: clip_vision_siglip2_so400m_512 detected
Requested to load CLIPVisionModelProjection
loaded completely 8720.262394714355 788.7587585449219 True
!!! Exception during processing !!! Input tensors must be in dtype of torch.float16 or torch.bfloat16
Traceback (most recent call last):
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1027, in encode
    output = clip_vision.encode_image(image, crop=crop_image)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 70, in encode_image
    out = self.model(pixel_values=pixel_values, intermediate_output=-2)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 238, in forward
    x = self.vision_model(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 206, in forward
    x, i = self.encoder(x, mask=None, intermediate_output=intermediate_output)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 21, in forward
    out = optimized_attention(q, k, v, self.heads, mask)
  File "I:\AI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 448, in attention_pytorch
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\sageattention\core.py", line 132, in sageattn
    return sageattn_qk_int8_pv_fp8_cuda(q, k, v, tensor_layout=tensor_layout, is_causal=is_causal, sm_scale=sm_scale, return_lse=return_lse, pv_accum_dtype="fp32+fp32")
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)
  File "I:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\sageattention\core.py", line 668, in sageattn_qk_int8_pv_fp8_cuda
    assert dtype in [torch.float16, torch.bfloat16], "Input tensors must be in dtype of torch.float16 or torch.bfloat16"
AssertionError: Input tensors must be in dtype of torch.float16 or torch.bfloat16

Prompt executed in 2.11 seconds
```

It looks like you have a plugin that overrides torch's scaled-dot-product attention (SDPA) function with SageAttention. This is likely what's causing the issue.
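
What happens here: the patch replaces torch.nn.functional.scaled_dot_product_attention globally, so even the fp32 SigLIP/CLIP vision encoder gets routed into SageAttention's kernels, which assert fp16/bf16 inputs. A minimal sketch of what a dtype-guarded patch could look like, assuming the patch is applied globally (guarded_sdpa is a hypothetical name, not any plugin's actual code):

```python
import torch
import torch.nn.functional as F
from sageattention import sageattn

# Keep a handle to the stock implementation before patching.
_original_sdpa = F.scaled_dot_product_attention

def guarded_sdpa(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False, **kwargs):
    # SageAttention's CUDA kernels only accept fp16/bf16 and take no mask;
    # fp32 models (like the CLIP vision encoder above) must fall back,
    # or they trip the AssertionError in sageattention/core.py.
    if (q.dtype in (torch.float16, torch.bfloat16)
            and attn_mask is None and dropout_p == 0.0 and not kwargs):
        return sageattn(q, k, v, is_causal=is_causal)
    return _original_sdpa(q, k, v, attn_mask=attn_mask,
                          dropout_p=dropout_p, is_causal=is_causal, **kwargs)

F.scaled_dot_product_attention = guarded_sdpa
```

With a guard like this, unsupported calls fall back to PyTorch's own SDPA; a blanket replacement with no dtype check fails exactly as in the log above.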

Yes, I installed sageattention, and after uninstalling it, it works fine.
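
For reference, on the portable install shown in the traceback, that means uninstalling with the embedded Python's pip (path taken from the log above):

```
I:\AI\ComfyUI_windows_portable\python_embeded\python.exe -m pip uninstall sageattention
```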

flyway changed discussion status to closed