Having an issue with AMD GPU on Windows

#164
by BABIFIT - opened

I'm getting this error:
Traceback (most recent call last):
File "D:\PGRM\DecSD\diffusers\examples\inference\save_onnx.py", line 66, in
convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)
File "D:\PGRM\DecSD\diffusers\examples\inference\save_onnx.py", line 41, in convert_to_onnx
traced_model = torch.jit.trace(unet, check_inputs[0], check_inputs=[check_inputs[1]], strict=True)
File "C:\Users\MYNAME\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\jit_trace.py", line 759, in trace
return trace_module(
File "C:\Users\MYNAME\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\jit_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.

I have no idea how to solve it, and changing strict=True to False gave me a completely different error
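The tracer is failing because the diffusers UNet returns a dict-like output, and strict tracing refuses that. Rather than flipping strict to False, one workaround sketch is to trace a thin wrapper whose forward returns a plain tuple. This is only a sketch, not the official script: it assumes your diffusers version accepts return_dict=False on the UNet's forward, and it reuses the pipe and check_inputs names already defined in save_onnx.py.

import torch

class UNetWrapper(torch.nn.Module):
    # Thin wrapper used only for tracing: it forwards to the real UNet
    # but asks it to return a tuple instead of a dict-like object.
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)

# check_inputs[0] is the example input tuple the script already builds for the UNet
traced_model = torch.jit.trace(UNetWrapper(pipe.unet), check_inputs[0], strict=True)

This only addresses the dict output; the TracerWarning about converting a tensor to a Python boolean that shows up further down is a separate (and usually harmless) warning.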

Same here

Same here

Another one here

What exact card are you using OP?

6750 XT on Windows 11, paired with an R5 3600 and 32 GB of RAM

And another one here :)

RX 6700 XT here

Same here for AMD Radeon RX 6800S on Windows 11 with AMD Ryzen 9

Yeah, me too! What is the fix for this?

Traceback (most recent call last):
File "E:\AI\diffusers-dml\examples\inference\save_onnx.py", line 16, in
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=lms, use_auth_token=True)
File "E:\AI\diffusers-dml\src\diffusers\pipeline_utils.py", line 240, in from_pretrained
load_method = getattr(class_obj, load_method_name)
TypeError: getattr(): attribute name must be string

Same here with an RX 580

C:\Users**\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\resnet.py:122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if hidden_states.shape[0] >= 64:
Traceback (most recent call last):
File "C:\r\diffusers\examples\inference\save_onnx.py", line 66, in
convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)
File "C:\r\diffusers\examples\inference\save_onnx.py", line 41, in convert_to_onnx
traced_model = torch.jit.trace(unet, check_inputs[0], check_inputs=[check_inputs[1]], strict=True)
File "C:\Users**\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit_trace.py", line 759, in trace
return trace_module(
File "C:\Users**\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\jit_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.

Oh, c'mon! :) RX 6700 XT here

C:\Users*******\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\resnet.py:122: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if hidden_states.shape[0] >= 64:
Traceback (most recent call last):
File "C:\Windows\System32\diffusers\examples\inference\save_onnx.py", line 66, in
convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)
File "C:\Windows\System32\diffusers\examples\inference\save_onnx.py", line 41, in convert_to_onnx
traced_model = torch.jit.trace(unet, check_inputs[0], check_inputs=[check_inputs[1]], strict=True)
File "C:\Users\a6out\AppData\Roaming\Python\Python310\site-packages\torch\jit_trace.py", line 759, in trace
return trace_module(
File "C:\Users\a6out\AppData\Roaming\Python\Python310\site-packages\torch\jit_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.

I had slightly more luck with this one :)

https://github.com/nod-ai/SHARK/blob/main/shark/examples/shark_inference/stable_diffusion/stable_diffusion_amd.md

Still on vacation, so I'm not at my rig to go the full Python route - but the exe does its magic, provided you install the right AMD drivers

Same error here (RX 6800 XT)

Traceback (most recent call last):
File "C:\Diffusion\diffusers-dml\examples\inference\save_onnx.py", line 66, in
convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)
File "C:\Diffusion\diffusers-dml\examples\inference\save_onnx.py", line 41, in convert_to_onnx
traced_model = torch.jit.trace(unet, check_inputs[0], check_inputs=[check_inputs[1]], strict=True)
File "C:\Diffusion\amd_venv\lib\site-packages\torch\jit_trace.py", line 759, in trace
return trace_module(
File "C:\Diffusion\amd_venv\lib\site-packages\torch\jit_trace.py", line 976, in trace_module
module._c._create_method_from_trace(
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.

Just joining the rest.

Same here with an RX 580

At my wits' end! Also same here with Windows 10 and an RX 580.

RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.

Have the same problem with an RX 6750 XT

I had this same issue on an RX 6800 XT and came across this guide from GitHub user 'averad' using SD 1.5, which worked successfully.
https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674

Note: I set protobuf version to 3.20.2 instead of 3.20.1 due to this error:
onnx 1.13.1 requires protobuf<4,>=3.20.2
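For anyone skimming, the end state of that guide is an ONNX copy of the weights loaded through diffusers' ONNX pipeline with the DirectML execution provider, which is what lets it run on AMD GPUs under Windows. A rough usage sketch (the model path and prompt are placeholders, and it assumes onnxruntime-directml and a diffusers release with OnnxStableDiffusionPipeline are installed):

from diffusers import OnnxStableDiffusionPipeline

# "./stable_diffusion_onnx" is a placeholder for wherever the converted model was saved
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./stable_diffusion_onnx",
    provider="DmlExecutionProvider",  # DirectML backend, i.e. AMD GPUs on Windows
)
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("output.png")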

Greetings everyone! Following ronniediaz's instructions worked for me. Thank you so much, sir/ma'am!
(Copied here):
"I had this same issue on RX 6800 XT and came across this guide from git user 'averad' using SD 1.5 and worked successfully.
https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674

Note: I set protobuf version to 3.20.2 instead of 3.20.1 due to this error:
onnx 1.13.1 requires protobuf<4,>=3.20.2"
