Bfloat16 problem
#2
by Aniel99 - opened
Hi, I'm trying to run this on a Jetson device on real-time video, with some modifications to the video preprocessing, and I got this error:
expected mat1 and mat2 to have the same dtype, but got: c10::BFloat16 != c10::Half
If I change this line:
video = image_processor.preprocess(image_np, return_tensors="pt")["pixel_values"].cuda().bfloat16()
to float16, it works.
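In other words, the preprocessed frame just has to match whatever dtype the model's weights were loaded in. A minimal sketch of that (assuming `model`, `image_processor`, and `image_np` are the same objects as in the snippet above):

```python
import torch

# Cast the preprocessed frame to the dtype the model's weights actually use,
# so mat1 (activations) and mat2 (weights) always have the same dtype.
model_dtype = next(model.parameters()).dtype  # e.g. torch.float16 in this setup

video = (
    image_processor.preprocess(image_np, return_tensors="pt")["pixel_values"]
    .to(device="cuda", dtype=model_dtype)
)
```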
Thank you!
If I add this line after model.eval(), it seems to work:
model = model.to(torch.bfloat16)
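For reference, the full context of that fix as a minimal sketch (same `model` / `image_processor` / `image_np` names as above; the forward call is hypothetical and should be adapted to the actual model):

```python
import torch

# Cast the whole model to bfloat16 after eval() so its weights match the
# bfloat16 pixel_values produced by the preprocessing below.
model.eval()
model = model.to(torch.bfloat16)

with torch.no_grad():
    video = (
        image_processor.preprocess(image_np, return_tensors="pt")["pixel_values"]
        .cuda()
        .bfloat16()
    )
    outputs = model(video)  # hypothetical call; use the model's actual inference API
```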