GuernikaModelConverter for arm64 please

#18
by andykoko - opened

I see that the torch packaged by GuernikaModelConverter is an x86_64 arch, which runs quite slowly on Apple M1.
Can you release an arm64 version?

Guernika org

@andykoko where can you see that?

/private/var/folders/rz/mv09yyqj2zs340l11vl_1m080000gn/
The Python environment you packaged is extracted into a system cache folder like this one.

Guernika org

How can you know it's x86_64? I am running on Apple Silicon too so it should be using the correct torch.

I used the macOS `file` tool:
> file /private/var/folders/rz/mv09yyqj2zs340l11vl_1m080000gn/T/_MEIO2Vuko/torch/lib/libtorch_cpu.dylib
> Mach-O 64-bit dynamically linked shared library x86_64
All of the extracted dynamic libraries are x86_64.
I used GuernikaModelConverter on an old Intel Mac, and the conversion speed there is 2-3 times faster than on the M1.
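The `file` check above can also be done in pure Python by reading the Mach-O header. This is a minimal sketch: the magic number and CPU-type constants come from Apple's `mach-o/loader.h`, only the two architectures relevant here are handled, and fat/universal binaries are not covered.

```python
# Identify a 64-bit Mach-O binary's CPU architecture from its first 8 bytes.
# Constants are from mach-o/loader.h; anything unrecognized returns "unknown".
import struct

MH_MAGIC_64 = 0xFEEDFACF          # 64-bit Mach-O magic (little-endian on disk)
CPU_TYPE_X86_64 = 0x01000007      # CPU_TYPE_X86 | CPU_ARCH_ABI64
CPU_TYPE_ARM64 = 0x0100000C       # CPU_TYPE_ARM | CPU_ARCH_ABI64

def macho_arch(header: bytes) -> str:
    magic, cputype = struct.unpack_from("<II", header, 0)
    if magic != MH_MAGIC_64:
        return "unknown"
    return {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}.get(cputype, "unknown")

# Usage: macho_arch(open("libtorch_cpu.dylib", "rb").read(8))
```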

Guernika org

That is correct, those do seem to be x86_64. I will take a look at this, but I am able to use MPS, which, if I'm not mistaken, should not be available on the Intel build. If you have any instructions on how to install arm64 PyTorch, it would probably save me some time.

pip should install arm64 PyTorch automatically. Maybe you have enabled Rosetta for Terminal.app; then all command-line operations will see your system architecture as x86_64.
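A quick way to check which architecture the Python process itself reports (under Rosetta, an x86_64 interpreter reports x86_64 even on Apple Silicon):

```python
# Under Rosetta, an x86_64 Python prints "x86_64" here even on an M1.
# On macOS you can also run `sysctl -n sysctl.proc_translated` in a shell:
# it prints 1 when the current process is being translated by Rosetta.
import platform

print(platform.machine())
```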

Guernika org

That's not it. Do you have torch installed, and are those libraries arm64?

I checked my torch v2.0.0: it's arm64, and its libtorch_cpu.dylib is only 169 MB.

Guernika org

Okay, found it! It was my conda version all this time. I think you just saved me a loooot of time. I will try cleaning this up, getting it working again, and generating a new version. Thank you for bringing this up!

There seems to be a small problem with GuernikaModelConverter.
I selected a 512x768 model in Guernika.app. The picture box on the left is incorrectly horizontal, and it returns to vertical once the model is loaded.
This model was converted from safetensors to Core ML with GuernikaModelConverter;
there is no such problem when converting a model from diffusers to Core ML.

Guernika org

@andykoko could you test this version and let me know if it's working?

Also, did you set a custom size when converting, or was it the default size of the model? It shouldn't be doing anything different whether loading from safetensors or not.

I just tested the latest GuernikaModelConverter_arm.
I converted a safetensors model with a custom size of 768x512.
It took only 11 minutes to complete on the M1, and the picture box is displayed at the correct scale in Guernika.app.
The old x86_64 version took about 1 hour to convert.
Thank you GuiyeC!

Bug report:
A ControlNet model converted from diffusers or a checkpoint cannot be recognized by Guernika.app.
I found the reason: the guernika.json file lacks the key "method".
After manually adding "method": "depth", it is recognized by Guernika.app.

Guernika org

@andykoko What do you mean it cannot be recognized? It should still work, but unless "method" is set to "depth" it won't do the preprocessing of generating the depth map.

The method is recognized based on the name of the ControlNet (diffusers folder or checkpoint file); it has to have "depth" in its name. Can you share what checkpoint you were trying to convert?

Since you only provide 512x512 ControlNets, it was downloaded from lllyasviel/sd-controlnet-canny.
I didn't know you detect the type from the name of the folder. I used to do it like this:
for example, I named the diffusers folder "Canny", containing config.json and diffusion_pytorch_model.bin,
and there was no "method": "canny" in guernika.json after conversion.

Is the Preprocess function incompatible with 512x768 or 768x512?
The model is 512x768. After enabling Preprocess, the picture is automatically cropped to 1:1.
Is Preprocess required? After I turn it off, it seems to work normally.

[attached image: 22.png]

Guernika org

@andykoko sorry for the late response, I was working on updating the core to allow loading on demand, improve memory management, support multiple ControlNets at the same time...
I also noticed a bug where the image was not being preprocessed correctly, which is why it may have seemed like it wasn't doing anything different. That should be working now, with support for other aspect ratios, in the latest version, Guernika 5.0.0.

About the method being recognized: I could add a selector to the Guernika Model Converter. I have to check, because I've been working with more ControlNets and there are a lot more now, so I might have to come up with something different or try to preprocess other methods too.

@GuiyeC Thanks, I have seen the results of your fix in Guernika 5.0. The automatic filtering and matching of ControlNet models is a great design.
However, the program often crashes, mainly when clicking to switch between different pictures, even though no model is running at the time.

In addition, GuernikaModelConverter does not seem to be able to convert an inpainting model in safetensors format; the reason is that it cannot be recognized by the diffusers package. I used the Python script with the "--original_config_file ./v1-inpainting-inference.yaml" parameter to convert the safetensors file to diffusers, and then GuernikaModelConverter can convert the inpainting model.
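The workaround described above, as a command line. This uses diffusers' `convert_original_stable_diffusion_to_diffusers.py` script; the file names and paths here are examples, and the yaml is the v1 inpainting inference config shipped with Stable Diffusion.

```shell
# Convert an inpainting .safetensors checkpoint to a diffusers folder first,
# passing the inpainting config explicitly; paths are examples.
python convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path ./InpaintingModel.safetensors \
  --from_safetensors \
  --original_config_file ./v1-inpainting-inference.yaml \
  --dump_path ./InpaintingModel_diffusers
# The resulting folder can then be converted with GuernikaModelConverter.
```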

Guernika org

@andykoko what do you mean it crashes when switching between different pictures? The input of the ControlNet? If you give me steps to reproduce it I might be able to fix it 🙏

As for the GuernikaModelConverter, if you give the yaml file the same name as the safetensors I believe it should recognize the correct configuration, for example:

InpaintingModel.safetensors
InpaintingModel.yaml

They both have to be together in the same folder.

Sorry, I haven't been able to reproduce the crash for the time being. I uploaded a video of the operation. Sometimes it gets stuck for 3-5 seconds when switching pictures, and sometimes it crashes.
