Image preprocessing?
Do these models (or Mochi Diffusion) do the image preprocessing necessary to use a normal image for the ControlNets?
For example, for the Normal Map ControlNet, does the user have to supply a normal map as an image, or will one be generated from a regular image? Or for the Canny ControlNet, does the image need to contain only the edges, or will the program generate the processed edges?
I know some other GUIs do the preprocessing for you, so in several scenarios you can provide a regular image and have an appropriate processed image generated as a depth map, normal map, edge map, etc.
Mochi Diffusion does not have the ability to create preprocessed images for ControlNet, so please use the Hugging Face Space at the URL below to preprocess your images.
https://huggingface.co/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu
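If you would rather preprocess locally instead of using the Space, the kind of control image a Canny ControlNet expects can be produced with OpenCV. This is just an illustrative sketch, not part of Mochi Diffusion; the file names and thresholds are placeholders you would adjust for your own image:

```python
# Minimal local Canny preprocessing sketch (assumes: pip install opencv-python).
import cv2

# "input.png" is a placeholder path to an ordinary photo.
image = cv2.imread("input.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # Canny works on a single channel
edges = cv2.Canny(gray, 100, 200)                # low/high thresholds; tune per image
cv2.imwrite("canny_control.png", edges)          # white edges on black, usable as the ControlNet input
```

The saved edge map can then be supplied to Mochi Diffusion as the ControlNet image, the same way you would use an image downloaded from the annotator Space.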