ControlNet V1.1
Create detailed images from sketches and other inputs
I agree strongly with 1 and 2. I started playing with Stable Diffusion in Oct 2023 and had trouble getting everything installed and working smoothly locally. I then started using various demos in Spaces on Hugging Face. Shortly after that I had a few video projects with Shutterstock and HP, where they were rolling out very simple working demos of text-to-image products, then text-to-3D.
Right then I started telling everyone who asked me about AI image generation that there would be many products that "just work" and require no technical knowledge, but that users would benefit from learning the concepts and parameters to make the best images. Using the car analogy: some of us want to service and maintain our own vehicles, while others pull into the dealership any time a light comes on. Some people want to get from A to B with reliable, efficient transportation; others demand status and performance.
The key to user adoption and success in image generation seems to lie in the quality of outcomes. Right now many of the images being generated are so similar and corny. The color palettes, lighting, and textures often make you wince as a designer, and while the technology is mind-blowing, the deliverable is what the user cares about. If you spend time refining and experimenting with prompts you can fix and enhance that, but it's time consuming.