---
license: other
license_name: bria-2.3
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
inference: false
tags:
- text-to-image
- controlnet model
- legal liability
- commercial use
extra_gated_prompt: >-
The model weights from BRIA AI can be obtained after a commercial license is
agreed upon. Fill in the form below and we will reach out to you.
extra_gated_fields:
Name: text
Company/Org name: text
Org Type (Early/Growth Startup, Enterprise, Academy): text
Role: text
Country: text
Email: text
By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox
---

# BRIA 2.3 ControlNet Pose Model Card
BRIA 2.3 ControlNet-Pose, trained on top of BRIA 2.3 Text-to-Image, enables the generation of high-quality images guided by a textual prompt and the estimated human pose of an input image. This makes it possible to create different variations of an image that all share the same human pose.
BRIA 2.3 was trained from scratch exclusively on licensed data from our esteemed data partners. It is therefore safe for commercial use and provides full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
Join our Discord community for more information, tutorials, tools, and to connect with other users!
## Model Description

- **Developed by:** BRIA AI
- **Model type:** ControlNet for latent diffusion
- **License:** bria-2.3
- **Model Description:** ControlNet-Pose for the BRIA 2.3 Text-to-Image model. The model generates images guided by a textual prompt and a pose estimation image extracted from the conditioning image.
- **Resources for more information:** BRIA AI
## Get Access
BRIA 2.3 ControlNet-Pose requires access to BRIA 2.3 Text-to-Image. For more information, click here.
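Once your access request has been approved, you typically also need to authenticate with the Hugging Face Hub so the gated weights can be downloaded. A minimal sketch, assuming you use the `huggingface_hub` client (the token string is a placeholder for your own access token):

```python
from huggingface_hub import login

# Authenticate so the gated briaai/BRIA-2.3 and briaai/BRIA-2.3-ControlNet-Pose
# repositories can be downloaded. Replace the placeholder with your own token.
login(token="hf_xxx")
```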
## Code example using Diffusers
- Install `controlnet_aux`:

```bash
pip install controlnet_aux
```

- Install `diffusers` and related packages:

```bash
pip install diffusers transformers accelerate
```
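If you want to confirm the environment is set up before running the example, a quick sanity check like the sketch below can help (purely illustrative; the printed versions depend on your install):

```python
# Sanity check: confirm the key libraries import and print their versions.
import controlnet_aux  # imported only to confirm the package is installed
import diffusers
import transformers
import torch

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
```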
Then run the pipeline:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import OpenposeDetector

# Load the ControlNet-Pose weights and the BRIA 2.3 base pipeline
controlnet = ControlNetModel.from_pretrained(
    "briaai/BRIA-2.3-ControlNet-Pose",
    torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "briaai/BRIA-2.3",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompt = "Two kids in bright orange jackets play near a blue tent in a forest with silver-leafed trees.,photography"
negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers"

# Compute the pose conditioning image from the input image
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
image = load_image("https://huggingface.co/briaai/BRIA-2.3-ControlNet-Pose/resolve/main/test_image.jpg")
pose_image = openpose(image, include_body=True, include_hand=True, include_face=True)
if isinstance(pose_image, tuple):
    pose_image = pose_image[0]

# Generate an image guided by the prompt and the extracted pose
generated_image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=pose_image,
    controlnet_conditioning_scale=1.0,
    height=1024,
    width=1024,
).images[0]
```
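The pipeline returns standard PIL images, so the result can be saved or post-processed as usual; for example (the output filename is arbitrary):

```python
# Save the generated image to disk; the filename is arbitrary.
generated_image.save("bria_controlnet_pose_result.png")
```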