Quantization made by Richard Erkhov.

Github | Discord | Request more models

bloom-560m-RLHF-SD2-prompter-aesthetic - bnb 4bits
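
This repo carries the 4-bit bitsandbytes quantization of the model described below. As a minimal loading sketch (an assumption, not part of the original card: it quantizes the original full-precision checkpoint on the fly with a recent transformers + bitsandbytes install):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization with fp16 compute
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained("crumb/bloom-560m-RLHF-SD2-prompter-aesthetic")
model = AutoModelForCausalLM.from_pretrained(
    "crumb/bloom-560m-RLHF-SD2-prompter-aesthetic",
    quantization_config=bnb_config,
    device_map="auto",
)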

Original model description:

license: bigscience-bloom-rail-1.0
tags:
  - stable-diffusion
  - diffusion
model-index:
  - name: bloom-560m-RLHF-SD2-prompter
    results: []
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
widget:
  - text: "Prompt: "
inference:
  parameters:
    eos_token_id: 2
    max_length: 128
    do_sample: true
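
The inference block above maps directly onto generation keyword arguments for the transformers text-generation pipeline; a minimal sketch of the equivalent call (using the original, unquantized checkpoint id):

from transformers import pipeline

# Same settings as the inference metadata: sample up to 128 tokens, stop at eos id 2
generator = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter-aesthetic")
out = generator("Prompt: ", eos_token_id=2, max_length=128, do_sample=True)
print(out[0]["generated_text"])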

The RAT (RLHF-Aesthetic Tuned model for prompt synthesis)

COLAB DEMO INCLUDING STABLE DIFFUSION: https://colab.research.google.com/github/aicrumb/doohickey/blob/main/rlhf_prompt_tuner.ipynb

This is a further finetuned version of crumb/bloom-560m-RLHF-SD2-prompter, optimized for aesthetic score using the models from https://github.com/crowsonkb/simulacra-aesthetic-models instead of hand-scoring each image. Conceptually, the aesthetic-reward finetuning looks like the sketch after this paragraph.
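
This is an illustrative reconstruction, not the actual training code: it assumes a pre-0.12 trl-style PPO API, uses batch size 1 for brevity (the run described below used 32), and aesthetic_score is a hypothetical stand-in for the simulacra aesthetic models.

import torch
from transformers import AutoTokenizer
from diffusers import StableDiffusionPipeline
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead

model_name = "crumb/bloom-560m-RLHF-SD2-prompter"          # starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)

# batch_size=1 keeps the sketch short; the run described below used bs=32, lr=1e-4
config = PPOConfig(model_name=model_name, learning_rate=1e-4, batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)
device = ppo_trainer.accelerator.device

sd_pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

def aesthetic_score(image):
    # Hypothetical stand-in: embed the image with CLIP and score it with an
    # aesthetic head from https://github.com/crowsonkb/simulacra-aesthetic-models
    raise NotImplementedError

query = tokenizer("<s>Prompt: cool landscape,", return_tensors="pt").input_ids[0].to(device)
for _ in range(100):                                        # number of PPO steps is illustrative
    output = model.generate(query.unsqueeze(0), do_sample=True, max_new_tokens=32)[0]
    response = output[query.shape[0]:]                      # generated continuation only
    prompt_text = tokenizer.decode(output, skip_special_tokens=True)
    image = sd_pipe(prompt_text).images[0]                  # render the prompt with SD2
    reward = torch.tensor(float(aesthetic_score(image)))    # higher aesthetic score = higher reward
    ppo_trainer.step([query], [response], [reward])         # PPO update on the prompter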

Donate so I can do this on real hardware: https://github.com/aicrumb/aicrumb/blob/main/README.md

Trained at batch size 32 and learning rate 1e-4, tuning only the biases and LayerNorm weights. A sketch of that parameter selection follows.
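
For reference, a sketch of the bias-and-LayerNorm-only setup in plain PyTorch (the optimizer choice and module-name matching are assumptions, not taken from the actual training script):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("crumb/bloom-560m-RLHF-SD2-prompter")

# Freeze everything except biases and LayerNorm parameters
trainable = []
for name, param in model.named_parameters():
    if name.endswith(".bias") or "layernorm" in name.lower() or name.startswith("transformer.ln_f"):
        param.requires_grad = True
        trainable.append(param)
    else:
        param.requires_grad = False

# lr=1e-4 as stated above; AdamW is an assumed choice
optimizer = torch.optim.AdamW(trainable, lr=1e-4)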

Example usage

# Install libraries needed to run the models
!pip install transformers diffusers accelerate -qq

# Import the libraries
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
from transformers import pipeline
import torch

# This is the model that the transformer was finetuned to generate prompts for
model_id = "stabilityai/stable-diffusion-2-base"

# Use the Euler scheduler here
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter-aesthetic")
prompt = "cool landscape"

# Auto-complete the prompt: the prompter was trained on text of the form "<s>Prompt: ..."
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
# Strip the leading "<s>Prompt:" so only the prompt text is passed to Stable Diffusion
extended_prompt = extended_prompt[10:]
print("Prompt is now: ", extended_prompt)

# Generate the image and save it
image = pipe(extended_prompt).images[0]

image.save("output.png")
image  # display inline in a notebook

Limitations

Aesthetic scoring models have been shown to have very large biases; one I noticed is that the scorer strongly favors images of women regardless of actual quality, so that subject ended up being optimized for more than others.

It also fell into the usual trap of RLHF models and gets somewhat same-ey, so if you don't like the general "stable diffusion, trending on artstation" look, this might not be for you.
