
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

bloom-560m-RLHF-SD2-prompter - bnb 4bits
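This 4-bit build loads through transformers' bitsandbytes integration. A minimal sketch, assuming a hypothetical repo id (substitute this card's actual id):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical repo id for this 4-bit quantization -- substitute the real one
model_id = "RichardErkhov/bloom-560m-RLHF-SD2-prompter-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)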

Original model description:

license: bigscience-bloom-rail-1.0
tags:
  - stable-diffusion
  - diffusion
model-index:
  - name: bloom-560m-RLHF-SD2-prompter
    results: []
datasets:
  - Gustavosta/Stable-Diffusion-Prompts
widget:
  - text: "Prompt: "
inference:
  parameters:
    eos_token_id: 2
    max_length: 128
    do_sample: true

BLOOM-560m RLHF SD2 Prompter

COLAB DEMO INCLUDING STABLE DIFFUSION: https://colab.research.google.com/github/aicrumb/doohickey/blob/main/rlhf_prompt_tuner.ipynb

Using RLHF (Reinforcement Learning from Human Feedback) to fine-tune mrm8488/bloom-560m-finetuned-sd-prompts further for SD 2.0.

batch_size = 16
learning_rate = 0.001 # this is why I didn't have to spend _forever_ on it

To generate an extension, feed the model "<s>Prompt: " followed by whatever your normal prompt is.
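A minimal sketch of that call, wiring in the inference parameters from the metadata above (the seed text after the marker is just an example):

from transformers import pipeline

prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter")
# "<s>Prompt: " marker + your normal prompt; generation settings from the card metadata
out = prompt_pipe("<s>Prompt: cool landscape,", eos_token_id=2, max_length=128, do_sample=True)
print(out[0]["generated_text"])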

I did the ranking myself: I sat down and just ranked images for a long time. The model has gone through a couple of iterations. Only the biases and layernorm weights were trained (a minimal sketch of that setup follows below). The commit messages are a MESS. First iteration of this project
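For concreteness, here is a sketch of that bias-and-layernorm-only training setup, using the batch size and learning rate listed above. The parameter-name matching is an assumption about BLOOM's module naming, and the RLHF reward/ranking loop itself is not shown:

import torch
from transformers import AutoModelForCausalLM

# Start from the model this card fine-tunes further
model = AutoModelForCausalLM.from_pretrained("mrm8488/bloom-560m-finetuned-sd-prompts")

# Freeze everything except biases and layernorm parameters
# (matching on parameter names; an assumption about BLOOM's naming scheme)
for name, param in model.named_parameters():
    lname = name.lower()
    param.requires_grad = "bias" in lname or "layernorm" in lname or "ln_f" in lname

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"trainable params: {sum(p.numel() for p in trainable):,}")

# Hyperparameters from above; the human-feedback reward loop is omitted
optimizer = torch.optim.Adam(trainable, lr=0.001)
batch_size = 16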

Donate so I can do this on real hardware: https://github.com/aicrumb/aicrumb/blob/main/README.md

Example usage

# Install libraries needed to run the models
!pip install transformers diffusers accelerate -qq

# Import the libraries
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
from transformers import pipeline
import torch

# This is the model that the transformer was finetuned to generate prompts for
model_id = "stabilityai/stable-diffusion-2-base"

# Use the Euler scheduler here
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter")
prompt = "cool landscape"

# Auto-complete prompt
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
# Drop the "<s>Prompt:" marker from the front of the generated text
extended_prompt = extended_prompt[10:]
print("Prompt is now: ", extended_prompt)

# Generate image
image = pipe(extended_prompt).images[0]  

image.save("output.png")
image  # displays the image inline when run in a notebook

Prompt is now: cool landscape, concept art

Prompt is now: cool landscape, concept art, sharp focus, digital painting

Short additions, but they work, I guess (results vary).

It's also very good at generating prompts by itself, from just the "Prompt: " marker:

<s>Prompt: 1 0 th century, highly detailed, concept art, cinematic lighting, unreal engine, trending on artstation, artstation hd, artstation hq, very very detailed
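A minimal sketch of that from-scratch usage, reusing prompt_pipe from the example above (the max_length value here is an arbitrary choice):

# Only the "<s>Prompt: " marker is given; the model writes the whole prompt
result = prompt_pipe("<s>Prompt: ", do_sample=True, max_length=77)[0]["generated_text"]
print(result[10:])  # strip the marker, as in the example above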

Further testing is to be done in this area (automated training with aesthetic-prediction models, larger-scale collection of prompt-score data, better training in general).

Also, enjoy this graphic I had to make myself because I kept being indecisive about the reward methodology.
