yujiepan/dreamshaper-8-lcm-openvino

This model applies the latent-consistency/lcm-lora-sdv1-5 LoRA to the base model Lykon/dreamshaper-8, and is exported to OpenVINO FP16 format.

Usage

from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

pipeline = OVStableDiffusionPipeline.from_pretrained(
    'yujiepan/dreamshaper-8-lcm-openvino',
    device='CPU',
)
prompt = 'cute dog typing at a laptop, 4k, details'
# LCM-distilled models need only a few steps; guidance_scale=1.0 disables classifier-free guidance
images = pipeline(prompt=prompt, num_inference_steps=8, guidance_scale=1.0).images

output image
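
The pipeline returns PIL images, so the result can be saved to disk directly; the filename below is just an illustration:

# save the first generated image (filename is arbitrary)
images[0].save('dog.png')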

Scripts

The model was generated with the following code:

import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

base_model_id = "Lykon/dreamshaper-8"
adapter_id = "latent-consistency/lcm-lora-sdv1-5"
save_torch_folder = './dreamshaper-8-lcm'
save_ov_folder = './dreamshaper-8-lcm-openvino'

# load the base model in fp16
torch_pipeline = AutoPipelineForText2Image.from_pretrained(
    base_model_id, torch_dtype=torch.float16, variant="fp16")
# swap in the LCM scheduler
torch_pipeline.scheduler = LCMScheduler.from_config(
    torch_pipeline.scheduler.config)
# load and fuse lcm lora
torch_pipeline.load_lora_weights(adapter_id)
torch_pipeline.fuse_lora()
torch_pipeline.save_pretrained(save_torch_folder)

# export the fused pipeline to OpenVINO and save the weights in FP16
ov_pipeline = OVStableDiffusionPipeline.from_pretrained(
    save_torch_folder,
    device='CPU',
    export=True,
)
ov_pipeline.half()
ov_pipeline.save_pretrained(save_ov_folder)
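
As a quick sanity check (the prompt and output filename below are only examples), the exported folder can be reloaded without export=True and used for inference:

# reload the exported OpenVINO pipeline and run a short LCM-style generation
check_pipeline = OVStableDiffusionPipeline.from_pretrained(save_ov_folder, device='CPU')
image = check_pipeline('a cozy cabin in the woods', num_inference_steps=8, guidance_scale=1.0).images[0]
image.save('check.png')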