
UI-Diffuser-V2

UI-Diffuser-V2 is fine-tuned from "stabilityai/stable-diffusion-2-base" on the SCapRepo dataset for mobile UI generation.

A demo using a diffusion model and a large language model for UI generation is available at https://github.com/Jl-wei/ai-gen-ui.

Using with Diffusers

import torch
import matplotlib.pyplot as plt
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Load the Stable Diffusion 2 base model with an Euler discrete scheduler
model_id = "stabilityai/stable-diffusion-2-base"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)

# Load the UI-Diffuser-V2 LoRA weights on top of the base model
lora_path = "Jl-wei/ui-diffuser-v2"
pipe.load_lora_weights(lora_path)
pipe.to("cuda")

# Generate 10 candidate UIs at 288x512 (a 9:16 portrait aspect ratio)
prompt = "Mobile app: health monitoring report"
images = pipe(prompt, num_inference_steps=30, guidance_scale=7.5, height=512, width=288, num_images_per_prompt=10).images

# Display the generated UIs in a 2x5 grid
columns = 5
fig = plt.figure(figsize=(20, 10))
for i, image in enumerate(images):
    plt.subplot(int(len(images) / columns), columns, i + 1)
    plt.imshow(image)
for ax in fig.axes:
    ax.axis("off")
plt.show()
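The images returned by the pipeline are standard PIL images, so they can also be written to disk instead of (or in addition to) being displayed. The output directory and filename pattern below are only illustrative; a minimal sketch:

from pathlib import Path

# Hypothetical output folder and naming scheme; adjust as needed
out_dir = Path("generated_uis")
out_dir.mkdir(exist_ok=True)

for i, image in enumerate(images):
    image.save(out_dir / f"ui_{i:02d}.png")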

Citation

If you find our work useful, please cite our paper:

@misc{wei2024aiinspired,
      title={On AI-Inspired UI-Design}, 
      author={Jialiang Wei and Anne-Lise Courbis and Thomas Lambolais and Gérard Dray and Walid Maalej},
      year={2024},
      eprint={2406.13631},
      archivePrefix={arXiv}
}

Please note that the code and model can only be used for academic purposes.

UI-Diffuser-V1

This model, UI-Diffuser-V2, represents the second version of the UI-Diffuser model.

The initial version, UI-Diffuser-V1, was introduced in our paper titled Boosting GUI Prototyping with Diffusion Models.
