thomaseding committed
Commit a111ac4 • Parent(s): b165a6f

Add some sample output images.
Files changed:
- .vscode/settings.json +1 -0
- README.md +5 -5
- example-outputs/20230703072154-ac847439-1971242433-159.png +0 -0
- example-outputs/20230703080528-2cd97aa9-281969310-81.png +0 -0
- example-outputs/20230703083247-03a86837-2016245641-162.png +0 -0
- example-outputs/20230703083502-89f714b7-2908299568-164.png +0 -0
- example-outputs/20230703084403-676f178a-2876279387-179.png +0 -0
- example-outputs/20230703090906-d25393c4-975451197-382.png +0 -0
- example-outputs/20230703091138-52a6c75e-1803940919-416.png +0 -0
- example-outputs/20230703091940-d7d11138-2383291623-524.png +0 -0
- example-outputs/20230703095949-1d6b459b-4204982160-992.png +0 -0
- example-outputs/20230703095954-9b71d84a-2379864561-993.png +0 -0
- example-outputs/20230703095959-4493838f-3859779054-994.png +0 -0
- example-outputs/20230703100231-9418c58f-917667143-1028.png +0 -0
- example-outputs/20230703102437-64a98cdc-3566720748-1259.png +0 -0
- example-outputs/20230703110732-89f8699a-4159171053-1505.png +0 -0
- example-outputs/20230703110853-53a881df-1267811582-1513.png +0 -0
- example-outputs/20230703111229-4200383b-4066916164-1526.png +0 -0
.vscode/settings.json CHANGED
@@ -2,6 +2,7 @@
     "cSpell.words": [
         "controlnet",
         "creativeml",
+        "Eding",
         "loras",
         "openrail",
         "safetensors",
README.md CHANGED
@@ -2,15 +2,15 @@
 license: creativeml-openrail-m
 ---
 
-#
+# PixelNet (Thomas Eding)
 
-### About
+### About:
 
-
+PixelNet is a ControlNet model for Stable Diffusion. It takes a checkerboard image as input, which is used to control where logical pixels are to be placed.
 
 This is currently an experimental proof of concept. I trained it on around 2000 pixel-art/pixelated images that I generated with Stable Diffusion (with a lot of cleanup and manual curation). The model is not very good, but it does work on grid sizes of up to about 64 checker "pixels" per side for square generations. I did find that a 128x64 pattern still seemed to work moderately well for a 1024x512 image.
 
-### Usage
+### Usage:
 
 To install, copy the `.safetensors` and `.yaml` files to your Automatic1111 ControlNet extension's model directory (e.g. `sd-webui-controlnet/models`).
 
@@ -18,7 +18,7 @@ There is no preprocessor. Instead, supply a black and white checkerboard image a
 
 The script `gen_checker.py` can be used to generate checkerboard images of arbitrary sizes.
 
-### FAQ
+### FAQ:
 
 Q: Why is this needed? Can't I use a post-processor to downscale the image?
 A: From my experience, SD has a hard time creating genuine pixel art (even with dedicated base models and loras): pixel sizes end up mismatched, edges come out as smooth curves, and so on. What appears to be a straight line at a glance might actually bend around, which can cause post-processors to create artifacts when quantization rounds a pixel to a position one pixel off in some direction. This model is intended to fix that.
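For reference, a black-and-white checkerboard control image of the kind the README describes can be produced with a few lines of Pillow. The sketch below is only an illustration: the function name `make_checkerboard` and the parameters `cells_x`/`cells_y` are made up here, and the actual `gen_checker.py` in the repository may use a different interface.

```python
# Minimal sketch of a checkerboard generator (assumed Pillow-based);
# not the repository's gen_checker.py, whose interface may differ.
from PIL import Image

def make_checkerboard(width: int, height: int, cells_x: int, cells_y: int) -> Image.Image:
    """Build a black-and-white checkerboard of width x height pixels,
    divided into cells_x by cells_y checker cells ("logical pixels")."""
    img = Image.new("L", (width, height), color=0)
    px = img.load()
    for y in range(height):
        cy = y * cells_y // height          # checker row this pixel falls in
        for x in range(width):
            cx = x * cells_x // width       # checker column this pixel falls in
            px[x, y] = 255 if (cx + cy) % 2 == 0 else 0
    return img

if __name__ == "__main__":
    # Example from the README: a 128x64 checker pattern for a 1024x512 generation.
    make_checkerboard(1024, 512, 128, 64).save("checkerboard-1024x512-128x64.png")
```

The saved image would then be supplied directly as the ControlNet control image, with no preprocessor, as described in the README above.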
example-outputs/20230703072154-ac847439-1971242433-159.png ADDED
example-outputs/20230703080528-2cd97aa9-281969310-81.png ADDED
example-outputs/20230703083247-03a86837-2016245641-162.png ADDED
example-outputs/20230703083502-89f714b7-2908299568-164.png ADDED
example-outputs/20230703084403-676f178a-2876279387-179.png ADDED
example-outputs/20230703090906-d25393c4-975451197-382.png ADDED
example-outputs/20230703091138-52a6c75e-1803940919-416.png ADDED
example-outputs/20230703091940-d7d11138-2383291623-524.png ADDED
example-outputs/20230703095949-1d6b459b-4204982160-992.png ADDED
example-outputs/20230703095954-9b71d84a-2379864561-993.png ADDED
example-outputs/20230703095959-4493838f-3859779054-994.png ADDED
example-outputs/20230703100231-9418c58f-917667143-1028.png ADDED
example-outputs/20230703102437-64a98cdc-3566720748-1259.png ADDED
example-outputs/20230703110732-89f8699a-4159171053-1505.png ADDED
example-outputs/20230703110853-53a881df-1267811582-1513.png ADDED
example-outputs/20230703111229-4200383b-4066916164-1526.png ADDED