Commit 24976a8 · Parent(s): 10f7ab2
Clean up tabs

app.py CHANGED
@@ -401,7 +401,7 @@ with gr.Blocks() as noise_steps:
 gr.Markdown('''
 # π§βπ Noise Steps Loader
 
-This tool
+This tool doesn't exist yet!
 
 When it's built out though the plan is for it to let you expose the de-noising process that helps define Stable Diffusion image generation.
 
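The Noise Steps tab above is still a stub, but the de-noising idea it describes can be prototyped with the step callback that diffusers pipelines expose. A minimal sketch, assuming the `runwayml/stable-diffusion-v1-5` checkpoint and the older `callback`/`callback_steps` hook (newer diffusers releases use `callback_on_step_end` instead); none of this is in the committed app.py:

```python
import torch
from diffusers import StableDiffusionPipeline

# Public SD 1.5 checkpoint (an assumption, not necessarily what this Space uses).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

intermediate_images = []

def capture_step(step, timestep, latents):
    # Decode the current latents so each de-noising step can be shown later, e.g. in a gr.Gallery.
    with torch.no_grad():
        image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
    image = (image / 2 + 0.5).clamp(0, 1)
    intermediate_images.append(
        pipe.numpy_to_pil(image.cpu().permute(0, 2, 3, 1).float().numpy())[0]
    )

# Capture every 5th step; older diffusers versions expose this via callback / callback_steps.
result = pipe("an astronaut riding a horse", callback=capture_step, callback_steps=5)
intermediate_images.append(result.images[0])  # final denoised image
```

The captured frames could then be handed to a gallery component in the Noise Steps tab.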
@@ -433,7 +433,7 @@ with gr.Blocks() as depth_map:
 gr.Markdown('''
 # π§βπ Depth Map Processor
 
-This tool
+This tool doesn't exist yet! When it's built it will let you input any image from your phone or computer and process it into a depth map image using a Stable Diffusion control net process. Hopefully it'll be working soon. Check out the example below for now!
 


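As a reference for the Depth Map Processor described above, one way it might be prototyped (an assumption on the editor's part, not the Space's stated plan) is the transformers depth-estimation pipeline wired to a small Gradio block; the model id below is an assumed choice:

```python
import gradio as gr
from transformers import pipeline

# DPT-style monocular depth estimation (assumed model, not taken from the commit).
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

def to_depth_map(image):
    # The pipeline returns a dict; "depth" is a grayscale PIL depth image.
    return depth_estimator(image)["depth"]

with gr.Blocks() as depth_map_demo:
    gr.Markdown("# Depth Map Processor (sketch)")
    inp = gr.Image(type="pil", label="Input photo")
    out = gr.Image(label="Estimated depth map")
    gr.Button("Process").click(to_depth_map, inputs=inp, outputs=out)
```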
@@ -450,7 +450,7 @@ with gr.Blocks() as control_net:
 gr.Markdown('''
 # π§βπ Control Net Processor
 
-This tool
+This tool doesn't exist yet! When it's built it will let you input any image from your phone or computer and process it using any text prompt or combination of trained artist styles / concepts. If you want to play with normal control net without ahx artist concepts check out the link below. [control net](https://huggingface.co/spaces/hysts/ControlNet)
 
 ''')
 
 
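For the Control Net Processor stub, the linked hysts/ControlNet Space illustrates the target behaviour; below is a hedged diffusers sketch of depth-conditioned generation. The checkpoints, prompt, and input file name are illustrative assumptions, not the app's code:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth-conditioned ControlNet (assumed checkpoints; the Space could equally use canny, pose, etc.).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Conditioning image: assumed to be a depth map such as the one produced by the sketch above.
depth_image = Image.open("depth.png").convert("RGB")

result = pipe(
    "a portrait in the style of a trained artist concept",  # prompt could include learned tokens
    image=depth_image,
    num_inference_steps=30,
).images[0]
result.save("controlnet_output.png")
```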
@@ -464,7 +464,9 @@ with gr.Blocks() as inversion:
 gr.Markdown('''
 # π§βπ Concept Trainer
 
-
+[textual inversion training](https://textual-inversion.github.io/static/images/training/training.JPG)
+
+This external tool lets you train your own new models / concepts from any images you want that will appear automatically be added to the Beta Concepts and Advanced Prompting tabs!
 
 For now the tool lives on Google Colab, which is Google's free tool for using their GPU's. Someday it might live here on our Hugging Face Space, but the process is a little too demanding for our current resources. To train your own concept visit the link below and follow the instructions and be prepared to wait several hours.
 
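The Concept Trainer text points to an external Colab for textual-inversion training. For context, once a concept has been trained and pushed to the Hub it can be pulled back into a pipeline as sketched below; the repo id and placeholder token are the standard diffusers documentation example, not one of the app's ahx concepts:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned textual-inversion embedding from the Hub (illustrative repo id).
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The concept's placeholder token can then be used directly in prompts.
image = pipe("a painting of a <cat-toy> in a forest").images[0]
image.save("concept_sample.png")
```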
@@ -485,7 +487,7 @@ with gr.Blocks() as dream_booth:
 gr.Markdown('''
 # π§βπ Dream Booth Concept Trainer
 
-This tool
+This tool doesn't exist yet! When it's built it will let you train concepts using a process distinct from our current concept training tool which uses textual inversion training. To read more about Dream Booth check out the link below!
 
 [dream booth](https://huggingface.co/spaces/multimodalart/dreambooth-training)
 ''')
@@ -496,5 +498,6 @@ with gr.Blocks() as dream_booth:
 # -----------------------------------------------------------------------------------------------
 
 
-tabbed_interface = gr.TabbedInterface([new_welcome, dropdown_tab, beta, inversion, noise_steps, depth_map, control_net, dream_booth], ["Welcome!", "Advanced Prompting", "Beta Concepts", "Concept Trainer", "Noise Steps", "Depth Map", "Control Net", "Dream Booth"])
+# tabbed_interface = gr.TabbedInterface([new_welcome, dropdown_tab, beta, inversion, noise_steps, depth_map, control_net, dream_booth], ["Welcome!", "Advanced Prompting", "Beta Concepts", "Concept Trainer", "Noise Steps", "Depth Map", "Control Net", "Dream Booth"])
+tabbed_interface = gr.TabbedInterface([new_welcome, dropdown_tab, beta, inversion, noise_steps, control_net], ["Welcome!", "Advanced Prompting", "Beta Concepts", "Concept Trainer", "Noise Steps", "Depth Map", "Control Net", "Dream Booth"])
 tabbed_interface.launch()
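This last hunk is the actual "Clean up tabs" change: the old `gr.TabbedInterface` call is commented out and re-issued without `depth_map` and `dream_booth` in the interface list, while the eight tab names are left unchanged. For reference, a minimal sketch of how `gr.TabbedInterface` pairs Blocks with tab names (the two placeholder tabs are hypothetical, not from the app):

```python
import gradio as gr

# Two tiny placeholder tabs; the real app builds new_welcome, dropdown_tab, etc. the same way.
with gr.Blocks() as welcome:
    gr.Markdown("Welcome!")

with gr.Blocks() as trainer:
    gr.Markdown("Concept Trainer")

# Each Blocks object is paired positionally with a tab name,
# so the two lists are normally kept the same length.
tabbed_interface = gr.TabbedInterface([welcome, trainer], ["Welcome!", "Concept Trainer"])

if __name__ == "__main__":
    tabbed_interface.launch()
```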