new_version: XLabs-AI/flux-ip-adapter
pipeline_tag: image-to-image
library_name: diffusers
---

![Banner Picture 1](assets/banner-dark.png?raw=true)
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
![Mona Anime Workflow 1](assets/mona_workflow.jpg?raw=true)

This repository provides an IP-Adapter checkpoint for the [FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.

See [our GitHub repository](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.

# Models
IP-Adapter was trained for 50k steps at 512x512 resolution and 25k steps at 1024x1024, and it works at both resolutions. Training is ongoing; we release new checkpoints regularly, so stay tuned.
We have released the **v1 version**, which can be used directly in ComfyUI!

Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).

# Examples

See example results from our models below.
Some generation results, together with their input images, are also provided in "Files and versions".

# Inference

To try our models, you have two options:
1. Use main.py from our [official repo](https://github.com/XLabs-AI/x-flux)
2. Use our custom nodes for ComfyUI and test them with the provided workflows (see the /workflows folder)

## Instructions for ComfyUI
1. Go to ComfyUI/custom_nodes.
2. Clone [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui.git); the path should be ComfyUI/custom_nodes/x-flux-comfyui/*, where * is all the files in this repo.
3. Go to ComfyUI/custom_nodes/x-flux-comfyui/ and run `python setup.py`.
4. Update x-flux-comfyui with `git pull`, or reinstall it.
5. Download the CLIP-L `model.safetensors` from [OpenAI ViT-L CLIP](https://huggingface.co/openai/clip-vit-large-patch14) and put it in `ComfyUI/models/clip_vision/*`.
6. Download our IP-Adapter from [Hugging Face](https://huggingface.co/XLabs-AI/flux-ip-adapter/tree/main) and put it in `ComfyUI/models/xlabs/ipadapters/*`.
7. Use the `Flux Load IPAdapter` and `Apply Flux IPAdapter` nodes, choose the right CLIP model, and enjoy your generations.
8. You can find an example workflow in the workflows folder of this repo.

If you get bad results, try setting `true_gs=2`.
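For context, `true_gs` is a true classifier-free guidance scale: rather than relying only on FLUX's distilled guidance input, the sampler runs a conditional and an unconditional prediction and blends them. A minimal sketch of that blend (variable names are illustrative, not the repo's actual code):

```python
def true_cfg(cond_pred, uncond_pred, true_gs):
    """Blend conditional and unconditional model predictions.

    true_gs = 1.0 returns the conditional prediction unchanged;
    values > 1.0 push the output further toward the condition.
    """
    return [u + true_gs * (c - u) for c, u in zip(cond_pred, uncond_pred)]

# With true_gs=2 the conditional direction is amplified:
blended = true_cfg([1.0, 3.0], [0.0, 1.0], true_gs=2.0)
```

Raising `true_gs` makes the output follow the prompt and reference more strongly, at the cost of some diversity.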
### Limitations
The IP-Adapter is currently in beta.
We do not guarantee good results on the first try; it may take several attempts to reach a satisfying result.
![Example Picture 2](assets/ip_adapter_example2.png?raw=true)
![Example Picture 1](assets/ip_adapter_example1.png?raw=true)
## License
Our weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.