Jlonge4 committed on
Commit 5e03147 · verified · 1 Parent(s): 6186112

Update README.md

Files changed (1)
  1. README.md +63 -143
README.md CHANGED
@@ -2,197 +2,117 @@
  library_name: diffusers
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

- ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

- ### Compute Infrastructure

- [More Information Needed]

  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]

  library_name: diffusers
  ---

+ # Model Card for Flux Dev FP8 Pipeline

+ This diffusers pipeline is a complete implementation of the Flux Dev 1 model optimized for FP8 precision, designed for deployment on AWS SageMaker and integration with ComfyUI through a custom node. Unlike a model-only repository, it contains the full pipeline architecture required for proper deployment.

  ## Model Details

+ ### Pipeline Description

+ This repository contains the complete diffusers pipeline, not just the model weights, built around Flux Dev 1 quantized to FP8 precision. It provides the entire pipeline architecture required for text-to-image generation, properly structured for SageMaker deployment and ComfyUI integration.

+ The pipeline was created to address the lack of publicly available FP8 Flux Dev implementations suitable for deployment, as standard approaches to model loading create symlink issues that prevent proper SageMaker packaging.

+ - **Developed by:** Jlonge4
+ - **Pipeline type:** Complete Text-to-Image Diffusion Pipeline (FP8 Optimized)
+ - **Model component:** Flux Dev 1 quantized to FP8
+ - **License:** Same as the original Flux Dev 1 model
+ - **Base model:** [Comfy-Org/flux1-dev](https://huggingface.co/Comfy-Org/flux1-dev)

+ ### Model Sources

+ - **Pipeline:** [Jlonge4/flux-dev-fp8](https://huggingface.co/Jlonge4/flux-dev-fp8)
+ - **Original Model:** [Comfy-Org/flux1-dev](https://huggingface.co/Comfy-Org/flux1-dev)

  ## Uses

  ### Direct Use

+ This model is designed to be deployed as a SageMaker endpoint and accessed through a custom ComfyUI node. It enables text-to-image generation directly within ComfyUI workflows while leveraging the performance benefits of FP8 precision and AWS infrastructure.

+ Primary use cases include:
+ - Text-to-image generation within ComfyUI
+ - Faster inference for the Flux Dev 1 model
+ - Integration with existing ComfyUI workflows

+ ### Downstream Use

+ The model can be integrated into larger creative pipelines within ComfyUI, serving as the text-to-image generation component that can be combined with other image processing and enhancement nodes.

  ### Out-of-Scope Use

+ This model is not designed for:
+ - Deployment outside of AWS SageMaker
+ - Use cases requiring precision higher than FP8
+ - Applications requiring inference times longer than SageMaker's 60-second invocation timeout

  ### Recommendations

+ - Deploy on the recommended g5.8xlarge instance type for optimal performance
+ - Design prompts with the 60-second inference timeout in mind
+ - Monitor inference performance and image quality to catch any precision-related issues
+ - Follow best practices for prompt engineering with Flux models

+ ## How to Get Started with the Pipeline

+ ### Deployment on SageMaker

+ 1. Clone the repository: [ComfyUI Sagemaker Node](https://github.com/jlonge4/custom_comfy_ui/tree/main)
+ 2. Run the `deploy_flux_dev-pipe.ipynb` notebook to deploy the model to SageMaker (a minimal sketch of the deployment call follows this list)
+ 3. Use a g5.8xlarge instance for deployment (recommended)
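
+ A minimal sketch of the deployment call that such a notebook typically wraps, using the SageMaker Python SDK (the S3 path, framework versions, and endpoint details below are illustrative assumptions, not values taken from `deploy_flux_dev-pipe.ipynb`):

+ ```python
+ import sagemaker
+ from sagemaker.huggingface import HuggingFaceModel
+
+ # Assumed S3 location of the packaged pipeline; replace with your own bucket/key.
+ model_data = "s3://<your-bucket>/flux-dev-fp8/model.tar.gz"
+ role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions
+
+ # Framework versions are placeholders; match them to the notebook / your container image.
+ flux_model = HuggingFaceModel(
+     model_data=model_data,
+     role=role,
+     transformers_version="4.37",
+     pytorch_version="2.1",
+     py_version="py310",
+ )
+
+ # SageMaker instance types carry the "ml." prefix; g5.8xlarge is the card's recommendation.
+ predictor = flux_model.deploy(
+     initial_instance_count=1,
+     instance_type="ml.g5.8xlarge",
+ )
+ print(predictor.endpoint_name)
+ ```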

+ ### Integration with ComfyUI

+ The custom node lives in the [ComfyUI Sagemaker Node repo](https://github.com/jlonge4/custom_comfy_ui/tree/main).

+ 1. Install the SageMaker custom node in your ComfyUI environment
+ 2. Find the "Text2Image" node in the node browser
+ 3. Connect the node to your workflow, providing a text prompt and optional parameters
+ 4. The node will communicate with your SageMaker endpoint to generate images (see the invocation sketch after this list)
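
+ Under the hood, that call is a standard SageMaker runtime invocation. A minimal sketch with boto3 (the endpoint name and the payload/response fields are illustrative assumptions; the real schema is defined by the custom node and the pipeline's inference handler):

+ ```python
+ import base64
+ import json
+ import boto3
+
+ runtime = boto3.client("sagemaker-runtime")
+
+ # Hypothetical request payload; field names depend on the inference handler.
+ payload = {
+     "prompt": "a cinematic photo of a lighthouse at dawn",
+     "num_inference_steps": 28,
+     "guidance_scale": 3.5,
+ }
+
+ response = runtime.invoke_endpoint(
+     EndpointName="flux-dev-fp8-endpoint",  # assumed endpoint name
+     ContentType="application/json",
+     Body=json.dumps(payload),
+ )
+
+ # Assuming the handler returns a base64-encoded image in a JSON body.
+ result = json.loads(response["Body"].read())
+ with open("flux_output.png", "wb") as f:
+     f.write(base64.b64decode(result["image"]))
+ ```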

  #### Factors

+ - Generation quality compared to the original model
+ - Inference speed
+ - Memory usage
+ - Deployment reliability on SageMaker

+ ## Technical Specifications

  ### Model Architecture and Objective

+ This model uses the same architecture as Flux Dev 1 but operates at FP8 precision. It addresses specific technical challenges:

+ 1. The standard HuggingFace diffusers `from_single_file` download creates symlinks that render the `model.tar.gz` unusable for SageMaker deployment
+ 2. A complete custom pipeline was created to package the model properly for SageMaker (see the packaging sketch after this list)
+ 3. Special handling for FP8 precision throughout the pipeline ensures optimal performance
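
+ One way to sidestep the symlink problem is to re-save the fully loaded pipeline to a local directory and archive it with symlinks dereferenced. A minimal sketch (the load call and FP8/dtype handling are simplified assumptions, not the exact packaging script used for this repository):

+ ```python
+ import tarfile
+ from pathlib import Path
+ from diffusers import FluxPipeline
+
+ # Re-saving the pipeline writes every component out as a regular file, so the
+ # output directory does not point back into the symlinked hub cache.
+ # (FP8/quantization handling is omitted in this sketch.)
+ pipe = FluxPipeline.from_pretrained("Jlonge4/flux-dev-fp8")
+ out_dir = Path("flux-dev-fp8-pipeline")
+ pipe.save_pretrained(out_dir)
+
+ # Build model.tar.gz for SageMaker; dereference=True archives the files that
+ # any remaining symlinks point to instead of the links themselves.
+ with tarfile.open("model.tar.gz", "w:gz", dereference=True) as tar:
+     for item in sorted(out_dir.iterdir()):
+         tar.add(item, arcname=item.name)
+ ```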

+ ### Compute Infrastructure

  #### Hardware

+ - Recommended: AWS g5.8xlarge GPU instance
+ - Minimum: GPU with 16GB+ VRAM supporting FP8 operations

  #### Software

+ - AWS SageMaker
+ - HuggingFace Diffusers
+ - ComfyUI with the custom SageMaker node

+ ## More Information

+ This complete pipeline implementation was created specifically to address the challenges of deploying Flux Dev 1 to SageMaker in FP8 format for ComfyUI integration.

+ Key innovations include:
+ 1. A full pipeline structure with all necessary components properly organized for SageMaker deployment
+ 2. Avoidance of the symlink issues that typically occur when using HuggingFace's `from_single_file` download option
+ 3. A packaging methodology that ensures the `model.tar.gz` works correctly when deployed as an endpoint
+ 4. Complete integration with SageMaker endpoint infrastructure through the ComfyUI custom node

+ For updates and improvements, check the repository at [Jlonge4/flux-dev-fp8](https://huggingface.co/Jlonge4/flux-dev-fp8).

  ## Model Card Contact

+ For questions or issues regarding this model, please open an issue on the HuggingFace repository.