Update README.md
README.md CHANGED

```diff
@@ -70,7 +70,7 @@ widget:
 ### **review**
 - `q2_k` gguf is super fast but not usable; keep it for testing only
 - surprisingly `0.9_fp8_e4m3fn` and `0.9-vae_fp8_e4m3fn` are working pretty good
-- mix-and-match possible; you could mix up using the vae(s) available with different model file(s) here; test which combination works
+- mix-and-match possible; you could mix up using the vae(s) available with different model file(s) here; test which combination works best
 - **gguf-node** is available (see details [here](https://github.com/calcuis/gguf)) for running the new features (the point below might not be directly related to the model)
 - you are able to make your own `fp8_e4m3fn` scaled safetensors and/or convert it to **gguf** with the new node via comfyui

```
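The last bullet in the hunk mentions producing your own `fp8_e4m3fn` safetensors before converting to **gguf** via the node in ComfyUI. Below is a minimal, hypothetical sketch of just the first step: a plain (unscaled) fp8 cast with PyTorch and safetensors. The file names and the cast-only-floating-point rule are assumptions, not anything from this repo, and the "scaled" fp8 variant plus the gguf export are handled by the gguf-node workflow inside ComfyUI rather than by this script.

```python
# Hypothetical sketch, not part of this repo: cast a safetensors checkpoint
# to plain (unscaled) fp8_e4m3fn with PyTorch before loading it in ComfyUI.
# File names below are placeholders; the "scaled" fp8 variant and the gguf
# conversion itself are done by the gguf-node workflow inside ComfyUI.
import torch
from safetensors.torch import load_file, save_file

SRC = "model-0.9.safetensors"              # placeholder input checkpoint
DST = "model-0.9_fp8_e4m3fn.safetensors"   # placeholder output name

state = load_file(SRC)
converted = {}
for name, tensor in state.items():
    # cast only floating-point weights; leave integer/bool tensors untouched
    if tensor.is_floating_point():
        converted[name] = tensor.to(torch.float8_e4m3fn)
    else:
        converted[name] = tensor

save_file(converted, DST)
```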