You got a sub out of me! This is an excellent outpainting workflow - one of the better ones I've seen. It might be a little tricky to set up for an amateur or novice (you might want to add a small warning for people), but I've been working with ComfyUI for well over a year now, and it only took a couple of minutes of pointing it at the proper models. I also had to change my ComfyUI/temp folder permissions, as I'm running on WSL2 over the network.
@discotek1198 · 1 month ago
The best workflow I have seen yet. Perfect, thank you very much!!! It works flawlessly!!! SUB+
@my-ai-force · 24 days ago
Thanks for the sub!
@abaj006 · 1 month ago
Brilliant work, really amazing. Thanks for the tutorial and sharing the workflow. Just tried it and works really well.
@my-ai-force · 1 month ago
Great to hear!
@cabinator1 · 1 month ago
Fantastic workflow. Works flawlessly for me.
@my-ai-force · 1 month ago
Great to hear!
@M8cool · 6 hours ago
Out of curiosity, what graphics card and how much VRAM do you use to run this workflow?
@wellshotproductions6541 · 2 months ago
Awesome workflow and great video! Found it over on OpenArt, then made my way here! Keep it up. Subscribed!
@my-ai-force · 2 months ago
Awesome, thank you!
@TailspinMedia · 1 month ago
Very cool, and I love how organized it is.
@my-ai-force · 1 month ago
Thank you!
@philippeheritier9364 · 2 months ago
It works very, very well. A very big thank you for this brilliant tutorial.
@my-ai-force · 2 months ago
Glad it helped
@dameguy_90 · 2 months ago
You are a genius. My subscription is worth it.
@my-ai-force · 2 months ago
Thanks a ton for your support.
@CasasYLaPistola · 2 months ago
Thanks for the video and the workflow. I've used it and everything works fine until it reaches the Flux group. The problem is that I don't know which directory I should copy the Flux model into, because if I understood correctly, Flux models don't go in the same directory as the 1.5 and SDXL checkpoints; when I copy them there, the workflow gives me an error. Can you tell me where to put it? Also, until now I haven't used the usual "Load Checkpoint" node for Flux models; I've used the "DualCLIPLoader" node instead.
@baheth3elmy16 · 2 months ago
I loaded a Load Diffusion Model node and used the Flux model from there, but I got an error with the VAE Encoder. The model to use is the 17 GB one, not the 11 GB diffusion model referred to here.
@user-fo9ce3hr5h · 2 months ago
@@baheth3elmy16 Bro, which CLIP file should I download? I don't have a CLIP file for flux1-dev-fp8.safetensors.
@wiwwiw2890 · 1 month ago
I'm also interested in this.
@Macieks300 · 2 months ago
Thanks so much for this workflow.
@my-ai-force · 2 months ago
Glad it was helpful!
@97BuckeyeGuy · 2 months ago
Great workflow! Thank you
@my-ai-force · 2 months ago
You're so welcome!
@gardentv7833 · 2 months ago
After many model re-downloads, it works. Thank you! It took two days to figure out.
@ritikagrawal8454 · 2 months ago
I was able to download all the nodes and models, but my ComfyUI just isn't loading. Did you face a similar issue? If not, can you still tell me what worked for you?
@WasamiKirua · 2 months ago
Thank you very much, great workflow.
@my-ai-force · 2 months ago
Glad you like it!
@happyme7055 · 2 months ago
Stunning!!!!! First working outpaint ever ;-) GJ! Two things would be useful, I guess: a negative prompt and an optional lineart ControlNet implementation.
@kasoleg · 1 month ago
I've also been looking for a long time for something that actually works. I've finalized it and posted my version on Google Drive above. Take it if you want.
@mcdigitalargentina · 2 months ago
Great work, friend! Subscribed to your channel. Thanks for sharing your work.
@GrocksterRox · 2 months ago
Very well thought out. Kudos!
@my-ai-force · 2 months ago
Thanks for your kind words.
@aidanblah9646 · 1 month ago
Flux_Repaint - Load Checkpoint: "CheckpointLoaderSimple ERROR: Could not detect model type of: E:\AI\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\checkpoints\FLUX\flux1-dev-fp8.safetensors". All my other workflows can find the dev-fp8 in the unet folder, but I went ahead and copied it into the checkpoints folder, in a FLUX/ subfolder like you have it. I even selected it in the "Load Checkpoint" node. I still get that error. Please help.
@my-ai-force · 1 month ago
Maybe the file has been corrupted. Try redownloading it.
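As a quick way to tell a truncated download from some other problem, you can check the file header yourself. This is only a hedged sketch based on the published safetensors format (the file starts with an 8-byte little-endian length followed by that many bytes of JSON); the function name is mine, not part of ComfyUI:

```python
import json
import struct

def safetensors_header_ok(data: bytes) -> bool:
    """Sanity-check the start of a .safetensors file.

    The format begins with an 8-byte little-endian uint64 giving the
    length of a JSON header, which must follow in full. A download
    that was cut off usually fails one of these checks.
    """
    if len(data) < 8:
        return False  # not even a complete length field
    (header_len,) = struct.unpack("<Q", data[:8])
    header = data[8:8 + header_len]
    if len(header) < header_len:
        return False  # file ends mid-header: incomplete download
    try:
        json.loads(header)
    except ValueError:
        return False  # header bytes are not valid JSON
    return True
```

You would read the start of the model file, e.g. `open(path, "rb").read(64 * 1024 * 1024)`, and pass it in; `False` strongly suggests redownloading.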
@clflover · 1 month ago
Thank you. The Flux section produces a different picture than the one in the Restore Detail section. Can you help?
@my-ai-force · 1 month ago
Great question! I can see where the confusion might be. The idea behind flux image-to-image repainting is to enhance and diversify the output, so it does make sense to expect some differences compared to what SDXL generates on its own. The goal is to optimize SDXL by leveraging these differences to create even more unique and creative results. If you have any specific examples or ideas in mind, I'd love to discuss them further!
@Henry-xs · 1 month ago
This is really nice, but could you please tell me: in the third panel, for "Get_pad_mask", "Get_sdxl_img", and "Get_pad_img", can I import three images directly and replace them?
@my-ai-force · 24 days ago
Yes, we can.
@AlexMihaiC · 1 month ago
I get this error and I don't see where to put the CLIP for the Flux model: VAEEncode 'NoneType' object is not subscriptable
@AlexMihaiC · 1 month ago
It happens if I don't activate the Restore Detail part; now it works.
@baheth3elmy16 · 2 months ago
(SOLVED) Hi. Your workflow has a problem with Flux group number 4: the VAE Encoder returns the error "'NoneType' object is not subscriptable". I used both the 17 GB and the 11 GB Flux models. Can you please tell us what the problem might be? Edit: Problem solved. It was that I had disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.
@mohammadbaranteem3487 · 2 months ago
Hello my friend. I am Iranian and I don't have a good command of English. My problem is that it only works until the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?
@baheth3elmy16 · 2 months ago
@@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional and you can usually disable them. But if you use Group 3, the Flux enhancement over SDXL, then you must enable Group 2; otherwise the Flux group and the entire workflow won't work.
@ChrissyAiven · 1 month ago
The sizes aren't really big. Is it possible to use larger formats like 1080x1920 for Reels?
@my-ai-force · 1 month ago
You can use SUPIR or Topaz for upscaling.
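For what it's worth, the outpainting target itself can be any ratio: you only need to pad the source image out to the aspect you want before the sampler runs. A minimal sketch of that arithmetic (the function name and the symmetric-padding choice are mine, not taken from the workflow):

```python
def pad_to_aspect(w: int, h: int, target_w: int, target_h: int):
    """Return (left, right, top, bottom) padding, in pixels, that takes
    a w x h image to the target aspect ratio without cropping."""
    target = target_w / target_h
    if w / h < target:
        # image is too narrow: pad left/right
        extra = round(h * target) - w
        return extra // 2, extra - extra // 2, 0, 0
    # image is too wide (or already matches): pad top/bottom
    extra = round(w / target) - h
    return 0, 0, extra // 2, extra - extra // 2

# a 1024x1024 image padded toward a 1080x1920 Reel
print(pad_to_aspect(1024, 1024, 1080, 1920))  # → (0, 0, 398, 398)
```

The padded result is still 1024 wide, so an upscaler such as SUPIR or Topaz would then take it to the full 1080x1920.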
@lukehancockvideo · 2 months ago
Where do the images output to? They are not appearing in my ComfyUI Output folder.
@my-ai-force · 2 months ago
You can replace the ‘Preview Image’ node with a ‘Save Image’ node and the image will be saved.
@莊惠雯-t5g · 1 month ago
Thank you. I use your whole workflow and your models, but why does my ComfyUI always show: CheckpointLoaderSimple ERROR: Could not detect model type of: D:\ComfyUI-aki-v1.3\ComfyUI-aki-v1.3\models\checkpoints\flux-dev\flux1-dev-fp8-e5m2.safetensors
@my-ai-force · 1 month ago
Instead of the Checkpoint Loader node, try using Load Diffusion Model to load the Flux model.
@Lord5oth · 1 month ago
Cool! This bricked my Comfy build, thanks!!
@Atreyuwu · 13 days ago
It's up to you to check all the necessary custom_nodes and make sure nothing clashes with anything else you might have, NOT the author of the workflow! Take some responsibility for yourself. It's working fine here!
@Ekkivok · 1 month ago
This workflow is great, but there is a problem: the prompt node is set up with Florence, which automates the prompt without giving you any control over it. For example, I have a problem with a photo that shows humans, but I want the empty latent side of the image that I'm outpainting to generate a background with no humans. And here is the problem: Florence describes the entire image, humans included, and then outpaints the image with humans (the stuff I don't want). So my question is this: is there a workflow, or can you make one, that restores control of the prompt without Florence?
@my-ai-force · 1 month ago
I think you're referring to the Flux ControlNet Upscaler workflow! To address your issue, just use a text node connected to the 'CLIP Text Encoder' instead of the 'Florence2Run'.
@วรายุทธชะชํา · 2 months ago
I want to generate multiple sizes in one run. How can I do that, sir?
@JulioLlanosSuarez · 2 months ago
+1
@johannesmuller7881 · 2 months ago
Thanks a lot for your work, but I've got one general question: does it make sense to use ComfyUI with Flux on my GTX 1070? Right now I'm downloading everything and just want to get it up and running, but is it worth it?
@my-ai-force · 1 month ago
You might want to give the GGUF version of the Flux model a try!
@AmateurDrummerBG · 2 months ago
Hey, the workflow is very cool, but when I get to the FLUX part, specifically the KSampler, it gets really slow to render. I'm using an RTX 3060 with 12 GB VRAM. Does anyone know how to speed it up?
@my-ai-force · 1 month ago
Consider trying out Flux GGUF or Flux Hyper LoRA for your project!
2 months ago
This is wonderfully good work, thank you for sharing! One question: I can only guess where to place the initial image with the x and y parameters. Is there a better way to do this? Anyway, great!
@kasoleg · 1 month ago
Yes, I had to tinker with the settings to understand how to add it, but in about 30 minutes you'll figure it out by trial and error...)
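If it helps anyone, the placement step can be reproduced outside the workflow to see what the x/y values do. This is a hedged Pillow sketch of pasting the source image at (x, y) on a larger canvas and building the matching outpaint mask (white = area to generate); the function and its name are mine, not nodes from the workflow:

```python
from PIL import Image

def place_on_canvas(img, canvas_w, canvas_h, x, y, fill=(127, 127, 127)):
    """Paste img at (x, y) on a canvas_w x canvas_h canvas and return
    (canvas, mask): the mask is white where new content should be
    generated and black over the preserved source image."""
    canvas = Image.new("RGB", (canvas_w, canvas_h), fill)
    canvas.paste(img, (x, y))
    mask = Image.new("L", (canvas_w, canvas_h), 255)
    mask.paste(Image.new("L", img.size, 0), (x, y))
    return canvas, mask
```

With this framing, x = 0 keeps the image flush left, x = canvas_w - img.width keeps it flush right, and anything in between positions it directly instead of by trial and error.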
@manipayami294 · 2 months ago
When I use the GGUF loader the app crashes. Does anyone know how I should fix this problem?
@DarioToledo · 2 months ago
I didn't know of that Union repaint ControlNet. What does it do?
@my-ai-force · 2 months ago
It's used for inpainting.
@DarioToledo · 2 months ago
@@my-ai-force And what difference does it make compared to usual inpainting without a ControlNet? I tried to run it, but it gave me errors.
@maxmad62tube · 1 month ago
I'm sorry, but I'm getting the error message "4-bit quantization data type None is not implemented." Can you help me?
@my-ai-force · 1 month ago
Thanks for reaching out! To better assist you with this error, could you please share a bit more detail? A screenshot of the terminal or any additional context about the error would be really helpful. This way, I can understand the issue more clearly and provide you with the best support possible!
@deonix95 · 2 months ago
Error occurred when executing CheckpointLoaderSimple:
ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
@henroc481 · 2 months ago
Same here.
@Cluster5020 · 1 month ago
@@henroc481 "aderek Flux v2" worked for me
@manipayami294 · 2 months ago
Can you do it with the Flux GGUF versions?
@my-ai-force · 2 months ago
In theory, yes.
@digitalface9055 · 2 months ago
Missing nodes crashed my ComfyUI; it won't start anymore.
@MatthewWaltersHello · 2 months ago
I find it makes the eyes look like googly eyes. How do I fix that?
@baheth3elmy16 · 2 months ago
The Flux model in your description is the wrong model. It is the 11 GB model, and it won't work in your workflow.
@Cluster5020 · 1 month ago
Will any other flux1-dev (e.g. the bnb one) work as well?
@Cluster5020 · 1 month ago
Never mind, "aderek Flux v2" is working :)
@maelstromvideo09 · 2 months ago
Try differential diffusion; it makes inpainting better, without most of this pain.
@spelgenoegen7001 · 2 months ago
Awesome! Everything works perfectly with diffusion_pytorch-model_promax.safetensors. Thanks!