ComfyUI FLUX - Super Simple Workflow
5:54
Comments
@AInfectados
@AInfectados 4 hours ago
How do I use GGUF models?
@AInfectados
@AInfectados 5 hours ago
Missing nodes: Anything Everywhere, ImpactGaussianBlurMask, LoadAndResizeImage. And I can't find them in Manager.
@valorantacemiyimben
@valorantacemiyimben 6 hours ago
Hello. I have some workflows to use in ComfyUI, but when I load them I can't get them to run properly because of missing models, LoRAs, etc. For someone who knows what they're doing this is very simple, but I haven't been able to get it right. I'm looking for someone who can download the models, LoRAs, and other elements used in the workflow, make sure everything runs smoothly, and then send the workflow back to me. The models, LoRAs, etc. used in the workflow have already been listed by the workflow's author; I just couldn't manage it myself. As I said, what I need is for the workflow to work correctly when I open it in ComfyUI. I will share examples with those who submit offers.
@michaelchu6516
@michaelchu6516 3 hours ago
Emailed you. I'll help you out, man.
@ningli2896
@ningli2896 10 hours ago
The llama folder did not appear.
@weblink0912
@weblink0912 12 hours ago
I've seen so many Live Portrait nodes that I don't know which one to use.
@Geffers58
@Geffers58 13 hours ago
Amazing, it works. Worth pointing out that you need ffmpeg, and a PATH environment variable entry pointing to where it lives; then you can make videos rather than animated GIFs. But is there a way to keep the faces from getting all messed up?
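Picking up on the ffmpeg requirement mentioned above: before rendering video, you can confirm ffmpeg is actually reachable on PATH with a tiny pure-Python check (just a sketch for debugging, not part of the workflow; the helper name is made up):

```python
import shutil

def find_ffmpeg():
    """Return the full path of the ffmpeg binary if it is on PATH, else None."""
    return shutil.which("ffmpeg")

# If this prints None, add the folder containing the ffmpeg binary to your
# PATH environment variable and restart ComfyUI so it picks up the change.
print(find_ffmpeg())
```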
@steve-g3j6b
@steve-g3j6b 15 hours ago
I have no idea what this is or how to use ComfyUI, but I'm gonna learn it if this is what it can do. I've got many low-quality pics that I need to upscale (an upscale that can add details). I even have Topaz, but I don't like the results.
@antoniopepe
@antoniopepe 15 hours ago
If I don't want any camera movement, how can I achieve that?
@syntartica
@syntartica 17 hours ago
Hi @CG TOP TIPS, when I start the workflow I get this error: ExpressionEditor: "DLL load failed while importing aggregations: The specified module was not found." (ComfyUI Error Report: Node Type: ExpressionEditor; Exception Type: ImportError.) Do you know how to fix it? Everything is installed correctly, and the "Expression Editor (PHM)" module is even visible in the graph, but the error appears as soon as I start it. :( Thanks!
@isaacslayer
@isaacslayer 19 hours ago
Good video, friend. I have two questions since I'm an Automatic1111 user: first, can you use the ControlNet reference model here? And second, what is used for face repair in ComfyUI? In Automatic I use ADetailer, and I don't know what the equivalent is here.
@1Dante876
@1Dante876 21 hours ago
How do I use my own image and change the pose? Or do I have to train my own LoRA or something?
@ufuktulga
@ufuktulga 23 hours ago
Why don't you just put the links in the info box? I'm new to this and trying to dig through it by myself.
@ShubzGhuman
@ShubzGhuman 21 hours ago
So sorry for that.
@DesoloZantas
@DesoloZantas 1 day ago
This doesn't look easy lol
@mihaicatalind
@mihaicatalind 1 day ago
OMG, 20 minutes for 20 steps at 1024... and I was sad that I render 1024 in 8 seconds...
@BibhatsuKuiri
@BibhatsuKuiri 1 day ago
Hi, can you create a great workflow (using minimal nodes) for the best Adobe Firefly alternative? Basically editing an image with a prompt and a mask using FLUX? That would be a great video idea too.
@crypt_exe8709
@crypt_exe8709 1 day ago
Always choose .safetensors over .bin: .safetensors files are both faster to load and free from the security risks of .bin files!
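To illustrate why the comment above is right about safety: a .safetensors file is just an 8-byte little-endian header length, a JSON metadata header, and raw tensor bytes, so loading it never goes through pickle and cannot execute arbitrary code (unlike `torch.load` on a .bin file). A minimal pure-Python sketch of the format (hypothetical helper names, no torch needed):

```python
import json
import os
import struct
import tempfile

# safetensors layout: [8-byte LE header length][JSON header][raw tensor bytes].
# No pickle stage anywhere, hence no code execution on load.

def write_safetensors(path, tensors):
    """Minimal writer. tensors: {name: (dtype_str, shape, raw_bytes)}."""
    header, payload, offset = {}, b"", 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        payload += raw
        offset += len(raw)
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)) + blob + payload)

def read_header(path):
    """Read only the JSON metadata header -- safe to inspect any checkpoint."""
    with open(path, "rb") as f:
        n, = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n).decode("utf-8"))

# Round-trip a tiny 2x2 float32 "weight" to show the header structure.
demo_path = os.path.join(tempfile.gettempdir(), "demo.safetensors")
write_safetensors(demo_path, {"w": ("F32", [2, 2], struct.pack("<4f", 1, 2, 3, 4))})
header = read_header(demo_path)
```

In practice you would use the `safetensors` library rather than hand-rolling this; the sketch only shows why the format is inert data rather than executable pickle.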
@anthrax0pranav
@anthrax0pranav 1 day ago
Can this be used for adding detail to an image after I have removed an object? I can see a use case there: I can remove objects and then fill in with SDXL inpainting, but the results are far from good.
@user-wi5vj1qv4w
@user-wi5vj1qv4w 1 day ago
Can't get it to work. Node Type: ControlNetLoader; Exception Type: _pickle.UnpicklingError; Exception Message: invalid load key, '\xbc'.
@eveekiviblog7361
@eveekiviblog7361 1 day ago
I get "action is not allowed with this security level of configuration".
@thanatosor
@thanatosor 1 day ago
LMAO... time to open a fashion shop
@Lucy-z5d
@Lucy-z5d 1 day ago
My output video is only 3 seconds, but my original video is 10 seconds. How do I get it to do the whole 10 seconds?
@jfk3465
@jfk3465 2 days ago
When I click, the alpha texture is imprinted into the mesh along with the black box surrounding it. How can I prevent this and have only the texture imprinted, like you have done?
@SuperCinema4d
@SuperCinema4d 2 days ago
How do I apply a pose?
@rachinc
@rachinc 2 days ago
4:40 You don't show us how you got out of that drawing screen; it's cut off in your video.
@rachinc
@rachinc 2 days ago
Never mind, I figured it out; it's a separate tab in the browser that you simply close.
@gbmssn9422
@gbmssn9422 2 days ago
Thanks for the video. One more thing, this error occurred when executing CheckpointLoaderSimple:
ERROR: Could not detect model type of: C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-fp8.safetensors
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
File "C:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
@huichan5140
@huichan5140 2 days ago
interesting
@FranckSitbon
@FranckSitbon 2 days ago
It seems the upscaler does more than just upscale the image 😁😄😄😇
@swillklitch
@swillklitch 2 days ago
Magic!
@yiluwididreaming6732
@yiluwididreaming6732 2 days ago
See @stevietee3878's advice below. The workflow had been working, but now I'm getting the following error: ComfyUI Error Report: Node Type: ExpressionEditor; Exception Type: safetensors_rust.SafetensorError.
@zanzo98
@zanzo98 2 days ago
Where do I get the model?
@pavelkolotenko6973
@pavelkolotenko6973 2 days ago
Hello! Thanks for the video, it's great! I get a validation error: CLIPLoader: value not in list: clip_name: 't5\google_t5-v1_1-xxl_encoderonly-fp8_e4m3fn.safetensors' is missing from []. How do I fix this? Thank you!
@MrMstoe
@MrMstoe 2 days ago
Why the extra VAE Encode -> VAE Decode step at the end?
@panonesia
@panonesia 2 days ago
Can you make a workflow using GGUF models?
@CgTopTips
@CgTopTips 2 days ago
I haven't tried it, but it might be possible
@focus678
@focus678 2 days ago
How can I get the "Anything Everywhere" node in my ComfyUI?
@CgTopTips
@CgTopTips 2 days ago
github.com/chrisgoringe/cg-use-everywhere
@focus678
@focus678 2 days ago
@@CgTopTips Thank you.
@FoxHoundUnit89
@FoxHoundUnit89 2 days ago
You lost me right from the start; my ComfyUI doesn't have a "Manager" button haha
@yngeneer
@yngeneer 2 days ago
Hi there! The model files in the Cog folder take 40 GB!!! I missed any mention of that in advance :( ... EDIT: UGH... but 19.6 GB of that is the '.git' folder... wtf??? Deleted it. Sorry for the spam.
@tetsuooshima832
@tetsuooshima832 3 days ago
I'm curious, why is this better than 16 or 20 steps directly? Instead you do 8+8 steps. Anyway, if that "Unsample" can help remove the annoying grid noise we get after upscaling Flux, I'm all for it.
@CgTopTips
@CgTopTips 2 days ago
You're right; you can increase the steps, and usually Flux produces high-quality images, unlike SD1.5 and SDXL, so a refiner isn't necessary. However, sometimes even with increased steps some details in Flux might not be clearly visible, and you might want to use a refiner.
@helveticafreezes5010
@helveticafreezes5010 3 days ago
Practically unusable. The results from this (if you can work through the bugs and get it functioning) are worse than img2vid from a year ago. It does follow the pose well, but the generation is blotchy and inconsistent. I used the exact pics/vid and settings in this tutorial and it looks nothing like his results. Unsubbed and thumbs DOWN
@yiluwididreaming6732
@yiluwididreaming6732 3 days ago
You are one of the most prolific creators on YouTube!! Finding it hard to keep up LOL!! Keep up the good work.
@gohan2091
@gohan2091 3 days ago
Is the "latent upscale by factor (WAS)" supposed to increase image dimensions? I've tried values of 1.5 and 2, expecting the dimensions to increase by half and by double, yet the image dimensions stay the same regardless.
@loquacious1956
@loquacious1956 3 days ago
Latent upscale definitely works. I put in a value of 2 and got a 2048 x 2048 final result... It only works for the final image (image B?).
@CgTopTips
@CgTopTips 2 days ago
This node specializes in upscaling latent images by a specified factor, utilizing various interpolation methods to enhance the resolution while maintaining the integrity of the original image. It provides flexibility in scaling and alignment options to cater to different upscaling needs.
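As a rough illustration of what an upscale-by-factor node does to a latent's spatial dimensions, here is a pure-Python nearest-neighbour sketch (the real node operates on torch tensors and offers better interpolation modes; the function name is made up):

```python
def upscale_latent(latent, factor):
    """Nearest-neighbour upscale of one 2-D latent channel given as a list of rows."""
    h, w = len(latent), len(latent[0])
    new_h, new_w = int(h * factor), int(w * factor)
    # Each output pixel samples the nearest source pixel, so a factor of 2
    # turns an HxW latent into a 2Hx2W latent.
    return [[latent[int(y / factor)][int(x / factor)] for x in range(new_w)]
            for y in range(new_h)]

small = [[0.1, 0.2], [0.3, 0.4]]   # 2x2 latent channel
big = upscale_latent(small, 2)     # 4x4 result
```

This is also why, as the comments above note, a factor of 2 applied before the final decode yields a 2048 x 2048 image from a 1024 x 1024 starting point.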
@rmeta3391
@rmeta3391 3 days ago
Nice work. Is there anything I can use to reduce the plastic-face look from Flux?
@CgTopTips
@CgTopTips 2 days ago
The refiner isn't very important in Flux because the images produced with Flux, unlike those created with SD1.5 or SDXL, generally have good quality and detail. I just wanted to highlight that the refiner can be useful in Flux on certain occasions.
@MilesBellas
@MilesBellas 3 days ago
thanks!😊
@CgTopTips
@CgTopTips 2 days ago
🙏
@Lifejoy88
@Lifejoy88 3 days ago
Hi, bro! Need your advice! Is it possible to do the following using ControlNet and Flux? The girl in the generated photos should always have the same face as in source photo 1. The object in her hands is taken from source photo 2, for example a bottle of water, so the generated photo shows the girl holding the water bottle. In another prompt, source photo 1 is used again for the girl, while source photo 2 is a Coca-Cola bottle, so the generated photo shows the girl with a Coca-Cola bottle in her hand. The background situations and the girl's poses need to vary across generations, and at the same time the girl must keep the same face.
@HOTAIR83
@HOTAIR83 3 days ago
Even virtual reality won't be real anymore :/
@Hackinside
@Hackinside 3 days ago
Have you tried any new methods to restore old pictures or colourise black-and-white images with FLUX?
@CgTopTips
@CgTopTips 2 days ago
Thanks, I'll make a workflow for that
@sergiogonzalez2611
@sergiogonzalez2611 3 days ago
Could this work in Stable Diffusion Forge?
@weijiang1230
@weijiang1230 3 days ago
Thank you for sharing, but this workflow will alter the subject's appearance. Is it possible to add a mask to exclude the face or head area?
@VazgenAkopov1976
@VazgenAkopov1976 3 days ago
LoadTheMistoFluxControlNet: Allocation on device 0 would exceed allowed memory (out of memory).
Currently allocated: 7.26 GiB
Requested: 144.00 MiB
Device limit: 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
@ZakariaNada
@ZakariaNada 3 days ago
So I installed the nodes, but using your workflow doesn't give me consistent results; the result is totally different from the controller shape. How do I fix that?
@user-qg4gx7be2j
@user-qg4gx7be2j 3 days ago
How were you able to install the nodes? What did you do?
@ZakariaNada
@ZakariaNada 3 days ago
@@user-qg4gx7be2j git clone (github link) in CMD, inside the custom nodes folder
@scottownbey9340
@scottownbey9340 3 days ago
Ditto that.
@ZakariaNada
@ZakariaNada 3 days ago
@@user-qg4gx7be2j git clone (github link) in CMD, inside the custom nodes folder
@edwardtse8631
@edwardtse8631 2 days ago
You need to up the steps; the default 15 steps won't work. Use at least 20 steps.
@kacperwozniak2982
@kacperwozniak2982 3 days ago
That looks amazing, shout out to you! But can you explain your thought process behind it: how do you know how to combine all the components? Do you recommend any materials that help with understanding what's happening here?