You can find the workflow here: openart.ai/workflows/risunobushi/relight-people-preserve-colors-and-details/W50hRGaBRUlBT1ReD4EF Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
@sumeetprashant1 6 months ago
Amazing to see that you made the node yourself. More power to you and the community!
@neaonsjaji 16 days ago
You are so underrated. I like how you combine an artist's perspective with a technical explanation. Thanks for teaching us, and keep it up!
@jbrocktheworld 6 months ago
I have no idea what's going on in the node, but the result works like a charm. Thank you so much!
@risunobushi_ai 6 months ago
Haha, I know the You Can Ignore This Group is a bit of a tangle, but I promise it's nothing too fancy! Glad it's working for you!
@SimonDickerman 6 months ago
Thank you so much for sharing this, I can't wait to play around with it this week. You post some of the most useful SD videos on YouTube.
@risunobushi_ai 6 months ago
Thank you!
@Douchebagus 6 months ago
This is without doubt the best ComfyUI workflow and explanation on YouTube. Thank you so much for sharing; liked and subscribed.
@risunobushi_ai 6 months ago
Thank you for the kind words!
@neezamuddinfaayez7234 2 months ago
Hello brother, I can't access the link to the workflow.
@moonson8101 3 months ago
Really really helpful, thank you for this valuable tutorial!!!
@Ginartmedia 4 months ago
I set the size to a rectangle, but the result is still square. Please guide me, thank you.
@destructiveeyeofdemi 6 months ago
I love your work Sir. Thank you.
@dakshroy1326 3 months ago
I get this error: "LoadAndApplyICLightUnet IC-Light: Could not patch calculate_weight - IC-Light: The 'calculate_weight' function does not exist in 'lora'". Can you help me with this?
@hoangucmanh299 5 months ago
How do I make it generate a new background?
@oyentemaniatico 3 months ago
Where's the JSON?
@risunobushi_ai 3 months ago
It should be in both the description and the pinned comment.
@yotraxx 6 months ago
Thank you Andrea. REALLY useful, as usual. Keep going on, as usual :)
@risunobushi_ai 6 months ago
Thank you! Will do!
@ImAlecPonce 6 months ago
I really loved your workflow :) I just modified it so it takes on the pixel size of whatever image you put in. I hope that's OK... squares drive me crazy, haha.
@risunobushi_ai 6 months ago
Sure! There are so many ways to resize images; I just default to an X/Y resizer set to square because that's the most common config.
@andresbares 6 months ago
This seems like a great workflow! I almost got it running, but when the mask is generated, it shows a tiny black square as the preview after "Convert mask to image", so the first relit image also shows as a tiny square. I've been playing with the image resize parameters, but it doesn't seem to change anything. Any advice will be appreciated!
@risunobushi_ai 6 months ago
Hi! You're most probably either:
- not drawing a mask on the Preview Bridge node where the light masks are created, or
- not importing a custom mask AND connecting the mask output to the Grow Mask With Blur node.
If IC-Light doesn't see a light mask, you get a tiny little box.
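For reference, here is a rough PIL-based sketch of what a "grow mask with blur" step does: it is only an approximation of the concept for illustration, not the actual node's code, and the function name and parameters are made up.

```python
# Rough approximation of a "grow mask with blur" step: expand the painted
# light mask, then feather its edge so the relight falls off smoothly.
# Illustrative sketch only, not the actual ComfyUI node code.
from PIL import Image, ImageFilter

def grow_mask_with_blur(mask_img: Image.Image, grow_px: int = 16, blur_px: float = 8.0) -> Image.Image:
    mask = mask_img.convert("L")
    # MaxFilter expands the white (lit) region; its size must be odd.
    grown = mask.filter(ImageFilter.MaxFilter(size=grow_px * 2 + 1))
    # A gaussian blur feathers the edge instead of leaving a hard cut-off.
    return grown.filter(ImageFilter.GaussianBlur(radius=blur_px))

# Note: a completely black mask (nothing painted, nothing connected) stays
# black after this step, which is why IC-Light then produces that tiny box.
```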
@andresbares 6 months ago
@risunobushi_ai Thanks for the response! Indeed, I got it after I drew the mask! I'm taking my first steps with AI and you were a great help. Thanks for your content! Greetings from Argentina.
@hoangucmanh299 5 months ago
How would I integrate this so that it can generate a new background?
@risunobushi_ai 5 months ago
I already replied to your comments about this, but these workflows aim at relighting and generating a new background based on images through IPAdapters rather than prompting. If you want to generate a new background with prompting alone, you'd need to modify the workflow at that level, by unplugging the IPAdapter and adjusting parameters such as denoise to account for the radical changes in background you want.
@yunpengwang 6 months ago
I want to know why there is an error in the FaceID part: the CLIP Vision model cannot be found. I downloaded the CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors model and put it in the clip_vision folder under models, but I don't know whether the model was downloaded incorrectly or something else is wrong; the error hasn't been resolved.
@risunobushi_ai 6 months ago
Did you download both the ViT-bigG and ViT-H models? Do you have insightface installed properly?
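If it helps, a quick way to see what ComfyUI can actually find is to list the clip_vision folder. This is a hypothetical helper: the paths and exact filenames are assumptions based on common installs, so adjust them to yours.

```python
# Hypothetical check: list the clip_vision folder and flag the two CLIP Vision
# files that IPAdapter/FaceID setups commonly expect. Paths and filenames are
# assumptions, not guaranteed to match your install.
from pathlib import Path

COMFY_DIR = Path("ComfyUI")  # assumption: adjust to your install location
clip_vision_dir = COMFY_DIR / "models" / "clip_vision"

expected = [
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
]

found = {p.name for p in clip_vision_dir.glob("*.safetensors")} if clip_vision_dir.exists() else set()
for name in expected:
    print(("OK      " if name in found else "MISSING ") + name)
```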
@yunpengwang 6 months ago
In the color matching image I encountered the error "The size of tensor a (64) must match the size of tensor b (1152) at non-singleton dimension 1", plus missing face segmentation and face analysis models. How do I deal with this? Thanks.
@risunobushi_ai 6 months ago
You're most probably not painting the light mask in the light mask group's Preview Bridge, or you haven't hooked up the Load Image As Mask node into the Grow Mask With Blur node if you're importing a custom light mask.
@egarywi1 6 months ago
Nearly got this going; however, I have one issue that I can't resolve, in the Face Segmentation node:

Error occurred when executing FaceSegmentation: 'NoneType' object is not subscriptable
  File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/custom_nodes/ComfyUI_FaceAnalysis/faceanalysis.py", line 531, in segment
    landmarks = landmarks[-2]
@risunobushi_ai 6 months ago
Do you have insightface installed? I know it's a pain to install on Macs.
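A quick way to check whether insightface is actually reachable from the Python environment ComfyUI uses is a small import test. This is just a sanity-check snippet, not part of the workflow:

```python
# Sanity check: run this with the same Python interpreter ComfyUI uses.
# If either import fails, the FaceAnalysis / FaceSegmentation nodes will too.
import importlib

for pkg in ("insightface", "onnxruntime"):
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: OK (version {getattr(mod, '__version__', 'unknown')})")
    except ImportError as err:
        print(f"{pkg}: missing -> {err}")
```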
@cuonghoangcao7778 5 months ago
I cannot install the "ComfyUI-Image-Filters" node. Do you know how to fix it?
@risunobushi_ai 5 months ago
Have you tried uninstalling the nodes and then reinstalling them via git clone in the ComfyUI/custom_nodes/ directory, then running pip install -r requirements.txt in their folder?
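Spelled out as a script, the manual reinstall looks roughly like the sketch below. It is wrapped in Python so the pip step targets the same interpreter you run ComfyUI with; the repository URL and folder layout are assumptions, so double-check the actual ComfyUI-Image-Filters repo address before running it.

```python
# Hedged sketch of the manual reinstall described above.
# The repo URL and the custom_nodes path are assumptions; verify both first.
import subprocess
import sys
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")  # assumption: default ComfyUI layout
repo_url = "https://github.com/spacepxl/ComfyUI-Image-Filters"  # assumption

subprocess.run(["git", "clone", repo_url], cwd=custom_nodes, check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
    cwd=custom_nodes / "ComfyUI-Image-Filters",
    check=True,
)
```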
@Kal-el23 5 months ago
Kind of new to this and recently came across your channel. I'm curious how this compares to doing 'scene replace' in Krea, since you can basically add in a different background and relight it. Or would that be a different workflow?
@risunobushi_ai 5 months ago
Hi! Both Krea and Magnific are basically using either IC-Light or a modified version of it, at least from what I can tell (the issues they have with relighting are the same ones you get with IC-Light if you're not fixing things with Frequency Separation and color matching like I'm doing). So yeah, scene replace is either the FBC IC-Light model with light transfer from the background, or the FC IC-Light model with light mask transfer set to a "global light mask" approximation (or any equivalent, custom-made derivative models).
@Kal-el23 5 months ago
@risunobushi_ai Thank you. So to basically do a 'scene replace', which of your workflows do you recommend? This one?
@risunobushi_ai 5 months ago
This one is the latest and simplest from a user-experience point of view. You just need to input three images (all three, even as placeholders if you don't plan on using some of them), select the right switch inputs, enter two prompts, and hit Queue Prompt: kzbin.info/www/bejne/faStkqSbqMeiitE
@svoj1000 5 months ago
Hey Andrea! This is nuts! Thank you so much for making this! Unfortunately, as I try running this workflow it keeps giving me this error: "Cannot execute because a node is missing the class_type property.: Node ID '#433'". Have you encountered this before?
@risunobushi_ai 5 months ago
Hi, thanks! Which node does the workflow stop at? Or does it not start at all?
@Arknight-p2l 6 months ago
I get this error: "Error occurred when executing FrequencyCombination: operands could not be broadcast together with shapes (550,3,1000) (544,3,1000)"
@risunobushi_ai 6 months ago
This one's on me being a bad coder (well, technically not a coder at all) and not having accounted for unusual WxH ratios when scripting the Frequency Separation nodes. I'm going to add an image resize node after the relit image so this gets solved, and update the workflow. Check back in 5 minutes and download it again.
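For anyone curious why the shapes have to match: recombination adds the arrays element-wise, so the relit image and the original must have identical dimensions, and resizing the relit image back to the original's resolution is what removes the broadcast error. Below is a simplified numpy/PIL sketch of the idea, not the node's actual code; it assumes uint8 RGB arrays of the same shape.

```python
# Simplified frequency separation / recombination: low frequencies (overall
# light and color) come from the relit image, high frequencies (texture and
# detail) come from the original. The two arrays are added element-wise, so
# their shapes must match exactly. Illustrative sketch only.
import numpy as np
from PIL import Image, ImageFilter

def low_pass(img: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    blurred = Image.fromarray(img).filter(ImageFilter.GaussianBlur(radius=sigma))
    return np.asarray(blurred, dtype=np.float32)

def frequency_combine(relit: np.ndarray, original: np.ndarray) -> np.ndarray:
    low = low_pass(relit)                                     # lighting from the relit pass
    high = original.astype(np.float32) - low_pass(original)   # detail from the original
    return np.clip(low + high, 0, 255).astype(np.uint8)       # shapes must match here
```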
@risunobushi_ai 6 months ago
Updated.
@Arknight-p2l 6 months ago
Wow, thank you so much 🫶🏼
@Taz_Olson 3 months ago
Is there a known reason why none of the nodes in the second half work?
@risunobushi_ai 3 months ago
It's either the level adjustment node that got updated and I haven't had the time to fix the JSONs yet, or it's a Comfy update that broke Kijai's IC-Light nodes and you need to update the IC-Light repo. Unfortunately these updates happen, and I don't always have the time to update my workflows, since I published them a while ago.
@bipinpeter7820 6 months ago
Super cool 👍
@keremoganvfx 6 months ago
Hey, thanks for your great tutorial. I'm totally new to Stable Diffusion and ComfyUI; I'm a VFX compositor using node-based software called Nuke, which is why ComfyUI caught my attention. I'm at the stage of watching many videos these days, and thanks for all of your videos. I have a question: instead of JPGs or PNGs, can we work with EXR or DPX files in ComfyUI generally, for inpaint or relight purposes? DPXs are usually 10-16 bit, and EXRs are 16-bit half float as well. Right now I send a frame from Nuke to Photoshop, do some generative fills, and export back to Nuke. I love generative fill, but control-wise it's not that great. I'm really impressed by ComfyUI/Stable Diffusion and I hope I can use it in my pipeline. Thanks.
@risunobushi_ai 6 months ago
Hey there, thanks for the kind words! Unfortunately, AFAIK, while ComfyUI accepts 32-bit files (and EXR with some custom nodes) and can theoretically output 32-bit files, everything inside it is processed at 8 bits, as the models are trained at that color depth. That's part of the reason why color matching is so hard: 8 bits just isn't enough to do any meaningful post-processing. That being said, a viewer reached out and they have a Nuke tutorial about extracting normal maps from ComfyUI using IC-Light and using them in Nuke; you can find it here: kzbin.info/www/bejne/eajLgmd6oZx5pJo
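To see concretely what the 8-bit bottleneck costs, here's a tiny numpy demo; it's purely illustrative and nothing ComfyUI-specific.

```python
# Quantising float (EXR-like) data to 8 bits and back: everything between the
# 256 available levels is lost, which is why heavy post work such as color
# matching struggles inside an 8-bit pipeline.
import numpy as np

hdr = np.linspace(0.0, 1.0, 4096, dtype=np.float32)       # stand-in for float EXR values
eight_bit = (hdr * 255).round().astype(np.uint8)           # what an 8-bit model sees
restored = eight_bit.astype(np.float32) / 255.0

print("distinct values before:", np.unique(hdr).size)       # 4096
print("distinct values after: ", np.unique(restored).size)  # 256
print("max rounding error:    ", float(np.abs(hdr - restored).max()))
```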
@keremoganvfx 6 months ago
@risunobushi_ai Thanks for your answer! You had even shared a video with Nuke, thanks :)) Yeah, actually I'd seen that video, but AOV passes especially must be 32-bit... If I can import 10-16 or 32-bit files into ComfyUI somehow, then there must be some solutions I can achieve: I can just render the 10-16 bit files in sRGB colorspace before sending them to ComfyUI, so there won't be any overexposed data unless there are ultra-bright things... it should work like 8 bits, though the AI-generated parts will be 8-bit quality, I guess. Will do some tests; I'm still watching many videos before starting. Thanks again for your quick response and your great videos!!
@xdevx9623 6 months ago
Hey man, can you make use of IDM-VTON? It's very good at putting your choice of clothes on AI images, but it does require some refining, and the refining part is what I can't figure out. Please, man, it would help me a lot!
@risunobushi_ai 6 months ago
I've seen new zero-shot research from researchers at Google that looks promising, but IDM and the like are not there yet; there's no amount of refining that can fix the missing precision of IDM and other zero-shot VTONs right now. In the future, yeah, but there's a reason why Google and Alibaba are spending big money to research this.
@yanmotta 6 months ago
Bravo!
@motoorikosuzu6251 2 months ago
This relight and background generation only fits normal products. If the product is something like a machine, it doesn't work and the quality is low.
@ismgroov4094 6 months ago
Thx sir
@mohammednasr7422 6 months ago
Hi Andrea, I hope you're doing well! I could really use your help with ComfyUI IC-Light. Would it be possible to set up a quick Discord call to discuss it? It won't take much of your time, and I would greatly appreciate it. Thank you so much!
@risunobushi_ai 6 months ago
Hey there! Please send me an email at andrea@andreabaioni.com; this week and the coming weeks are packed with calls and deadlines, and I can't do many one-on-ones.
@mohammednasr7422 6 months ago
@risunobushi_ai Thank you so much for your quick response, Andrea! I understand you're very busy. I'll send you an email shortly. I really appreciate your willingness to help!