Relight anything with IC-Light in Stable Diffusion - SD Experimental

7,331 views

Andrea Baioni

1 day ago

Relighting has always been a weakness of any Stable Diffusion workflow, until now!
In this Stable Diffusion Experimental episode, we'll take a look at a new node suite that is both exciting and really close to production ready: IC-Light.
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Workflows:
- Workflow 1, relight from mask: openart.ai/workflows/risunobu...
- Workflow 2, relight from background (and mask): openart.ai/workflows/risunobu...
- Workflow 3, relight product photos: openart.ai/workflows/risunobu...
IC-Light (github):
- by Kijai (the one we're using in this video): github.com/kijai/ComfyUI-IC-L...
- by Huchenlei (another project in development, needs a few more dependencies): github.com/huchenlei/ComfyUI-...
- by Illyasviel (first implementation, works with AUTO1111, an amazing source of info as well): github.com/lllyasviel/IC-Light
Models:
- IC Light models (for comfyUI, download the FBC and the FC models): huggingface.co/lllyasviel/ic-...
- any SD 1.5 model, like Photon: civitai.com/models/84728/photon
- or epicRealism: civitai.com/models/25694/epic...
Timestamps:
00:00 - Intro
01:07 - IC-Light overview
02:09 - IC-Light github and models
03:49 - Workflow 1: Relight with Mask as Light Source
11:16 - Workflow 2: Relight with Background as Light Source
15:14 - Workflow 3: Relight Product Shots
16:17 - Final considerations and Outro
#stablediffusion #ic-light #iclight #stablediffusiontutorial #relight #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni

Comments: 46
@houseofcontent3020
@houseofcontent3020 1 month ago
This is a great video! Thanks for sharing the info.
@zeeyannosse
@zeeyannosse 1 month ago
BRAVO! Thanks for sharing! Super interesting development!
@risunobushi_ai
@risunobushi_ai 1 month ago
Thanks, glad you liked it!
@pranavahuja1796
@pranavahuja1796 1 month ago
Things are getting so exciting🔥
@risunobushi_ai
@risunobushi_ai 1 month ago
Indeed they are!
@xxab-yg5zs
@xxab-yg5zs 1 month ago
Those videos are great, please keep them coming. I'm totally new to SD and Comfy; you actually make me believe it can be used in a professional, productive way.
@risunobushi_ai
@risunobushi_ai 1 month ago
It can definitely be used as a professional tool, it all depends on the how!
@dtamez6148
@dtamez6148 1 month ago
Andrea, I really enjoyed your live stream and your interaction with those of us who were with you. However, this follow-up on the node, the technical aspects, and your insight as a photographer is outstanding. Excellent work!
@risunobushi_ai
@risunobushi_ai 1 month ago
Thank you! I’m glad to be of help!
@JohanAlfort
@JohanAlfort 1 month ago
Nice insight into this new workflow, super helpful as usual :) This opens up a whole lot of possibilities! Thanks and keep it up.
@risunobushi_ai
@risunobushi_ai 1 month ago
Yea it does! I honestly believe that this is insane for product photography
@uzouzoigwe
@uzouzoigwe 1 month ago
Well explained and super useful for image composition. I expect that a small hurdle might be when it comes to reflective/shiny objects...
@risunobushi_ai
@risunobushi_ai 1 month ago
I’ll be honest, I haven’t tested it yet with transparent and reflective surfaces, now I’m curious about it. But I expect it to have some issues with them for sure
@antronero5970
@antronero5970 1 month ago
Number one
@aynrandom3004
@aynrandom3004 1 month ago
Thank you for explaining the actual workflow and the function of every node. I also like the mask editor trick. Just wondering why some of my images also changed after the lighting is applied? Sometimes there are minimal changes with the eyes, face etc
@risunobushi_ai
@risunobushi_ai 1 month ago
Thanks for the kind words. To put it simply, the main issue with prompt adherence lies in the CFG value. Usually, you'd want a higher CFG value in order to get better prompt adherence. Here, instead of words in the prompt, we have an image being "transposed" via what I think is an instruct pix2pix process on top of the light latent. Now, I'm not an expert on instruct pix2pix workflows, since it came out at a time when I was tinkering with other AI stuff, but from my (limited) testing, it seems like the lower the CFG, the more the resulting image adheres to the starting image. In some cases, as we'll see today on my livestream, a CFG around 1.2-1.5 is needed to preserve the original colors and details.
@aynrandom3004
@aynrandom3004 1 month ago
@@risunobushi_ai thank you! Lowering the cfg value worked. :D
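To illustrate the CFG intuition from the reply above, here's a minimal plain-Python sketch of how classifier-free guidance combines predictions. This is not IC-Light's actual code; the function name and toy numbers are made up for illustration:

```python
# Minimal sketch of classifier-free guidance (CFG), not IC-Light's actual code.
# Each denoising step combines the model's unconditional and conditional
# predictions as: pred = uncond + cfg * (cond - uncond).
# At cfg = 1.0 the result is exactly the conditional prediction (here, the one
# driven by the source image); higher cfg pushes further past it, which is
# why low values around 1.2-1.5 keep the original colors and details better.

def cfg_blend(uncond, cond, cfg):
    """Combine unconditional/conditional predictions per the CFG scale."""
    return [u + cfg * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 0.0, 0.0]
cond = [1.0, 0.5, -0.5]

assert cfg_blend(uncond, cond, 1.0) == cond  # stays on the image prediction
print(cfg_blend(uncond, cond, 7.0))          # strong amplification -> drift
```

The same arithmetic explains the 1.2-ish values suggested elsewhere in this thread: just above 1.0 adds a little guidance without drifting far from the source image.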
@dreaminspirer
@dreaminspirer 1 month ago
I would SEG her out from the close-up, then draft-composite her onto the BG. This probably reduces the color cast :)
@risunobushi_ai
@risunobushi_ai 1 month ago
Yup, that's what I would do too. And maybe use a BW light map based on the background, remapped to low-ish white values, as a light source. I've been testing a few different ways to solve the background-as-a-light-source issues, and what I've found up till now is that the base, non-background solution is so good that the background option is almost not needed at all.
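The "BW light map remapped to low-ish white values" idea above can be sketched in plain Python over a list of RGB tuples standing in for an image (this is not the actual ComfyUI node graph; `to_light_map` and `max_white` are made-up names):

```python
# Hedged sketch: grayscale a background and cap its brightest value, so it
# acts as a softer light source. A pixel list stands in for a real image.

def to_light_map(rgb_pixels, max_white=0.7):
    """Convert RGB pixels (0-1 floats) to grayscale via the usual luma
    weights, then rescale so the brightest value is capped at max_white."""
    gray = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]
    peak = max(gray) or 1.0  # avoid dividing by zero on an all-black map
    return [min(v / peak * max_white, max_white) for v in gray]

pixels = [(1.0, 1.0, 1.0), (0.5, 0.5, 0.5), (0.0, 0.0, 0.0)]
print(to_light_map(pixels))  # brightest pixel lands exactly at max_white
```

In ComfyUI terms this would correspond to a grayscale conversion plus a levels/remap step on the background image before feeding it in as the light source.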
@PierreGrenet-ty4tc
@PierreGrenet-ty4tc 1 month ago
This is a great tutorial, thank you! ...but how do I use IC-Light with the SD web UI? I have just installed it but it doesn't appear anywhere 😒😒 Could you help?
@risunobushi_ai
@risunobushi_ai 1 month ago
Uh, I was sure there was an automatic1111 plugin already released, I must have misread the documentation here: github.com/lllyasviel/IC-Light Have you tried the gradio implementation?
@mohammednasr7422
@mohammednasr7422 1 month ago
Hi dear Andrea Baioni, I am very interested in mastering ComfyUI and was wondering if you could recommend any courses or resources for learning it. I would be very grateful for your advice.
@risunobushi_ai
@risunobushi_ai 1 month ago
Hey there! I'm not aware of paid comfyUI courses (and I honestly wouldn't pay for them, since most, if not all of the information needed is freely available either here or on github). If you want to start from the basics, you can start either here (my first video, about installing comfyUI and running your first generations): kzbin.info/www/bejne/eXWUin-DftN5msU or look up a multi-video basic course, like this playlist from Olivio: kzbin.info/www/bejne/gn-ynZ5upN9kpLs
@twilightfilms9436
@twilightfilms9436 1 month ago
Does it work with batch sequencing?
@risunobushi_ai
@risunobushi_ai 1 month ago
I haven't tested it with batch sequencing, but I don't see why it wouldn't work in the version that doesn't require custom masks applied on the preview bridge nodes and instead relies on custom maps from load image nodes. I've got a new version coming on Monday that preserves details as well and can use automated masks from the SAM group; you can find the updated workflow on my openart profile in the meantime.
@Architectureg
@Architectureg 22 days ago
How do I make sure the input picture doesn't change in the output? It seems to change. How can I keep it exactly the same and just manipulate the light instead?
@risunobushi_ai
@risunobushi_ai 22 days ago
My latest video is about exactly that: I added both a way to preserve details through frequency separation and three ways to color match.
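The frequency separation idea mentioned in the reply above can be sketched in plain Python on a 1-D list standing in for an image row (this is not the actual ComfyUI graph; the function names are made up): blur gives the low-frequency base, the residual holds the fine detail, and the detail is re-applied on top of the relit base.

```python
# Hedged sketch of frequency separation for detail preservation.

def box_blur(signal, radius=1):
    """Average each value with its neighbours: the low-frequency base."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def split_frequencies(signal, radius=1):
    low = box_blur(signal, radius)
    high = [s - l for s, l in zip(signal, low)]  # fine-detail residual
    return low, high

def recombine(relit_low, high):
    """Re-apply the preserved detail on top of a (relit) low-frequency base."""
    return [l + h for l, h in zip(relit_low, high)]

original = [0.2, 0.8, 0.3, 0.9, 0.4]
low, high = split_frequencies(original)
rebuilt = recombine(low, high)  # low + high reconstructs the original
assert all(abs(a - b) < 1e-9 for a, b in zip(rebuilt, original))
```

In the relighting case you would recombine the *relit* low frequencies with the *original* high frequencies, so the new lighting lands on the base tones while edges and texture stay untouched.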
@cycoboodah
@cycoboodah 1 month ago
The product I'm relighting changes drastically. It basically keeps the shape but introduces too much latent noise. I'm using your workflow without touching anything, but I'm getting very different results.
@risunobushi_ai
@risunobushi_ai 1 month ago
That's weird; in my testing I sometimes get some color shift, but most of the time the product remains the same. Do you mind sending me the product shot via email at andrea@andreabaioni.com? I can run some tests on it and check what's wrong. If you don't want to or can't share the product, you could give me a description and I could try generating something similar, or look on the web for something similar that already exists.
@risunobushi_ai
@risunobushi_ai 1 month ago
Leaving this comment in case anyone else has issues: I tested their images and it works on my end. It just needed some work on the input values, mainly CFG and multiplier. In their setup, for example, a lower CFG (1.2-ish) was needed in order to preserve the colors of the source product.
@syducchannel9451
@syducchannel9451 1 month ago
Can you guide me on how to use IC-Light in Google Colab?
@risunobushi_ai
@risunobushi_ai 1 month ago
I'm sorry, I'm not well versed in Google Colab
@JavierCamacho
@JavierCamacho 1 month ago
Sorry to bother you, I'm stuck in comfyui. I need to add AI people to my real images. I have a place that I need to add people to, to make it look like there's someone there and not an empty place. I've looked around but came up short. Can you point me in the right direction?
@risunobushi_ai
@risunobushi_ai 1 month ago
Hey! You might be interested in something like this: www.reddit.com/r/comfyui/comments/1bxos86/genfill_generative_fill_in_comfy_updated/
@JavierCamacho
@JavierCamacho 1 month ago
@@risunobushi_ai i'll give it a try. Thanks
@JavierCamacho
@JavierCamacho 1 month ago
@@risunobushi_ai So I tried running it, but I have no idea what I'm supposed to do. Thanks anyway.
@StringerBell
@StringerBell 1 month ago
Dude, I love your videos but this ultra-closeup shot is super uncomfortable to watch. It's like you're entering my personal space :D It's weird and uncomfortable but not in the good way. Don't you have a wider lens than 50mm?
@risunobushi_ai
@risunobushi_ai 1 month ago
The issue is that I don't have any more space behind the camera to compose a different shot, and if I use a wider angle, some parts of the room I don't want to share come into view. I'll think of something for the next ones!
@yangchen-zd9zl
@yangchen-zd9zl 1 month ago
Hello, I am a ComfyUI beginner. When I used your workflow, I found that the light and shadow cannot be previewed in real time, and when relighting a previously generated photo, generation is very slow and the system reports an error: WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
@risunobushi_ai
@risunobushi_ai 1 month ago
Sorry, but I'll have to ask a few questions. What OS are you on? Are you using an SD 1.5 model or an SDXL model? Are you using the right IC-Light model for the scene you're trying to replicate (FBC for background relight, FC for mask-based relight)?
@yangchen-zd9zl
@yangchen-zd9zl 1 month ago
@@risunobushi_ai Sorry, I found the key to the problem. First, I did not watch the video tutorial carefully and skipped downloading fbc. Second, it was an image size problem: after downloading fbc, I adjusted the image size (512 × 512 pixels) and generation is much more efficient. Thank you very much for this video. In addition, I would like to ask: if I want to add other products to this workflow, that is, product + background with light source fusion, what should I do?
@risunobushi_ai
@risunobushi_ai 1 month ago
I cover exactly that (and more) in my latest live stream from yesterday! I demonstrate how to generate an object (but you can just use a load image node with an already existing picture), use Segment Anything to isolate it, generate a new background, merge the two together, and relight with a mask so that it looks both more consistent and better lit than just using the optional background option in the original workflow. For now, you'd need to follow the process in the livestream to achieve it. In a couple of hours I will update the video description with the new workflow, so you can just import it.
@yangchen-zd9zl
@yangchen-zd9zl 1 month ago
@@risunobushi_ai Thank you very much for your reply. I watched the live broadcast and learned how to blend existing images with the background. By the way, in the video I saw that the pictures you generated were very high-definition and close to reality, but when I generate them, the characters have some deformities and the faces become weird. I used the Photon model.
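For anyone hitting the SHAPE MISMATCH warning earlier in this thread: a plain-Python sketch (no torch; the constant names are made up) of what I believe is going on. A stock SD 1.5 UNet's first conv takes 4 latent channels, while the IC-Light models widen it to accept extra concatenated latents (8 channels for FC, 12 for FBC, per the IC-Light repo), so patching the wrong or missing model leaves mismatched shapes that can't be merged:

```python
# Hedged sketch of the WEIGHT NOT MERGED warning: elementwise weight merging
# requires identical tensor shapes, and the conv_in shapes differ between a
# stock SD 1.5 UNet and an IC-Light-patched one.

SD15_CONV_IN = (320, 4, 3, 3)        # stock SD 1.5 input_blocks.0.0.weight
ICLIGHT_FC_CONV_IN = (320, 8, 3, 3)  # FC: noisy latent + foreground latent
ICLIGHT_FBC_CONV_IN = (320, 12, 3, 3)  # FBC: + background latent

def mergeable(shape_a, shape_b):
    """Weights can only be merged elementwise when shapes match exactly."""
    return shape_a == shape_b

# Matches the error in the comment above: (320, 8, 3, 3) != (320, 4, 3, 3)
assert not mergeable(SD15_CONV_IN, ICLIGHT_FC_CONV_IN)
```

So the warning is the merge being skipped, which is why downloading the right model (fbc for background relight) made it go away.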
@houseofcontent3020
@houseofcontent3020 1 month ago
I'm trying to work with the background and foreground image mix workflow you shared and I keep getting errors, even though I carefully followed your video step by step. Wondering if there's a way to chat with you and ask a few questions. Would really appreciate it :) Are you on Discord?
@risunobushi_ai
@risunobushi_ai 1 month ago
I'm sorry, but I don't usually do one-on-ones. The only error screens I've seen in testing are due to mismatched models. Are you using a 1.5 model with the correct IC-Light model? i.e. FC for no background, FBC for background?
@houseofcontent3020
@houseofcontent3020 1 month ago
That was the problem. Wrong model~ Thank you :) @@risunobushi_ai