Since I made this video I added a "precise style transfer" node to the IPAdapter. You can use that instead of fiddling with the Mad Scientist. It also works with SD1.5 (to some extent). Also, since I've been asked quite a few times now... sorry, we do not have exact data on what each block does. 3 and 6 are pretty strong so they were easy, but the other layers also have some impact on both the composition and the style. Some seem to affect text, others the background, others age. But at the moment there doesn't seem to be a "definitive guide". I would have told you otherwise 😅
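For anyone scripting these experiments, the per-block idea boils down to a map from block index to weight, written as a string like `3:2.5,6:1`. A minimal sketch of that idea — the function names here are illustrative, not the actual IPAdapter API; only the "3 is strong for composition, 6 for style" observation comes from the video:

```python
# Hedged sketch of per-block weighting. Block roles (3 = composition,
# 6 = style) come from the video; the helpers below are illustrative only.

def parse_layer_weights(spec: str) -> dict[int, float]:
    """Parse a Mad-Scientist-style string like '3:2.5,6:1' into {block: weight}."""
    weights = {}
    for entry in spec.split(","):
        idx, w = entry.strip().split(":")
        weights[int(idx)] = float(w)
    return weights

def block_weight(weights: dict[int, float], block_idx: int, default: float = 0.0) -> float:
    # Blocks not mentioned in the string fall back to a default weight.
    return weights.get(block_idx, default)

w = parse_layer_weights("3:2.5,6:1")
assert w == {3: 2.5, 6: 1.0}
assert block_weight(w, 5) == 0.0   # unmentioned block
```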
@flankechen2 ай бұрын
thanks a lot! So in SD1.5, which block is for style and which for composition?
@CaraDePatoGameplays2 ай бұрын
This intrigued me, I'm going to do a lot of tests to see what they do besides 3 and 6
@animatedstoriesandpoemsАй бұрын
Why do none of your tutorials ever work when we try them??
@831digital3 ай бұрын
Best Comfyui channel on KZbin.
@miguelitohacks3 ай бұрын
x4096 agree
@leolis783 ай бұрын
Matteo, your work is amazing! You are our Dr. Brown. Our mad scientist who will give 1.21 Gigawatts to the AI to take us to the future. We love you!!! 😄😄😄
@latentvision3 ай бұрын
just doing my part!
@ooiirraa3 ай бұрын
@@latentvision and we are doing our part loving you and being grateful 🎉
@caseyj7894563 ай бұрын
Yeah you are our mad scientist 😂 ❤ Merci Mateo !
@MarcSpctr3 ай бұрын
This guy is literally equivalent to what Piximperfect is to Photoshop. I doubt even the people who worked on SDXL had any idea that this much stuff and control can be gained over the models. Like seriously, wtf ???? Amazing work.
@saschamrose64983 ай бұрын
I would say more like what Video Copilot is to After Effects
@latentvision3 ай бұрын
nnaah I guess that the difference is just that I actually share what I find
@GG-hh1sl3 ай бұрын
@@latentvision lol
@DarioToledo3 ай бұрын
Unm3sh
@rhaedas90853 ай бұрын
@@latentvision Share, and explain. You're like that one teacher that didn't just show you the math formula, but showed why it was important and how to use it practically.
@moviecartoonworld44593 ай бұрын
"While keeping up with the influx of new features is important, I'm reminded again of the value of in-depth understanding of a single function. Thank you as always."
@Showdonttell-hq1dk3 ай бұрын
This is so incredibly cool! Thank you very much. I can't even imagine how nerve-wracking and exciting the coding was for this. :)
@Archalternative2 ай бұрын
Matteo, your work is really incredible... 🎉
@gsMuzak3 ай бұрын
you're the man, thanks for all these tutorials!
@jibcot85413 ай бұрын
Very cool, I need to play with IPAdapter more often, but I am often too busy just improving prompts and upscale workflows!
@madmushroom86393 ай бұрын
Very cool! Would love to see some coding sessions. Maybe you could explain your code a bit. More info about the vector sizes, layers etc :)
@latentvision3 ай бұрын
I was thinking about that... not sure how much interest there would be in that though
@madmushroom86393 ай бұрын
@@latentvision Yeah maybe, but your "ComfyUI: Advanced Understanding (Part 1)" video actually performed really well I think, and that's where you went into more detail. That plus some code examples of what's going on behind the scenes, with your knowledge, would be awesome! Maybe a small poll could show if it's worth your time :)
@fukong3 ай бұрын
God of IPAdapter
@Firespark813 ай бұрын
This is awesome! ty!
@noxin73 ай бұрын
Mateo, This is amazing work with the mad scientist node - My only question (not criticism) is if you plan to convert the index:weight string into widgets for ease of use or is there something that prevents that?
@latentvision3 ай бұрын
yeah I can do that :)
@igorkotov89373 ай бұрын
Thank you!
@MrGingerSir3 ай бұрын
This is awesome! Are you planning on making a version that works with embeds?
@latentvision3 ай бұрын
why not :)
@MrGingerSir3 ай бұрын
@@latentvision sweet!
@SouthbayCreations3 ай бұрын
Great video, thank you! Where can we find this node?
@2shinrei3 ай бұрын
🤯
@yvann.mp43 ай бұрын
amazing, thanks a lots
@gsMuzak3 ай бұрын
a newbie question (maybe): index 3 is composition and 6 is style, but what are the others? I don't remember if you've already talked about them in your other IPAdapter videos
@rhaedas90853 ай бұрын
Look at his video from a few weeks ago about prompting the individual UNet blocks; that's what's going on here. There's still a lot to figure out, and some blocks may depend on others, so it's not as clear cut as these two.
@gsMuzak3 ай бұрын
@@rhaedas9085 thanks
@aidiffuser3 ай бұрын
Hello man, thanks for sharing this amazing improvement in control! Did something change in style transfer and composition between the release from 2 days ago and this one? I can't seem to reproduce the same results :( Or is there a way to reproduce the exact layer weights of that previous release within the Mad Scientist node?
@latentvision3 ай бұрын
no, style and composition should be the same. If you have issues please open an issue on the official repository, with before/after images if possible
@АлексейЧерников-б2е3 ай бұрын
CosXL-edit does not work if the source image is large (mine is 3840*2160)
@anton538122 күн бұрын
Thanks for the video! But I can't make it work... I do everything like you but I get: "Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1664])." Could you please help me?
@nkofr3 ай бұрын
Hi, thanks, wonderful! I just don't understand the point of this custom node having a "weight_type" field if we modify the layers' weights in the bottom input field. Is "weight_type" overridden by the values in the input field?
@latentvision3 ай бұрын
"style transfer precise" uses a different strategy to apply the embeds. You need to use it only if you want to do the style transfer thing. If you want to experiment with blocks you can select whatever and it will be overwritten (except again "precise")
@nkofr3 ай бұрын
@@latentvision Thank you Matteo, that's awesome! Grazie
@elegost25702 ай бұрын
@latentvision Is it possible to combine the image-to-image workflow with even more control to resemble the input? I.e., ControlNet-type options.
@latentvision2 ай бұрын
yes of course!
@elegost25702 ай бұрын
@@latentvision do you have any pointers in that regard? I’ve tried a few things but keep getting errors :(
@AlyValley3 ай бұрын
Subscribed, there is real high value here. I'm just starting my channel, and I would love to give you a shout-out on it, but I'm targeting the digital marketing niche and still figuring out how to bring AI into the marketing and advertising industry. Thank you for being generous :)
@flankechen2 ай бұрын
amazing work! Has anyone tested the Mad Scientist with SD1.5? How does injecting attention into a specific block work there?
@latentvision2 ай бұрын
I made a new "precise style transfer" node that should work with SD1.5 and makes the whole process simpler
@afrosymphony82073 ай бұрын
please is the prompt injection node out yet???
@farey13 ай бұрын
Any clue why I am not able to run the CosXL workflow, please? I keep getting this error all the time:

Error occurred when executing SamplerCustomAdvanced: tuple index out of range

File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 557, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 684, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 663, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 568, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\k_diffusion\sampling.py", line 599, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 291, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 650, in __call__
    return self.predict_noise(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 469, in predict_noise
    out = comfy.samplers.calc_cond_batch(self.inner_model, [negative_cond, middle_cond, self.conds.get("positive", None)], x, timestep, model_options)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 226, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\model_base.py", line 113, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 887, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 26, in __call__
    out = out + callback(out, q, k, v, extra_options, **self.kwargs[i])
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 139, in ipadapter_attention
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 139, in <listcomp>
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)

Queue size: 0
Extra options: ipadapter_cosxl_edit
@latentvision3 ай бұрын
you probably don't have enough VRAM and/or RAM
@farey13 ай бұрын
@@latentvision Thank you. I have 64GB of RAM and an RTX 4080 Super GPU. Do you think that's not enough?
@latentvision3 ай бұрын
@@farey1 there must be something eating up your resources. it's hard to help you from a YT comment
@ParrotfishSand3 ай бұрын
🙏
@altergamingteam3 ай бұрын
tfw reality is just a comfyui workflow
@sephia45833 ай бұрын
Is there a similar way to apply a LoRA's style to only a specific layer? Maybe we could apply a negative weight to the composition layer (e.g. layer 3) and a positive weight to the style layer (e.g. layer 6)?
@mickmack80264 күн бұрын
This is awesome, thank you so much! Could the Mad Scientist also be used with Flux?
@johnsondigitalmedia3 ай бұрын
Awesome work! Do you have the info on the other 10 control index points?
@DarkGrayFantasy3 ай бұрын
As always amazing work Matt3o! For those interested in the cross-attention indexes, this is what they target:
1) General Structure
2) Color Scheme
3) Composition
4) Lighting and Shadow
5) Texture and Detail
6) Style
7) Depth and Perspective
8) Background and Environment
9) Object Features
10) Motion and Dynamics
11) Emotions and Expressions
12) Contextual Consistency
@stefansotra29343 ай бұрын
Where did you get this info?
@DarkGrayFantasy3 ай бұрын
@@stefansotra2934 Research really, nothing more...
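The 12 labels listed above are the commenter's own guesses, and Matteo says elsewhere in the thread that no verified mapping exists. Still, if you want to keep such labels next to your experiments, a small hypothetical helper could annotate a weight string with them (labels unverified, helper illustrative):

```python
# The labels below are the commenter's unverified guesses, kept only for
# annotation; Matteo notes in this thread that no definitive mapping exists.
BLOCK_LABELS = {
    1: "composition" and "general structure", 2: "color scheme", 3: "composition",
    4: "lighting and shadow", 5: "texture and detail", 6: "style",
    7: "depth and perspective", 8: "background and environment",
    9: "object features", 10: "motion and dynamics",
    11: "emotions and expressions", 12: "contextual consistency",
}
BLOCK_LABELS[1] = "general structure"  # keep entry 1 as the list above states it

def describe(spec: str) -> list[str]:
    """Annotate a '3:2.5,6:1' weight string with the guessed labels."""
    out = []
    for entry in spec.split(","):
        idx, w = entry.split(":")
        out.append(f"block {idx} ({BLOCK_LABELS.get(int(idx), '?')}): {float(w)}")
    return out

print(describe("3:2.5,6:1"))
# ['block 3 (composition): 2.5', 'block 6 (style): 1.0']
```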
@ceegeevibes13353 ай бұрын
wow cool... thanks!
@walidflux2 ай бұрын
is 12 the 0.0 index? If there is a clearer description of all of these, please link it
@Arknight-p2l3 ай бұрын
You are a mad scientist haha thank you so much Matteo
@latentvision3 ай бұрын
mad for sure, scientist not so much 😅
@Arknight-p2l3 ай бұрын
@@latentvision haha 😂 keep up the great work I love your content.
@GG-hh1sl3 ай бұрын
How about a widget setting in the IPAdapter node to set the strength of each layer, with a short label of its function?
@latentvision3 ай бұрын
unfortunately we don't know exactly what the function of each layer is
@dreammaking5163 ай бұрын
Insanely cool, also just realized, you are italian as well😂🔥
@lonelyeyedlad7693 ай бұрын
Great work as usual, M! I am happy to see that the group experimentation with the UNET layers has led to the development of a node that will give us more control over our generations. Thank you for your continued efforts in this field!
@lucagenovese72073 ай бұрын
07:20 that stuff is fucking insane.
@swannschilling4743 ай бұрын
I'll take the blue pill!! 😁 Thanks so much for this one!! 💊
@michail_7773 ай бұрын
And one more question. Where can I find an explanation of the index/Cross Attention?
@TriNguyenKV2 ай бұрын
when it comes to teaching and concise explaining, you are the GOAT!!!! Thank you so much, please keep doing this. Thank you!
@lijiang-g2s3 ай бұрын
When I type 3:2.5,6:1 it always gives an error:

Error occurred when executing IPAdapterMS: not enough values to unpack (expected 2, got 1)

File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\sd/comfyui/ComfyUI_windows_portable/ComfyUI/custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 763, in apply_ipadapter
    work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
File "E:\sd/comfyui/ComfyUI_windows_portable/ComfyUI/custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 253, in ipadapter_execute
    weight = { int(k): float(v)*weight for k, v in [x.split(":") for x in layer_weights.split(",")] }
File "E:\sd/comfyui/ComfyUI_windows_portable/ComfyUI/custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 253, in <dictcomp>
    weight = { int(k): float(v)*weight for k, v in [x.split(":") for x in layer_weights.split(",")] }
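The traceback above ends in the node's string-parsing line, which splits the weight string on "," and each piece on ":". "not enough values to unpack (expected 2, got 1)" means one comma-separated piece contained no ASCII colon at all, typically because a full-width "：" or "，" was typed with a CJK IME, or there was a stray trailing comma. A standalone sketch of that one line (simplified from the code quoted in the traceback, not the full node):

```python
# Simplified version of the parsing line shown in the traceback above.
def parse(layer_weights: str, weight: float = 1.0) -> dict[int, float]:
    return {int(k): float(v) * weight
            for k, v in [x.split(":") for x in layer_weights.split(",")]}

assert parse("3:2.5,6:1") == {3: 2.5, 6: 1.0}   # the intended input works

# A full-width colon (：) — easy to type with a CJK IME — leaves a chunk
# without any ':' to split on, which raises exactly this error. A trailing
# comma (an empty last chunk) fails the same way.
try:
    parse("3：2.5,6:1")
except ValueError as e:
    print(e)   # not enough values to unpack (expected 2, got 1)
```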
@jensenkung2 ай бұрын
7:20 my jaw literally dropped
@openroomxyz3 ай бұрын
Thanks, that's cool. Amazing findings that will help the community
@TheCrash150920 күн бұрын
Thank you so much for these tutorials, I was struggling to understand and use Stable Diffusion, but have made so much progress since trying out ComfyUI paired with your tutorials. Please keep making the content you do, it's the best in class!
@AI.Absurdity3 ай бұрын
@StudioOCOMATimelapse3 ай бұрын
Very good as always Matteo. Can you explain all the indexes please? I've noticed only 3:
3: Reference image
5: Composition
6: Style
@adelechelmany3 ай бұрын
🫡👏👏
@pedrogorilla4832 ай бұрын
Did anyone ever figure out what each block of the UNet does? When I was obsessively trying to understand how Stable Diffusion works, I went deep into it but could never get a straight answer. Also, what processes are involved in each block? If I remember correctly each block has layers within it, with ResNets and other things above my pay grade. If anyone can point me to a resource I'd appreciate it 🙏
@tofu16873 ай бұрын
... It feels like SD3 is going to have a very hard time
@euroronaldauyeung86253 ай бұрын
genius hacking of cross attention and perfect explanation of the indexing.
@neofuturist3 ай бұрын
UPDATE ALL THE NODES!!!! thanks Matteo
@kinai_44143 ай бұрын
Damn, that's impressive. Could the same logic be applied to a LoRA node in the future?
@gammingtoch25912 күн бұрын
Thanks bro! Very useful :D
@legendaryanime693 ай бұрын
Always waiting for your great videos, they help me a lot! Thanks
@mariopt3 ай бұрын
Thanks a lot for this new node, really appreciate it.
@bgmspot72423 ай бұрын
Nice❤❤
@GG-hh1sl3 ай бұрын
Just found the node today and was wondering about its use - thanks for sharing the knowledge!
@quotesspace17133 ай бұрын
Thanks, that's really cool 🙏🙏. But is it just me? I found almost everything too advanced and couldn't understand what's going on, yet I would really love to understand it in depth so that I can add my own work and share it. I do have some knowledge of ComfyUI, but this is...
@latentvision3 ай бұрын
check the "basics" series!
@TheFountainOfEnlightenment28 күн бұрын
Awesome!, the best tuts I've seen. Thanks
@huwhitememes3 ай бұрын
Awesome, Bro
@MikeTon2 ай бұрын
Amazing and insightful work! A question with regard to sponsorship: do you have a preference between GitHub and Patreon? I'm getting so much value here that I want to meaningfully support you, and I will default to GitHub if there's no preference
@latentvision2 ай бұрын
hey thanks! I don't use patreon because I don't have time to push updates. Either github or paypal at the moment!
@Sedtiny3 ай бұрын
Thank you again, my lord
@latentvision3 ай бұрын
most welcome, my liege
@nicolasmarnic3993 ай бұрын
Sup buddy! "Workflows are available in the example directory", but I don't see the download link :( Please help :)
@latentvision3 ай бұрын
in the official repository of the IPAdapter
@nomand3 ай бұрын
incredible. Apart from style and composition, has the community found consensus on what specific qualities of the image other indexes affect?
@latentvision3 ай бұрын
not really unfortunately
@denisquarte71773 ай бұрын
"We fail to understand what we already have" - cries in GLIGEN conditioning
@latentvision3 ай бұрын
so true
@baseerfarooqui58973 ай бұрын
hi, thanks for this great tutorial. I'm getting an error while executing: "IPAdapter object has no attribute 'apply_ipadapter'". I tried using SD1.5 checkpoints as well as SDXL, but I get the same.
@latentvision3 ай бұрын
maybe it's an older version, or an old workflow, or simply browser cache
@Mika433443 ай бұрын
W O W!!! AMAZING!
@GiovaniFerreiraS3 ай бұрын
Is this an evolution of the block-by-block prompting thing? I remember you saying in that video that nothing stopped you from using images.
@latentvision3 ай бұрын
the technology is the same but technically we did this before the prompt injection. Visual embeddings are easier to evaluate
@Freezasama3 ай бұрын
How do I use the Mad Scientist node? I can't find it :/
@jasonchen11393 ай бұрын
Incredible content! Your work is undoubtedly the best!
@madmed6672Ай бұрын
you're literally the goat my guy!
@pandelik34502 ай бұрын
So basically the selectable weight type options are just preset lists of 12 values for each of the 12 blocks?
@latentvision2 ай бұрын
the weight types do a little more than that, but yeah, they play with the block weights
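Matteo confirms above that the weight types mostly "play with the block weights". As a purely hypothetical illustration of that idea (the curves below are invented for the sketch; they are NOT the actual presets shipped with ComfyUI_IPAdapter_plus), a weight type can be seen as a function expanding one slider value into 12 per-block values:

```python
# Hypothetical illustration only: how a single "weight" slider could expand
# into 12 per-block values. These curves are made up, not the real presets.
N_BLOCKS = 12

def expand(weight: float, weight_type: str) -> list[float]:
    if weight_type == "linear":
        return [weight] * N_BLOCKS                              # same everywhere
    if weight_type == "ease in":
        return [weight * (i + 1) / N_BLOCKS for i in range(N_BLOCKS)]   # ramp up
    if weight_type == "ease out":
        return [weight * (N_BLOCKS - i) / N_BLOCKS for i in range(N_BLOCKS)]  # ramp down
    raise ValueError(f"unknown weight_type: {weight_type}")

assert expand(1.0, "linear") == [1.0] * 12
assert expand(1.0, "ease in")[0] < expand(1.0, "ease in")[-1]
```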
@divye.ruhela3 ай бұрын
Impeccable naming, we're all a little mad by now 🤣
@rsunghun3 ай бұрын
Absolutely amazing 😮
@krio_gen3 ай бұрын
Unbelievable.
@latentvision3 ай бұрын
believe it!
@krio_gen3 ай бұрын
@@latentvision ))) I dived into it with my head. I feel like a Mad Scientist)
@zhenyu27143 ай бұрын
hi matteo! I wonder: if I choose different weight types and set all layers to 0 except the sixth layer at 1, I find the result is always the same as the default style transfer. Does that mean style transfer is the sixth layer at 1 with the other 11 layers at 0, and style transfer precise is the third and sixth layers at 1 with the other 10 at 0?
@latentvision3 ай бұрын
precise is negative composition (layer 3) and positive style (layer 6)
@autonomousreviews25213 ай бұрын
Love what you're doing for the community - thank you for your time and for sharing :D
@dck70483 ай бұрын
Image gen is a tech that seemed science fiction a couple years ago, but to have refined it to the point people in their homes can casually do generations like 7:19 is nothing short of outstanding. Thanks as always.
@angry_moose943 ай бұрын
I can't find the style transfer precise on the list. Is it the same as "strong style transfer"?
@angry_moose943 ай бұрын
nevermind, just had to update the custom nodes!
@Billybuckets3 ай бұрын
Until I use this a *lot*, I will have no idea what the different UNet blocks do. Maybe you could put a Note node in the pack that contains an estimate of the relative contribution of each block to style, composition, and anything else that might be useful. A++ work as always. Best SD channel around.
@latentvision3 ай бұрын
unfortunately we don't know exactly what the blocks do
@walidflux3 ай бұрын
Again, blowing minds !!!!
@abdellahla61593 ай бұрын
Great node, thanks a lot 😁
@rajahaddadi22743 ай бұрын
@Showdonttell-hq1dk3 ай бұрын
It wasn't hidden, I played around with it weeks ago ;)
@latentvision3 ай бұрын
hidden not invisible :P
@Showdonttell-hq1dk3 ай бұрын
@@latentvision Hahaha, true!:) Thanks again for all the work, the IP Adapter Plus nodes are almost all you need to realise image ideas via ComfyUI. And I agree with you: understanding the given tools and utilising their possibilities is much better in the long run than constantly relying on new technologies that can fulfil our wishes at the push of a button. As fast and dynamic as the whole development is, your work clearly shows that targeted optimisation based on understanding is the right, i.e. practically successful, approach.
@calvinherbst3043 ай бұрын
dying to know what the other index blocks are!
@latentvision3 ай бұрын
don't we all?! 😄
@ryanontheinside3 ай бұрын
this is awesome thank you
@nelsonporto3 ай бұрын
GENIUS
@zheshi98093 ай бұрын
6666
@ceegeevibes13353 ай бұрын
love love love this, going MAD!!!!
@SerginMattos3 ай бұрын
Your work is amazing!
@johnriperti31273 ай бұрын
Thanks Matteo, this is so good!
@chriswendler546411 күн бұрын
Is there a research paper that we could cite already referring to these layers as composition/style layers?
@latentvision11 күн бұрын
not that I know of
@chriswendler546411 күн бұрын
I will cite your video for now
@vf4am2Ай бұрын
This is pretty awesome. Great work! I have a question about the cross attention indexes. Are they tied to output or input blocks in terms of merging? I am wondering if this could help to find the best blocks to merge to for more precision.
@latentvisionАй бұрын
yeah each index is a block, I use index number instead of block number because it's easier
@vf4am2Ай бұрын
@@latentvision thanks for the follow-up. I guess what I was wondering is: with block weights, input and output each have 11 blocks, so if I were to equate the effect of each index to a block to merge, would index 6 equal output block 6, input block 6, or both?
@latentvisionАй бұрын
@@vf4am2 index is a reordered list of blocks input > mid > output. ping me on discord if you need a 1:1 relationship
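A tiny sketch of the ordering Matteo describes above: the index is just a flat enumeration of the attention-bearing blocks, input first, then mid, then output. The block names and counts below are placeholders for illustration, not the exact SDXL layout (ping him on Discord for the real 1:1 mapping, as he says):

```python
# Illustrative only: the index is a flat enumeration over input > mid > output.
# These block names/counts are placeholders, NOT the exact SDXL block layout.
input_blocks  = ["in_a", "in_b", "in_c", "in_d"]
middle_blocks = ["mid"]
output_blocks = ["out_a", "out_b", "out_c", "out_d", "out_e", "out_f", "out_g"]

flat = input_blocks + middle_blocks + output_blocks
index_to_block = dict(enumerate(flat))   # Mad Scientist index -> block

assert len(index_to_block) == 12
assert index_to_block[4] == "mid"        # mid comes right after the inputs here
```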
@davidsmith-lv4kq3 ай бұрын
In the video, at the part where you describe layer 3 at 2.5 as being a negative value: what makes the positive value become negative?
@PCO773 ай бұрын
if weight_type == "style transfer precise":
    if layers == 11 and t_idx == 3:
        uncond = cond
        cond = cond * 0

The number itself is not converted to a negative number. uncond is set to cond and cond is then zeroed out. You're then increasing the strength of uncond (negative) with higher values in layer 3.
@latentvision3 ай бұрын
negative conditioning, not negative value
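PCO77's snippet above can be restated as a runnable toy, using a plain number in place of the conditioning tensor (the real code operates on image embeds inside the cross-attention patch; this only illustrates the swap):

```python
# Toy restatement of the snippet above, with a float standing in for the
# conditioning tensor. At the composition block (index 3) the reference embed
# moves from the positive to the negative conditioning, which is why a larger
# layer-3 weight pushes the composition *away* from the reference image.
def precise_patch(cond: float, uncond: float, t_idx: int,
                  weight_type: str = "style transfer precise"):
    if weight_type == "style transfer precise" and t_idx == 3:
        uncond = cond      # reference embed becomes a negative
        cond = cond * 0    # and is removed from the positive
    return cond, uncond

assert precise_patch(2.5, 0.0, t_idx=3) == (0.0, 2.5)   # composition block swapped
assert precise_patch(2.5, 0.0, t_idx=6) == (2.5, 0.0)   # style block untouched
```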
@TrungTran-m6i3 ай бұрын
Hi, I pulled the latest on the main branch but could not see "style transfer precise", only "strong style transfer". Which version are you using? Thanks for the video!
@xulingjie3 ай бұрын
you can update the node to the latest version to see it
@latentvision3 ай бұрын
just refresh the page, it's a browser caching issue
@GamerSulwood3 ай бұрын
Try uninstalling and re-installing, that worked for me
@BubbleVolcano3 ай бұрын
Nice work! ❤ It's awesome to see real progress at the UNet-layer level. But having too many parameters can make it tough to get started, even for someone like me who's been at it for over a year; it's just too challenging for ordinary people. If the fill-in parameter were changed to four simple options like A/B/C/D, it might be easier to promote. Ordinary people aren't into the process; they're all about the end result.
@Nairb9323 ай бұрын
Keep up the good work man
@latentvision3 ай бұрын
I try
@AIFuzz593 ай бұрын
Do you have a list of what the other index layers are? We are experimenting with this now
@latentvision3 ай бұрын
no, it's difficult to understand. Some are subject-specific, for example (e.g. they work with people, not with landscapes)