Saying thanks won't be enough for your hard work. You are a true legend. I started learning this a couple of weeks back. Still learning the basics.
@scurvion2 ай бұрын
You're the person who presents what we're looking for while we're still trying to understand and think. Thank you, Matteo. We are eagerly waiting for the IP adapter :)
@jtabox2 ай бұрын
Everyone's praising you already but I gotta do it too. Your work with your vids is of really high scientific quality. I subbed to your channel and watched some vids a while ago, but lately I "rediscovered" them and I've been watching many for the 2nd and 3rd time. As I'm training, learning and gaining experience, the same video can provide different information, depending on my level.
@TheAutisticRebelАй бұрын
I am having the same experience. What a difference time and experience make!
@azzordiggle51432 ай бұрын
The man just presents small tricks casually; I had to pause the video every few seconds and try to learn.
@JT-cs8wb2 ай бұрын
Great video! 14:40 I've been using 1.0 for all my training samples and I couldn't figure out where I was going wrong. Thanks!
@KarlitoLatent2 ай бұрын
Thanks, I was cracking my head over the noise problem... in the end I just removed the noise with a latent SDXL upscale... now I know it's the max_shift/base_shift ratio 😁
@RyanGerhardtYimimotoАй бұрын
Very informative, thank you! Gave me a lot more understanding under the hood and what all the parameters actually do haha
@mariokotlar3032 ай бұрын
Thanks for doing such thorough research and reporting on it in such detail. And thank you for mentioning that existing IP Adapter. Seeing how quickly it was released and that you weren't involved made me not even bother trying it, guess I was right to assume it wasn't very good. Waiting for the real deal as long as it takes ♥
@sendorian2 ай бұрын
You’re incredible! So much work, well analyzed and perfectly presented! This has brought me so much new knowledge I have to think about. I just love your content!
@HiProfileAI2 ай бұрын
Thanks once again. I just love the way you humbly and honestly share your knowledge and experiences with the community. This is truly appreciated.
@ytkeebOo2 ай бұрын
The most useful video I've watched so far about Flux and Comfy. Thanks.
@leolis782 ай бұрын
Hi Matteo, thanks for the second Flux video, your in-depth analysis and almost surgical tests are amazing. Thanks for sharing your knowledge with the community! 👏
@DranKof2 ай бұрын
Thanks so much for this video. I've been wanting to compare values for image generation somehow but hadn't found a good way yet. This is perfect!
@AmrAbdeenАй бұрын
the amount of work put into this video is mindblowing, thank you sir
@CrunchyBagpipe2 ай бұрын
I'm throwing a celebratory party when you release IPAdapter for Flux, there will be cake.
@latentvision2 ай бұрын
save me a slice!
@Wodz302 ай бұрын
@@latentvision the cake is a lie
@latentvision2 ай бұрын
@@Wodz30 noooooooooooooooooooooooooooooooooooooo!
@TheAutisticRebelАй бұрын
I'll bring the digital cake... its endless!
@dale659812 ай бұрын
Incredible work. Invaluable information as always! I was struggling with the exact same thing and then you post this absolute gem of a video.
@TheAutisticRebelАй бұрын
Wow sir. Wow. You are a special kind of angel. As someone who uses your *_truly essential products_* in their workflow I am in *awe* of you!!! What AN ANGEL unto this community you are!!! I am trying to get past the flood of basic videos flooding YT... You are THE SIREN SONG captain wearing your blindfold leading the space forward... Absolutely *_UNFATHOMABLE_* CREDIT GOES TO YOU!!! YOU ARE AN *ANGEL!*
@KDawg50002 ай бұрын
I sort of did my own testing early on w/Flux and settled on 25 steps to be my go-to standard.
@latentvision2 ай бұрын
that seems a good standard
@AliRahmoun2 ай бұрын
Never miss a video by Matteo! thank you for sharing your research!
@Firespark812 ай бұрын
Fantastic as always. Thank you!
@ArrowKnow2 ай бұрын
Always look forward to your deep dives that teach us how to use the latest tech. While I don't understand most of it, you help me to understand enough to experiment. I'll add my thanks to everyone else's
@Tusanguchito2 ай бұрын
Hey Matteo, thanks for sharing as always! Off topic: how are you doing? I was wondering if something changed in your voice, or if it's just the mic. Anyway, just checking that you're good. I want to let you know I admire you, and your hard work provides tons of value. Thanks for all you do, and let us, the audience, know whatever you need or want from us 🙂 you really rock
@latentvision2 ай бұрын
lol, thanks, yeah, a bit of a sore throat. Nothing too bad, but it took a little more effort to talk
@VadimMakarev3 күн бұрын
OMG! You have done a gigantic job! This is really helpful, thank you!
@vieighnsche2 ай бұрын
Thank you so much for your content. Still a bit confused about shifts. And I'm a big fan of your essentials pack
@latentvision2 ай бұрын
shift kind of moves the target where the image converges. more shift will require more steps to denoise but if you add a little you may end up with more details (because the image has more noise)
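For anyone wondering what base/max shift do mathematically, here is a rough sketch of the sigma shifting used by Flux. The interpolation constants follow the reference Flux sampling code (256 and 4096 are the latent token counts at which base_shift and max_shift apply); the function names are illustrative, not an actual API:

```python
import math

def flux_mu(image_seq_len: int, base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    # mu interpolates linearly between base_shift (at 256 latent tokens)
    # and max_shift (at 4096 tokens), so larger images get a stronger shift.
    m = (max_shift - base_shift) / (4096 - 256)
    b = base_shift - m * 256
    return image_seq_len * m + b

def flux_time_shift(mu: float, sigma: float) -> float:
    # Push a sigma toward the noisy end of the schedule. A larger mu keeps
    # more noise at each step, which is why higher shift values need more
    # steps to fully denoise (and why a little extra can add detail).
    return math.exp(mu) / (math.exp(mu) + (1 / sigma - 1))

# A 1024x1024 image -> 128x128 latent -> 64x64 patches = 4096 tokens
mu = flux_mu(4096)
shifted = flux_time_shift(mu, 0.5)
```

With mu = 0 every sigma is left untouched; raising mu bends the whole schedule so that each step sees a noisier latent than it would otherwise.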
@gohan20912 ай бұрын
@@latentvision Is this relation to max shift or base shift? (or both?)
@Atomizer742 ай бұрын
Certainly been interested in Flux since it was released. I wish I could do more, but unfortunately each image takes me about 5-10 minutes under normal circumstances, so it's hard for me to run mass testing like I did for SD1.5 and SDXL. This video presented many of the kinds of tests I would normally do, so thank you for that! :)
@PulpoPaul282 ай бұрын
you are simply the best. there is no one even near. GOAT
@lgas07able2 ай бұрын
Incredible work, thank you for all this. I myself have been trying to figure out this model for almost a month, and my conclusions are about the same as yours: the model is very unstable and very specific, and you can't just work with it casually. The same applies to training: trainings seem successful at first glance, but once we start checking them in a real workflow, everything starts to fluctuate from side to side, depending on the seed, the scheduler, the number of steps, the image resolution, and many other factors. Thank you again, I appreciate your work.
@AustinHarttАй бұрын
🎯 Key points for quick navigation:

00:00:00 *⚡ Testing Flux Model with Various Parameters*
- The speaker discusses challenges in achieving consistent results with the Flux model.
- Tested combinations of samplers, schedulers, guidance, and steps on multiple setups.
- Initial findings indicated complex model behavior with variable results.

00:05:05 *🖼️ Optimal Number of Steps for Image Convergence*
- Portraits need up to 40 steps for clear and detailed convergence.
- Different prompt lengths didn't significantly affect convergence.
- Simple prompts might stabilize quicker, but detailed prompts need more steps.

00:09:08 *🔄 Evaluating Samplers and Schedulers*
- Multiple sampler and scheduler combinations were tested across different step thresholds.
- Certain samplers and schedulers handle noise or detail better at fewer steps.
- No specific sampler outperforms the others universally; it's subject-dependent.

00:11:16 *🎨 Handling Illustrations and Style Adjustments*
- Illustrations vary significantly in style across samplers and schedulers with step adjustments.
- Higher step counts shift illustration style from clip art to detailed digital art.
- Illustration outcomes are subjective and vary with personal taste and required detail.

00:13:42 *🧭 Guiding Principles for Adjusting Guidance*
- Default guidance is reliable but can be adjusted for better results depending on the artistic need.
- Lower guidance can lead to washed-out styles, which might be desirable in certain scenarios.
- Too much guidance might lead to hallucinations, especially in text rendering.

00:16:21 *🌀 Role of Shift Parameters in Image Quality*
- Base and max shift impact noise and clarity, with distinct variations across image sizes.
- High shift might introduce noise but also increased detail; ideal values are around the defaults.
- Shift can subtly enhance or negatively affect details and artifacts.

00:19:18 *🔍 Experimenting with Attention Patching*
- A new Flux attention patching technique allows more control over individual blocks.
- Adjustments to query, key, and value can lead to varied artistic results.
- The exact function of each block remains complex and exploratory.

00:23:12 *💬 Conclusion and Future Potential of Flux*
- Flux is intricate, often giving rigid results, but shows potential within certain domains.
- Current limitations exist, awaiting enhancements like better adapters.
- The model holds promise, yet practical utility requires further development.
@GrunachoАй бұрын
Thank you very much for the very useful video! Always learning a lot. Keep the amazing work. Looking forward for your ipadapter 👍
@LuckRenewal25 күн бұрын
yes, your video is always useful! thank you so much for explaining each parameter
@robadams24512 ай бұрын
Thanks Matteo, such a lot of useful information, it will take me a while to process!
@bentontramell2 ай бұрын
This guy knows how to do a Design of Experiments
@destinpeters98242 ай бұрын
Thank you Sir!!! Looking forward to the next video!!
@davidb80572 ай бұрын
Thank you Matteo. That's one of the most useful videos I've seen about Flux.
@PsychedelicCyberpunk2 ай бұрын
I appreciate all of the work that you put into this and everything else you do for this community. Thanks man!
@TomHermans2 ай бұрын
thanks for doing these comparisons. good job
@ac2812012 ай бұрын
Thank you for your research on this!
@mickmack80262 ай бұрын
Thank you very much for the hard detailed work!
@ArchambeauCАй бұрын
Thank you very much for your "avant-garde" work!!!
@wellshotproductions65412 ай бұрын
You are legend. Thank you for your hard work!
@electronicmusicartcollective2 ай бұрын
Thanks for sharing. We were about to walk this road too.
@zeyanliu27752 ай бұрын
Very good experiment! I will come back to this the next time I try Flux.
@sandnerdaniel2 ай бұрын
Great video and research, thanks! There are many unsolved mysteries around Flux. For instance, it struggles with some styles, although it can generate them perfectly under certain circumstances. Also, some quantizations perform better for some tasks... The Dev model also seems to understand prompts differently, and so on.
@latentvision2 ай бұрын
styles work relatively well with very short prompts; as soon as you add more tokens, Flux goes ballistic and steers towards the same few styles it likes
@sandnerdaniel2 ай бұрын
@@latentvision True, at some point the model gets completely stubborn (or 'rigid'), forcing a couple of very similar styles on you. You can somewhat circumvent this with img2img (so an IP adapter could help here, I guess); it's clear the model can produce variety. Using very short prompts is not always an option. I'm also using various models and quantizations for tests, and some even feel like completely different models and require a different approach. I will dig into this more.
@sirg17642 ай бұрын
@@sandnerdaniel I might be wrong, but I feel the position of the keyword matters; it seems strongest at the beginning of the prompt. I also tried repeating keywords or adding synonyms to get out of the stubbornness; not sure if it's doing much, but it seems to help.
@VintageForYou2 ай бұрын
You know FLUX inside-out fantastic work.👍💯
@hayateltelbanyАй бұрын
" I tested the hell out of Flux " xD that made me laugh so hard, thanks for the guide as always
@lenny_Videos2 ай бұрын
Thanks for the research and video. Very interesting 😊
@moviecartoonworld44592 ай бұрын
I am always grateful for the wonderful videos. There is one thing. 🥰
@wakegaryАй бұрын
thanks for takin on that bull!
@Neblina19852 ай бұрын
this laboratory was... amazing! Ty a lot!
@netandif2 ай бұрын
Thx a lot for this extensive review 😎
@Searge-DP2 ай бұрын
Great video, as always. I originally did the research on using Clip Attention Multiply that Sebastian presented in his video as a way to improve image quality and prompt following, and on average it had a measurable positive effect on the number of generated images that improved with this trick.

After watching this video I built a new matrix and compared the original images not only with images generated by "boosting" only Clip-L, but also with images generated with "boosted" T5 conditioning through your attention seeker node. Once again I saw a slight increase in the number of improved images when changing only the Clip-L attention (14%), but a higher number of images got worse when changing the T5 attention (30%).

So my conclusion is to use Clip Attention Multiply to change only the Clip-L attention and leave T5 untouched (with only the Clip-L "boost": 50% good / 50% bad; with the T5 "boost": 40% good / 60% bad). In both cases some images got worse, but when also changing T5 the chance of making things worse was twice as high.
@latentvision2 ай бұрын
T5 has a higher impact on the image, it's hard to quantify "better" or "worse". Depends on your subject and your desired result. The attention seeker lets you target single blocks, the multiplier goes ballistic on all. It's more complicated to use but more precise. Up to you, use whatever works for you.
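To make the "boosting" discussion concrete: an attention multiplier essentially scales the attention projection weights of each self-attention block in the text encoder. The sketch below is an illustration, not the actual node's code; the module and attribute names assume a Hugging Face CLIPTextModel-style layout:

```python
import torch

def scale_clip_attention(text_encoder: torch.nn.Module,
                         q: float = 1.1, k: float = 1.1,
                         v: float = 1.0, out: float = 1.0) -> None:
    # Multiply the attention projection weights of every self-attention
    # block. Scaling q and k roughly sharpens the attention distribution
    # (scores scale with the product of the two factors), while v and out
    # scale the block's output directly.
    for name, module in text_encoder.named_modules():
        if name.endswith("self_attn"):
            module.q_proj.weight.data *= q
            module.k_proj.weight.data *= k
            module.v_proj.weight.data *= v
            module.out_proj.weight.data *= out
```

A per-block variant, in the spirit of the attention seeker node, would filter `name` against a list of target block indices instead of matching every block; applying this to all blocks at once is the "goes ballistic on all" behavior described above.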
@jonmichaelgalindo2 ай бұрын
Feeding long sentences to Clip-L destroys Flux's performance. Run a test with the Flux dual prompt node, leave Clip-L empty, and compare the results. The quality boost is staggering. And then, forget that and go grab the awesome finetune of Clip-L from HF, and you'll get slightly better performance than with leaving Clip blank. Thanks for the data! ❤
@kenwinneАй бұрын
Your point is not very accurate. If you have used an LLM in ComfyUI to expand the prompt, you will find that the more prompt detail you provide, the closer the image gets to the details and composition you want. So long sentences don't destroy Flux's performance; high-quality long paragraphs let Flux perform better.
@joshuadelorimier1619Ай бұрын
best 6 dollars I've ever spent thank you.
@nicolasmariar2 ай бұрын
amazing video thank you a lot. It came just as I needed to test my latest flux lora
@afipdjh2 ай бұрын
Thank you for your great research!
@carlosmatias50792 ай бұрын
Great, as usual ! Thanks a lot.
@barsashForAI2 ай бұрын
I haven't seen your videos in a while. Great content, easy to understand as usual. I noticed your ComfyUI interface, so minimal, I like it, everything at the bottom. Is that a theme?
@latentvision2 ай бұрын
hey thanks! it's the new beta menu, you can activate it from the preferences
@barsashForAI2 ай бұрын
@@latentvision ohh, how did I miss that? Thank you
@BuckleyandAugustin2 ай бұрын
Thanks Matteo! Made my day
@weirdscix2 ай бұрын
Another Matteo video, I'm here for it
@u2bemark2 ай бұрын
Very nice. Thanks for sharing your research.. yet again.
@euchale2 ай бұрын
Thanks so much for this video!
@seifalislam36452 ай бұрын
Thanks for this advanced information. Waiting for the IPAdapter to celebrate!
@kaibuddensiek85802 ай бұрын
Thanks so much, you are a hero.
@2PeteShakur2 ай бұрын
can't wait for faceid2 flux - you are the best! ;)
@cabinator12 ай бұрын
Thank you sir!
@earthequalsmissingcurvesqu93592 ай бұрын
In the halls of eternity it is murmured that once the legend delivers IPAdapter for Flux, Humanity shall reach Enlightenment. :D Thank you for your work.
@joachimhagel46762 ай бұрын
thanks for the awesome job, that's amazing! Could you address Flux upscaling (latent and model) and possibly some other methods (tiled diffusion etc...) in your next video? It seems like Flux behaves differently compared to other models.
@latentvision2 ай бұрын
yeah I guess we need to talk about upscaling sooner or later :P
@B_Linton2 ай бұрын
Amazing job! How do you feel this affects your choice between models? It seems there is still a solid place for SDXL at the moment for flexibility and control of style, though at a loss in prompt adherence and with more AI artifacts. How would you describe the use cases for the top models currently? (Though I realize there is still so much to understand about Flux.)
@latentvision2 ай бұрын
that depends on the use-case. it's impossible to answer in a general way. For example if you do animations with animatediff you are still going to use SD1.5. Probably the main problem of flux at the moment is the hw requirements in a local environment. The entry point is pretty steep. SDXL at the moment has a wider ecosystem of course, so we will be using it for a while still
@B_Linton2 ай бұрын
Fair point. Most of my use with Flux has been limited to RunPod on an A40. A lot of my focus has been on exploring environmental, architectural, and interior design prototyping and concept inspiration. I’ve been trying to keep up with the Flux hype cycle, but poor ControlNets and currently available tools have slowed the discovery process. However, experimenting with the Flux model has been enjoyable, especially with its prompt adherence and seemingly fewer AI artifacts. Your IP Adapter videos have been immensely helpful in becoming comfortable with SDXL, which I find myself returning to for speed, comfort, and control. Thanks for all you do!
@ttul2 ай бұрын
Fantastic work, Matteo.
@MichaelReitterer2 ай бұрын
Always a pleasure. I have to ask, when generating the 6 images, are you using the 6000 or the 4090? I ask because I have a 3090, which crawls along in comparison!
@latentvision2 ай бұрын
the 6000 was used only with the huge generations (over 60 images at a time)
@CrunchyBagpipe2 ай бұрын
Thanks! I trained a lora for 8mm film which has a super high texture look that is very hard to steer flux towards. I found the only way to get accurate film grain for an 8mm look was with a max shift of 0 and base shift of 8. It seems that high base shifts are good when you want a more analog / degraded look.
@yanks694202 ай бұрын
insanely good video
@paulotarso44832 ай бұрын
mateo you are the greatest
@latentvisionАй бұрын
no, you are!
@AI.Absurdity2 ай бұрын
Great job as always!
@marcihuppi2 ай бұрын
thank you for your hard work ♥
@Cyberdjay2 ай бұрын
Great information. Is there an sdxl node for the sampler parameters?
@latentvision2 ай бұрын
yeah I'm thinking of doing that
@cyberfolk10882 ай бұрын
Quality content as always from Matteo; not everyone understands how good this video is.
@walterriboldi42552 ай бұрын
Thanks Matteo, great job. I learned a lot by following your videos; it was a bit complicated at first, but now I understand many things. Do you think it will be possible to improve style matching using the method explained with individual UNet blocks once an IPAdapter becomes available?
@latentvision2 ай бұрын
it's not a unet of course, but yes I believe it should be possible
@LeiluMultipass2 ай бұрын
It would be great if you could talk about flux style training. It's a subject we don't see much of, and never treated as well as on your channel.
@thyagofurtado2 ай бұрын
Argh! So many variables!!! I wish we could prompt an LLM with machine vision / a CLIP interrogator, so that we could describe what we want and it would run these tests over all the variables we leave open, self-calibrating. For example, if your must-haves say the girl must wear a green jumper with leather straps and rose-gold buckles: you put in your prompt, and the LLM layer between you and Comfy would run tests, check the outputs, and calibrate all these variables until it reached minimal deviation. Awesome video by the way, a very good empirical approach. Thank you so much for your work and for sharing these results!
@Noaixs2 ай бұрын
Thanks for that! Did you try the guidance limiter nodes? I don't know if it's a placebo, but I sometimes get really good results.
@АртемАнісімов-ч3цАй бұрын
Hello, thank you for your research, it was interesting. It would be nice to add tests at different sizes. From my observations, there are differences in composition at 1, 1.5, and 2 megapixels.
@latentvisionАй бұрын
true, but you know... it takes time to repeat everything at higher resolutions. I would hope what is valid at 1M is also valid at 2M
@schonsense2 ай бұрын
The Euler sampler actually adds noise back in after each step, and thus denoises more than 100%. If you have it set to a simple schedule with less and less noise added back in, that would explain the "tiering" convergence you were seeing in the first part of your video. Did you try a more deterministic sampler?
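A note on the sampler question: in the common k-diffusion implementations, the plain Euler step is deterministic, and it is the *ancestral* variant (e.g. euler_ancestral) that re-injects fresh noise after every step. A minimal sketch of the two update rules (with eta = 1):

```python
import torch

def euler_step(x, denoised, sigma, sigma_next):
    # Plain Euler: a deterministic step along the probability-flow ODE,
    # no noise re-injected.
    d = (x - denoised) / sigma
    return x + d * (sigma_next - sigma)

def euler_ancestral_step(x, denoised, sigma, sigma_next):
    # Ancestral variant: step down further than sigma_next, then add fresh
    # noise back up to it, so the trajectory never fully settles until
    # sigma_next reaches 0.
    sigma_up = min(sigma_next,
                   (sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2) ** 0.5)
    sigma_down = (sigma_next**2 - sigma_up**2) ** 0.5
    d = (x - denoised) / sigma
    x = x + d * (sigma_down - sigma)
    return x + torch.randn_like(x) * sigma_up
```

Comparing the two on the same schedule is an easy way to check whether re-injected noise is what produces the "tiering" effect.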
@dilshadsardardeen2 ай бұрын
Love your work, I have learned a lot. Could you please tell the spec of the machine you work on these workflows, please?
@latentvision2 ай бұрын
the workflows for this specific video were generated on a 4090 24gb (the simple ones) and on an rtx6000 ada 48gb (the complex ones). It took days to extract all that data.
@dilshadsardardeen2 ай бұрын
@@latentvision thanks!
@gnsdgabriel2 ай бұрын
I was missing you 🙏.
@quercus32902 ай бұрын
very practical
@latentvision2 ай бұрын
best compliment!
@banzai3162 ай бұрын
Thank you, Matteo 🔥 goodies & awesomeness!
@beatemero67182 ай бұрын
Matteo, you are a effing Mad man! 💚
@latentvision2 ай бұрын
I know... and what you see on screen is like 5% of all the images I generated for this research
@beatemero67182 ай бұрын
@@latentvision Thats Insane! Thanks for all the work and info, man! It helps a lot.
@freneticfilms72202 ай бұрын
Hey Matteo, great video. I can only imagine the work that goes into this. I have a question: I am experimenting with DEIS (sampling method) and KL Optimal (scheduler) in Forge, and it gives me stunning results compared to others. But then I slightly change the prompt and it suddenly looks terrible. I feel FLUX is very sensitive to certain keywords and changes output style really randomly compared to SDXL, for example. I wonder what it is that makes FLUX so unpredictable. People say FLUX likes long prompts, maybe... but at a cost that I find too high.
@stereotyp99912 ай бұрын
Thank you And also thanks to your GPU😂
@latentvision2 ай бұрын
poor little fella...
@GradeMADEАй бұрын
Hey @Matteo, can you upload this workflow?
@GSXNetwork2 ай бұрын
Great job! Are the images available for download?
@latentvision2 ай бұрын
give me a holler on my discord if you want to check the images
@GSXNetwork2 ай бұрын
@@latentvision is it matt3o?
@afterlif39272 ай бұрын
When will InstantID for the FLUX model be released?
@latentvision2 ай бұрын
it's quite expensive :) it might take some time but it will come
@afterlif39272 ай бұрын
@@latentvision Oh, that should be great!
@morozig2 ай бұрын
Hi Matteo! Thanks for the video, it was interesting! I have an unrelated question, though. I think many Stable Diffusion users want consistency for drawn characters, but it is really hard. Are you aware of any model or library creators who have tried to address this problem directly? Maybe you can explain why it is so hard to create an IP-Adapter variant that would make images based on one example of a drawn character, just like InstantID does for faces? Do you think it's even possible within Stable Diffusion?
@bgmspot72422 ай бұрын
Keep making more videos please🎉🎉
@martin__n-i8f2 ай бұрын
nice video!
@max49aАй бұрын
Incredible work! The Plot Sampler Parameters node is such a useful feature; however, I'm unable to get the prompt to appear in the output when "add_prompt" is set to true.
@latentvisionАй бұрын
remember to use the dedicated node for prompting, otherwise it won't work
@kaibuddensiek85802 ай бұрын
Thanks!
@andreapavone94232 ай бұрын
Your videos are fantastic, very well made and super useful, but I'm commenting just to tell you how cool the D&D sticker is.
@latentvision2 ай бұрын
I know! I want to print it now :)
@andreapavone94232 ай бұрын
@@latentvision You totally should!!!
@ApexFunplayer2 ай бұрын
I broke down what each T5 COULD do in a small article on Civitai, based on a bit of image-based outcome analysis, but it's genuinely hit or miss, so don't follow it too closely. Feel free to correct anything.