Get Better Images: Random Noise in Stable Diffusion

2,877 views

Andrea Baioni

Days ago

Are your Stable Diffusion generations not as great as MidJourney's? Discover how a tiny bit of random noise can make a big difference in image quality!
In this episode of Stable Diffusion for Professional Creatives, we'll show you how to improve your images using random noise, whether through ControlNet or latent manipulation.
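To give a feel for the kind of noise map involved: the sketch below generates a structured grayscale noise image with numpy. This is a hedged illustration, not the workflow's actual nodes, and it produces blocky value noise rather than the smoothly interpolated Perlin noise used in the video.

```python
import numpy as np

def value_noise(h, w, cells=8, seed=0):
    """Blocky value noise: a low-res random grid upscaled to h x w.
    A rough stand-in for the Perlin noise driving the ControlNet."""
    rng = np.random.default_rng(seed)
    grid = rng.random((cells, cells))
    # nearest-neighbour upscale; true Perlin noise would interpolate smoothly
    img = np.kron(grid, np.ones((h // cells, w // cells)))
    return (img * 255).astype(np.uint8)

noise_img = value_noise(512, 512)  # grayscale map that could feed a ControlNet input
```

An image like this can be saved out and loaded wherever the workflow expects a conditioning image.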
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Workflow: openart.ai/workflows/rGKT2dp6...
Install the missing nodes via Manager by importing the workflow.
I'm using epicRealism as a checkpoint, but you can use any other 1.5 or SDXL model.
epicRealism: civitai.com/models/25694/epic...
Timestamps:
00:00 - Intro
00:59 - Visualizing Noise
01:57 - Noise inside of comfyUI
02:56 - Thinking about Rules and Sandboxes
03:48 - Noise Driven ControlNet: Perlin Noise
04:32 - Noise Driven ControlNet: Perlin and Gradient
05:06 - Noise Driven ControlNet: Perlin, Gradient and Sketch
05:45 - Noise Driven Latents
06:16 - Noise Driven ControlNet and Latents
06:35 - Workflow design Philosophy: Rules
08:36 - Workflow Breakdown
13:10 - Final Considerations
14:26 - Outro
#stablediffusion #stablediffusiontutorial #randomnoise #noise #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni

Comments: 31
@OriBengal 11 days ago
I hope people really listen... really *grasp* ... what you're saying about discovery through experimentation... to figure out what these things do... and why this is important for real studio work. Too many people just "cut/paste" (grab a workflow, put in a new prompt). Great job!
@risunobushi_ai 10 days ago
Thank you for the super kind words!
@pixelcounter506 12 days ago
Always very interesting to listen to your findings, Andrea! I like your idea of taming noise injection and giving it some structure. One could even extend your concept to incorporate colors, too. From my point of view it is always more interesting to play around with (even copied) ideas, workflows, and new nodes than to go only with mainstream concepts. Keep up the professional work! 🙂
@risunobushi_ai 11 days ago
Thank you! Yes, depth doesn't care too much about colors, that's why I used the latent noise injection, but you can experiment with colors even more!
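For reference, latent noise injection boils down to blending extra noise into the latent tensor before (or partway through) sampling. A minimal numpy sketch, assuming an SD 1.5-style 4-channel latent and a made-up `strength` blend parameter:

```python
import numpy as np

rng = np.random.default_rng(42)
latent = rng.standard_normal((1, 4, 64, 64))  # SD 1.5-style latent for a 512x512 image
noise = rng.standard_normal(latent.shape)

strength = 0.3  # hypothetical blend strength; higher means more variation
noisy_latent = latent + strength * noise      # inject noise without changing the shape
```

Because the injection happens in latent space, color structure can survive in a way that a depth ControlNet alone would ignore.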
@Mranshumansinghr 13 days ago
I never miss any of your videos. Best ComfyUI knowledge on YouTube.
@risunobushi_ai 13 days ago
Thank you!
@dtamez6148 13 days ago
Your closing statement is not only brilliant, but spot on! "Stable Diffusion for professionals" indeed! 👏
@maxehrlich 13 days ago
Maybe your best video yet! While not as technical as the relighting one, the philosophical aspect of why we are doing what we are doing is even more important than technique for making compelling images.
@risunobushi_ai 13 days ago
Thanks! I've wanted to make a video like this one for a while, because I think workflow design is the same as any design work. Having a philosophy and a course of action behind what you do is one of the most important things, imo.
@Al-KT 13 days ago
For visualizing noise in Blender (1:32), use the Color output of the noise texture instead of Fac. That way the offset on each axis will be slightly different. Right now the offset is the same for all axes, in the direction of the vector (1, 1, 1).
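The difference the commenter describes can be shown with a tiny numpy sketch (illustrative only, not Blender code): a scalar Fac value moves every axis by the same amount, i.e. along (1, 1, 1), while the vector Color output offsets each axis independently.

```python
import numpy as np

rng = np.random.default_rng(0)
points = np.zeros((4, 3))         # four points at the origin

fac = rng.random((4, 1))          # scalar noise: one value per point (like Fac)
color = rng.random((4, 3))        # vector noise: one value per axis (like Color)

scalar_displaced = points + fac   # x == y == z, so motion is along (1, 1, 1)
vector_displaced = points + color # each axis gets its own independent offset
```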
@risunobushi_ai 13 days ago
It's been a hot minute since I worked with geo nodes in Blender. I plugged it in, debated looking up a guide as soon as I saw it wasn't displacing along the normals, and said "eh, it's just to visualize stuff, that's fine". But yeah, absolutely, affecting the offset along each face's axis would be the correct way of doing it!
@nhatthibui6491 12 days ago
I have never missed any of your videos because what you do is very practical and highly applicable to my work.
@risunobushi_ai 12 days ago
Thank you!
@AbsolutelyForward 13 days ago
A very good example of how you can design "differently" with generative AI ... and must, if you really want to utilise its full potential - bravo :) Your tutorial reminds me of an experiment in which I used the Lumetri scopes (luma waveform) from Adobe Premiere as an image prompt. It would certainly be interesting to capture the "moving" live luma waveform via a screen-capture node and link it to AnimateDiff or a real-time generation workflow.
@risunobushi_ai 13 days ago
Exactly! The most interesting stuff we find when using new tech is very rarely found while playing it safe
@melodyhour 2 days ago
love your videos!
@risunobushi_ai 2 days ago
Thank you!
@hakandurgut 13 days ago
Great tutorial... just like the way noise is used in TouchDesigner.
@risunobushi_ai 13 days ago
random noise is truly the gift that keeps on giving across all software
@AB-wf8ek 12 days ago
Coming from 3D animation, noise is also a common tool
@risunobushi_ai 13 days ago
Why no SD3 video? Well, because it's not interesting to me production-wise, or even as a base for experimenting with production-related stuff. Apart from anything that can be said - and has been said - about SD3, I think it's too early both to take it into consideration for production-related tasks and to jump to conclusions about how good or bad a model it ends up being. I'll probably talk about it when - and if - we get a complete set of controlnets, finetunes, and accessory modules like IPAdapter or IC-Light. In the meantime there's so much left to explore with 1.5 and XL, and there are so many great channels and videos that'll cover SD3, that I don't think my voice on the matter will be missed.
@fernandopain4824 13 days ago
That's why I follow your channel. Thanks!
@dtamez6148 13 days ago
I agree. And, apparently, there are some questionable issues with its license agreement as well 😒
@risunobushi_ai 13 days ago
The funny thing is I have a law degree (albeit from an Italian uni), so even though I'm in no position to give counsel on it, I'd be able to break down the license agreement. But either way, SAI should just release a simple statement disclosing to laypeople what they expect out of finetunes. Finetuners, coders, and community members in general are not corporations; they shouldn't need a legal team to understand what they can and can't do.
@risunobushi_ai 13 days ago
@fernandopain4824 thank you, I really appreciate the sentiment
@stephenmurphy8349 13 days ago
Nice approach!
@BrunoMartinho 11 days ago
Sadly I'm getting an error: Error occurred when executing ColorPreprocessor: No module named 'controlnet_aux.color'
@risunobushi_ai 11 days ago
It might be because you're missing the auxiliary preprocessors. You can find them here: github.com/Fannovel16/comfyui_controlnet_aux
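If the Manager route fails, a manual install usually looks like the following. This is a sketch assuming a default ComfyUI folder layout; adjust the paths to your own install.

```shell
# Typical manual install of the auxiliary ControlNet preprocessors
cd ComfyUI/custom_nodes
git clone https://github.com/Fannovel16/comfyui_controlnet_aux
cd comfyui_controlnet_aux
pip install -r requirements.txt
# restart ComfyUI afterwards so the new nodes (e.g. ColorPreprocessor) register
```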
@ronilevarez901 13 days ago
Definitely not using ComfyUI any time soon. Not for me at all. Why waste so much effort making something with all those intricate and confusing entangled lines when I can get an almost identical result in seconds using Automatic1111? Yes, ComfyUI gives a lot of control, apparently, but that control is not necessary to achieve great results with good prompting and other techniques. All the super innovative methods developed for ComfyUI that I've seen are easy to imitate with other tools, even on the command line, so I'll pass. I wish people could see it too, so they would focus on improving other tools instead of ComfyUI. (Although the idea of using custom noise to influence the generation is great.)
@risunobushi_ai 13 days ago
Well, that's easily said: I personally like node-based interfaces much more than standard web UIs and CLIs. I spent a lot of time learning Houdini and Blender Geometry Nodes, so it comes naturally to me, as it does to many others. I'm all for having different interfaces for different users, so I prefer having the option of choosing which one to use depending on the task and the kind of use I want to make of it. Also, with ComfyUI I can spend time building the "perfect" environment to automate generations, which is something I could do to a degree with other UIs, but it's much easier in ComfyUI for me. It all comes down to personal preference, I think!
@AB-wf8ek 12 days ago
It's one thing to not use ComfyUI because it's not useful to you, but to claim it's a waste of time for everyone else is ridiculous. I use it for animation, and there are a million things it's better at than Auto1111. For example: access to experimental nodes, performing transformations on image maps, muting processes with boolean switches, generating proper looping frame interpolation, propagating single images to multiple inputs at the same time, doing multi-step upscaling, AnimateDiff outpainting and upscaling, quickly swapping inputs for multiple inputs, isolating parameters to a single location, organizing complex workflows, etc.