ControlNet Depth Explained - Full Tutorial // easy stable diffusion ai

13,671 views

CreatixAi

A day ago

Comments: 41
@markallangoldmoon8590 2 days ago
What a sweet, lovely-sounding voice and great pronunciation. Many YouTubers who, for example, cut every pause out of their speech with an audio editor until it sounds unnatural and annoying, so that after a few sentences one is already sick of it, could take this as an example. You have great speech melody and rhythm, so one can follow your words easily and listen attentively with pleasure. Thank you also for your great contribution.
@clementcardonnel3219 8 months ago
The quality of this tutorial is insane. Truly underrated given the number of views. Keep it up, we're learning so much!
@m0ose0909 9 months ago
Thank you for the detailed and slow-paced tutorial — wait, no dash — Thank you for the detailed, slow-paced tutorial: way more thorough and easy to follow than others I've seen.
@CreatixAi 8 months ago
Glad it was helpful!
@MariaMac-h8x 9 months ago
You make the best tutorials on Stable Diffusion! Thank you!
@CreatixAi 8 months ago
Wow, thanks!🙏🏻🥰
@楊明諺-i1q 4 months ago
Very detailed tutorial. Really helpful for beginners of Stable Diffusion. Great work! And please make more videos for us!
@alecubudulecu 11 months ago
Appreciate the in-depth guide on specific aspects of SD. Good job with this.
@CreatixAi 11 months ago
Thanks! 🥰
@the_trevoir 9 months ago
I thought your voice was AI for about 5 seconds. You should take it as a compliment: your voice is perfect, the tone is perfect, and you enunciate like a mofo. What you *don't* do, that generated voices almost always do, is screw up basic pronunciation. I loved this video; I got more out of it than the last 10 SD videos I watched put together. I've subscribed because I'm hoping you have a video which will help me with the bane of my AI existence: controlling colour.
@CreatixAi 8 months ago
Your comment made my day 🥰 Thanks so much! I am hoping to go through all of the ControlNet models at some point, I’m sure there’s something very useful for color too ;) What exactly do you have trouble with? The general color? Or color of certain parts? (If parts, have you tried fixing it with inpainting?)
@the_trevoir 8 months ago
@@CreatixAi So many things to learn! I have a handle on basic text-to-image, image-to-image, basic inpainting, ControlNet posing with pose, canny, depth, and IP-Adapter full face. I still need to learn ControlNet inpainting. I haven't tried inpainting for colour much (eyes a couple of times) because I always feel like there is some method which is evading me: a combination of prompting and syntax (breaks, punctuation etc.) which would give me the results I want. Maybe there just isn't yet. The big problem I have is colour bleed. Me: SD, please give me blue eyes. SD: Okay, but guess what, I'm going to also make the jacket, chair, curtains, coffee table, and the cat blue.
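For the colour-bleed problem described above, one commonly tried mitigation in the A1111 WebUI is the `BREAK` keyword (uppercase), which splits the prompt into separately encoded 75-token chunks so attributes are less likely to leak across them, often combined with attention weighting. A hedged example — the exact prompt is illustrative only, and results still vary a lot by model:

```text
portrait of a woman, (blue eyes:1.3)
BREAK brown leather jacket, green armchair, red curtains, ginger cat
```

If bleed persists, inpainting just the affected region with a short, colour-specific prompt is usually the more reliable fallback.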
@OptimalToast 11 months ago
Appreciate the guide, and the tip about playing with compositions. I hadn't got around to trying that, but I definitely should, going by the examples you displayed: some very interesting results. 😎
@CreatixAi 11 months ago
Ah, it’s my favourite! Hope you have fun with it ☺️
@chakhmanmohamed9436 A month ago
Please, is there any use case where a foreground image is blended with a background? Adding illumination and shadows. Thanks a lot.
@ricogfx4212 10 months ago
Can you make a video on upscaling our art to extremely high quality?
@LewGiDi 11 months ago
Thanks a lot for this tutorial. I only sometimes use ControlNet because I am not too familiar with it and am afraid to mess things up. This video tutorial was very useful. Do you have any plans to make a tutorial about upscaling?
@history3042 6 months ago
Quick question: if I have an existing photo with an existing segmentation, depth, and even normal map, is there a way to use them all as inputs (instead of estimations), and to use them all simultaneously in the calculation rather than separately (depth only, mask only, etc.)?
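Yes — in the A1111 WebUI this is a multi-unit setup: enable several ControlNet units, give each its own precomputed map, and set the preprocessor/module to "none" so no estimation runs. A hedged sketch of what the equivalent `/sdapi/v1/txt2img` API payload might look like (field names follow the sd-webui-controlnet extension API and can differ between versions; the model names and weights here are assumptions — match them to the models you actually have installed):

```json
{
  "prompt": "a cozy reading nook, soft window light",
  "steps": 25,
  "alwayson_scripts": {
    "controlnet": {
      "args": [
        {"enabled": true, "module": "none", "model": "control_v11f1p_sd15_depth",
         "weight": 0.8, "image": "<base64 of your depth map>"},
        {"enabled": true, "module": "none", "model": "control_v11p_sd15_seg",
         "weight": 0.6, "image": "<base64 of your segmentation map>"},
        {"enabled": true, "module": "none", "model": "control_v11p_sd15_normalbae",
         "weight": 0.5, "image": "<base64 of your normal map>"}
      ]
    }
  }
}
```

All enabled units are applied together during sampling, so the depth, segmentation, and normal constraints combine rather than run as separate generations.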
@coulterjb22 10 months ago
Nicely done! Your explanation was perfect. I'm working on depth maps to pull as much detail as possible for laser engraving, but I have yet to find something that doesn't require a lot of work in Photoshop. LeReS++ helps, as do increased sampling steps, but not really enough. Anything else I could try?
@joshualloyd6694 11 months ago
Could you do any workflow builds and talk about this in ComfyUI? Great video!
@TDMIdaho 11 months ago
Excellent explanation, thank you!
@keithtam8859 8 months ago
Sorry, maybe you answered this in the video... can I use a 3D package like Blender to create a depth map, and use that depth map to generate an image with ControlNet? Thanks.
@LostDarkCloud 5 months ago
Ok, so I was playing around with this. If you didn't know how, this is how I did it:
1. Check "Allow preview".
2. Click the "Edit P" option; the "P" is the Photopea icon.
3. Copy YOUR depth map image that you want to use and paste it into the Photopea editor. You may have to adjust the canvas size to fit it properly. The important thing is to make sure your layer name matches the name it is looking for; I couldn't get it to work otherwise. So your custom depth map layer in the list at the right needs the same name as the unit you are using, e.g. "unit-0" if you are using ControlNet Unit 0.
4. There's a button at the top of the Photopea UI that says "Send to ControlNet". Hit it.
5. It should load your depth map into ControlNet and use it.
If this worked for me, it should work for you.
@Nick-tl7ts 6 months ago
This is great! What if I have a generated depth map as a pass from 3D software and I don't need estimation? Should I just put it into ControlNet and that's it?
@LostDarkCloud 5 months ago
Ok, so I was playing around with this. If you didn't know how, this is how I did it:
1. Check "Allow preview".
2. Click the "Edit P" option; the "P" is the Photopea icon.
3. Copy YOUR depth map image that you want to use and paste it into the Photopea editor. You may have to adjust the canvas size to fit it properly. The important thing is to make sure your layer name matches the name it is looking for; I couldn't get it to work otherwise. So your custom depth map layer in the list at the right needs the same name as the unit you are using, e.g. "unit-0" if you are using ControlNet Unit 0.
4. There's a button at the top of the Photopea UI that says "Send to ControlNet". Hit it.
5. It should load your depth map into ControlNet and use it.
If this worked for me, it should work for you. Now, if you didn't need to know this and you know a better way, feel free to share.
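Outside the WebUI, the same idea — supplying your own rendered depth pass with the preprocessor set to "none" — mostly comes down to preparing the image correctly. A minimal sketch, assuming a grayscale depth pass exported from a 3D app (the function name, file path, and the need to invert are assumptions; ControlNet's depth models expect white = near, black = far, while many renderers output the opposite):

```python
# Sketch: prepare a depth pass rendered in a 3D app (e.g. a Blender Z pass)
# for use as a ControlNet input with the preprocessor/module set to "none".
import numpy as np
from PIL import Image

def prepare_depth_map(path, size=(512, 512), invert=True):
    """Load a grayscale depth pass, stretch it to full contrast, and resize."""
    depth = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    # Normalize to the full 0-255 range so ControlNet sees maximum contrast.
    depth -= depth.min()
    if depth.max() > 0:
        depth *= 255.0 / depth.max()
    if invert:
        # Flip if your renderer outputs white = far (common for Z passes).
        depth = 255.0 - depth
    img = Image.fromarray(depth.astype(np.uint8), mode="L")
    return img.resize(size, Image.LANCZOS)
```

The result can then be dropped into a ControlNet unit directly; since it is already a depth map, no estimation step is needed.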
@oseaniic 2 months ago
THANK YOU, SO MUCH!!!
@CGFUN829 4 months ago
Amazing stuff. Thanks a lot.
@VGHOST008 9 months ago
Weird. I do exactly as you showed, but ControlNet doesn't give me the depth result 1-to-1. It often fails to replicate the poses and the outputs aren't great. Am I missing something? When exactly should I set the preprocessor and model to 'none', and why aren't you doing that when generating images? Any help appreciated.
@VGHOST008 9 months ago
Update: OpenPose seems to do the trick. I have no idea why, but Depth produces extremely inaccurate and inconsistent results for me, while OpenPose almost always matches what is being referenced. Does Depth work well and accurately only with anime models?
@contrarian8870 11 months ago
Thanks for this. Is MiDaS the underlying depth algorithm, or something else?
@CreatixAi 11 months ago
MiDaS is one of the four depth preprocessors. It's selected by default; it creates a depth map with the least amount of detail, but works best for most cases. 😊
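For context on what the preprocessor actually hands to ControlNet: MiDaS predicts a float array of *relative inverse depth* (larger values = closer), which then gets normalized into the 8-bit grayscale map you see in the preview. A minimal sketch of that normalization step, using a synthetic array in place of a real MiDaS prediction (the function name is illustrative, not part of any library):

```python
# Sketch: convert a raw MiDaS-style prediction (float array, larger = closer)
# into the 8-bit grayscale depth map that ControlNet consumes.
import numpy as np
from PIL import Image

def midas_to_controlnet(pred: np.ndarray) -> Image.Image:
    d = pred.astype(np.float32)
    d -= d.min()
    if d.max() > 0:
        d /= d.max()
    # MiDaS already outputs "larger = closer", which matches ControlNet's
    # white-is-near convention, so no inversion is needed here.
    return Image.fromarray((d * 255.0).astype(np.uint8), mode="L")
```

This is also why MiDaS maps look smooth and low-detail compared to preprocessors like LeReS: the model itself predicts coarse relative depth, not fine surface structure.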
@odessaodesa5307 8 months ago
Thanks! Very useful !!! 🙏❤
@jccluaviz 9 months ago
Thank you for that great work. It's really amazing. Do you know if this can be replicated in Fooocus?
@chuckynorris616 11 months ago
How did you get the AI to do mouth clicking, popping noises?
@chuckynorris616 11 months ago
link the LORA please
@luciusblackheart 11 months ago
thank you!
@elvira-irinareisinger6370 11 months ago
Like anybody else in fandoms without money, I'm on Yodayo and wonder WHEN exactly to use Canny or Depth. Sometimes I get nice results with Canny and sometimes garbage. Only Depth (though not every preprocessor is equally good?) gives me more good results than trash.
@blender_wiki 11 months ago
Be careful which depth preprocessor you use: some implementations have a bad VRAM leak in CUDA, and you can get an impromptu system crash.
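A defensive habit that can soften (though not fix) such leaks is explicitly releasing cached GPU memory between generations. A minimal sketch — the function name is illustrative, and `torch` is imported lazily so the snippet degrades gracefully on CPU-only machines:

```python
# Sketch: best-effort cleanup between generations. This frees PyTorch's
# *cached* allocator blocks; it cannot reclaim memory a buggy preprocessor
# still holds references to.
import gc

def free_vram():
    gc.collect()  # drop unreachable Python objects first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached CUDA blocks to the driver
    except ImportError:
        pass  # no PyTorch installed; nothing GPU-side to release
```

If a preprocessor truly leaks, the only reliable remedies are switching preprocessors or restarting the process.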
@AlexeyRodin. 9 months ago
I like your English pronunciation.
@CreatixAi 8 months ago
That’s the first time I received a compliment on my English pronunciation. Thanks a bunch 🥰 It’s my second language, can you tell?)
@zahrajp2223 11 months ago
Please, what about Fooocus? Please try it 🥹