How to use ControlNet in your AI Art - Stable Diffusion Tutorial 2023

210,499 views

Albert Bozesan

1 day ago

Comments: 458
@BritBox777 · 1 year ago
For those who don't know: If you're inputting an image into ControlNet you don't have to bother setting any of the canvas info. Canvas is ONLY for doodling inside the ControlNet window in the GUI.
@albertbozesan · 1 year ago
Correct! My mistake.
@Cheesus_Chrisp · 1 year ago
Hey this was my first video I’ve seen from you. I just wanted to let you know I liked the pacing and your delivery of the subject matter. You’re doing great!
@albertbozesan · 1 year ago
Thank you so much! 😄 I tried hard to get it right, glad that comes across.
@FaizalKuntz · 1 year ago
AI art with a human touch or filter is definitely an art in itself.
@NordwulfLive · 1 year ago
This is the first video of yours I have seen. While I have been gobbling up every Stable Diffusion video I can find, yours was one of the most detailed and complete walk-throughs of a start-to-finish project. Subbed and will be watching more. Great job, keep them coming!
@albertbozesan · 1 year ago
Awesome, thank you!
@brian10508 · 1 year ago
Yes, this is what I think artists could use AI for. Not just single-button text-to-image, but a lot of editing and composition, light testing to get better results with their artistic eye. This is why I don't think artists will be replaced by AI.
@huymaivan8671 · 1 year ago
The sad thing is that 90% of AI users now are "AI prompters" who call themselves "AI artists", those people with single-button text-to-image. And they keep abusing normal artists because they think their one-button action makes them a supreme kind of artist compared to normal artists.
@gorkskoal9315 · 1 year ago
And everyone needs to remember that it has a ton of limits, and frankly there are things that GIMP, painting, and hand sketching can do better and more easily.
@jessedart9103 · 1 year ago
Nice tutorial! One tip -- you don't need to change the "canvas" WxH settings in the ControlNet section. Those are only if you want to create a drawing canvas. The UI layout doesn't make that obvious, but they're not needed for open pose or any of the other processors. Cheers
@albertbozesan · 1 year ago
Thank you! That really isn't obvious 😅
@pouyagorji3022 · 1 year ago
Probably the most impressive AI generation tutorial I've seen on KZbin. I want to see more people using AI creatively as part of their artistic workflow than simple text to image generation. Thanks!
@albertbozesan · 1 year ago
Thank you!
@Hey-Its-Retro · 1 year ago
Thank you for another fabulous video! It's great that you don't just show how to install the things (like most of the 'others'), but actually seeing you use them and your workflow is pretty much unique in the KZbin A.I. 'scene'. Not many do this, and it's the bit I really enjoy - seeing the potential and possibilities demonstrated by yourself, right there in front of us... So thanks for sharing these valuable lessons with us! It's very much appreciated!
@albertbozesan · 1 year ago
Thank you so much for the kind words! I'm glad you enjoy the vids 😄
@qcgeneral29 · 1 year ago
This is a really well made video. You did a great job explaining the whole process while editing out some of the extra fluff here and there.
@albertbozesan · 1 year ago
Thank you! There was a *lot* of fluff to cut out 😉
@shockwave952 · 1 year ago
This is actually amazing! I've been putting off playing with ControlNet as it seemed really daunting, but this has definitely motivated me to give it a try. I really love the way you present and break down your workflow. It gives me a lot of inspiration and ideas for my own projects! It's crazy how far things in this space have come in less than a year. It really doesn't seem like that long ago when blurry blobs that sort of resembled your prompt were as good as it got. I can't even begin to think what things will be like this time next year.
@albertbozesan · 1 year ago
Thank you! I’m glad it’s motivating 😄 it honestly took me a little while to try, too. But it’s not as scary as it seems!
@jasonsmith8426 · 1 year ago
This was so informative and just flat-out amazing!!!!! Thank you sooooo much
@thetruthserum2816 · 1 year ago
Hmm, now I'm wanting to connect MIDI controllers and control surfaces so that I can mix the inputs on the fly. I see a MIDI control plugin for Stable Diffusion being a thing...
@DJVARAO · 1 year ago
I like your video. You can wrap up the mood of the image by prompting cinematic terms, by selecting a given tonality for the image, or by calling in a particular style via textual inversion. Great work!
@ziongreen1725 · 8 months ago
Be sure that your Automatic1111 is up to date. It wasn't working on my machine until I updated it. Took way too long to figure out.
@safwansya72 · 1 year ago
I hate how I just found out you can change the weight easily in the prompt... I've been manually typing the value and the syntax... omg
@eucharistenjoyer · 1 year ago
Come on man, that's too much work for most prompters out there. Jokes (?) aside, great video. There are lots of ways to integrate well-known illustration techniques into AI creation, and most artists are missing out a lot by leaving this all in the hands of (mostly) crypto bros.
@albertbozesan · 1 year ago
Haha you got me in the first half! The comments I get sometimes, man… Totally. It’s almost fully controllable now, artists need to get on this. Many professionals already have, I wish they were more public about it.
@FearfulEntertainment · 1 year ago
Took a group photo and generated it over each other a few dozen times to make an epic zombie invasion.. GG thanks bro
@BananaHammyForYou · 1 year ago
Thank god for this video, I thought I could just wing it...
@lamstar70 · 1 year ago
Many thanks for the demonstration, from Hong Kong!
@sholaide · 3 months ago
Just seeing this now. It's been a YEAR! I wonder how SD tech has changed. Is any of this information still valid/useful?
@albertbozesan · 2 months ago
This one is still decent! But check out my newer vid where I also go into ControlNet: kzbin.info/www/bejne/pHPTdWCIl8yfhtk
@Noc-z2b · 1 year ago
Hello! I love the video. Definitely got me in the mood to try AI again. But I have a question. When I go to the ControlNet models to get canny and openpose, I see two different files each: canny.pth and canny.yaml, openpose.pth and openpose.yaml. Just wondering which one I should pick.
@albertbozesan · 1 year ago
The .pth is the actual model; the .yaml is just the config file. Place both in your models folder. The yaml should already come preinstalled, though.
@2Bakich · 1 year ago
A very helpful guide, thanks!
@thefriendlyaspie7984 · 1 year ago
You should have tried HED instead of canny.
@Doughy_in_the_Middle · 1 year ago
So, I stepped away from AI art to work on other projects about a week and a half before ControlNet came out, so I'm just catching up. One thing I noticed is that every demo I watch shows the preview of the edge detection or pose detection (as per the ControlNet model) as the output. Mine does NOT do that, whether I generate batches or individual images. Is there a setting that's turned off?
@albertbozesan · 1 year ago
It shows it by default for me, unless the “Enabled” checkbox in ControlNet isn’t checked. Are you sure it’s on and functioning?
@Snaaps · 1 year ago
Check out the addons for Photoshop, it's dope.
@rickb.1751 · 11 months ago
I've been trying to understand how people get so much detail into their images without them having that AI-generated look. This video inadvertently showed me a piece of that puzzle. Good stuff. Subbed, liked, and saved to my SD folder. I'll be coming back to this video, as there is a lot to unpack.
@albertbozesan · 10 months ago
Thank you!
@Necksteppa77 · 1 year ago
Dude, I've been trying to find videos of people clearly explaining the process. This is great, I learned A LOT from this video.
@hajain5990 · 11 months ago
I am facing this problem after clicking the generate button: "AttributeError: 'NoneType' object has no attribute 'mode'". I do not understand how to solve this problem.
@albertbozesan · 11 months ago
Please search for that issue on Reddit, every problem has already been solved by someone there 😄
@hajain5990 · 11 months ago
Thanks @albertbozesan
@kaimaiiti · 1 year ago
This entire process seems akin to an old master with a studio of painters telling them to paint something and then saying "no, not like that! Like this!", scribbling over their work, and saying "do it again!"
@albertbozesan · 1 year ago
Haha yeah except the master can’t paint (yet) 😅 he’ll “know it when he sees it”
@PuckDudesHockey · 1 year ago
Excellent video, very helpful. Thank you!
@albertbozesan · 1 year ago
Glad I could help! Thanks 😄
@pladselsker8340 · 1 year ago
The only part of this workflow that I personally dislike is having to rely on external images that you search for. What if you can't find THE image that has the perfect contours? Yes, you can combine multiple, but you still have to find many images with the parts that you want. I personally prefer to craft the lineart on my own, but in my experience it usually gives better results if you use existing images like you showed. My hope is that eventually, over time, crafting stuff for ControlNet by hand will become faster and more accurate to the original idea I had in my head than relying on external sources. Great overview of how to use ControlNet overall! I'm sure a lot of people will find this helpful.
@pladselsker8340 · 1 year ago
By the way, ControlNet is for controlling the composition of the image, but we don't have anything as efficient as ControlNet that lets you control the details yet. Of course, you could use LoRAs and such, but this becomes unviable if you need a ton of them. You can't train a LoRA every time you want consistent details, it just takes too long, let alone the fact that you might not even have enough training examples if you have original characters that you want to render consistently (like me). There's a paper about a new method that came out last week, called "ELITE". ELITE acts almost exactly like textual inversion, but takes 0.05 SECONDS to learn an image (according to the paper). Their code is still not released on their GitHub, but I think this is the last key to consistent and fast control over image generation. When they release it, it's gonna go as wild as when ControlNet got released, if not wilder. Here's the paper if you're interested: arxiv.org/pdf/2302.13848.pdf
@albertbozesan · 1 year ago
Thank you! I’ll take a look at that.
@talessin · 1 year ago
Fantastic job! I learned a lot about ControlNet settings! Thx
@albertbozesan · 1 year ago
Glad it helped! Thanks.
@davewaldmancreative · 1 year ago
Super. thanks so much!
@ДенисКО-о8ш · 1 year ago
Thank you for a great lesson! But the canvas and its size are needed for direct drawing, not for generation, or am I wrong?
@albertbozesan · 1 year ago
That’s correct! My mistake.
@MaBucket · 1 year ago
Fantastic video, thanks :)
@albertbozesan · 1 year ago
Glad you liked it!
@zimnelredoran9985 · 1 year ago
Thank you for showing the whole process, nicely explained! Loved the final output and how you came to get it :))
@albertbozesan · 1 year ago
Thank you! Glad you enjoyed it :))
@Firespark81 · 1 year ago
Thanks for the guide!
@albertbozesan · 1 year ago
You bet! Thanks for watching 😄
@judgeworks3687 · 1 year ago
A great walk-through of the process. Thanks for being so clear. Makes it easy for a beginner (like me!) to follow along. Have you made any animation from images? I'd be curious to know what your process is for making animation from the image.
@albertbozesan · 1 year ago
Thanks! I haven’t tried frame by frame animation yet, it looks pretty shaky without a ton of work (like what the channel Corridor Digital did last week!).
@TheSkyliz · 1 year ago
Great video. Real quick, how do you instantly put double brackets around a word and start increasing the weight? My workflow is slow and I manually insert one bracket, then the other, then a colon, and then the weighting I want. You do it incredibly fast, how?
@albertbozesan · 1 year ago
Select the word, then Ctrl + up/down arrow keys :)
@TheSkyliz · 1 year ago
@albertbozesan Thank you so much :) These small improvements are invaluable when it comes to making the workflow more seamless and fun.
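Editor's note: the bracket-and-colon weighting discussed in this thread is Automatic1111's attention syntax, where `(word:1.3)` boosts the attention on a token and values below 1 reduce it. A minimal sketch of a helper that builds this syntax (the `weight` function name is illustrative, not part of any real tool):

```python
def weight(token: str, w: float) -> str:
    """Wrap a token in Automatic1111's attention syntax, e.g. (castle:1.2).

    In the WebUI, selecting a word and pressing Ctrl + Up/Down inserts
    exactly this form and nudges the number up or down.
    """
    return f"({token}:{w})"

# Build a prompt with one emphasized token.
prompt = f"a {weight('cinematic', 1.3)} photo of a robot"
print(prompt)  # a (cinematic:1.3) photo of a robot
```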
@SatishRahi · 1 year ago
Albert, things seem to have changed for installing ControlNet. I get a runtime error when using any model except canny (which works even without selecting a model on the right). Where are the errors captured, so I can take a deep dive into the ERROR? Do you keep your installation current? I believe you may run into the same issue if you were to do a fresh install, like I am having with ControlNet.
@albertbozesan · 1 year ago
Hmm. There has been an update to ControlNet 1.1, that’s true. It still shouldn’t work with no model selected, that’s strange. I’ll check it out.
@SatishRahi · 1 year ago
@albertbozesan Thanks. ControlNet works for me on Colab, not on my desktop where I just installed everything following your instructions. Also, could you comment on the difference between the new "control_v11p_sd15_openpose.pth" and the older model "control_openpose_fp16"? This is probably what is causing the issue.
@albertbozesan · 1 year ago
@SatishRahi The new one is just much better. Make sure you have the updated auto1111.
@overpope3510 · 1 year ago
The worst part of this decade might be that script kiddies and techno bros call themselves artists
@albertbozesan · 1 year ago
I’m literally a bestselling writer, AI-free.
@pixelpuppy · 1 year ago
this video was super informative, and very easy to understand! I learned a lot! Especially how I can integrate AI as a tool for my own art, rather than just generating images. Liked and subscribed! Thank you!!
@albertbozesan · 1 year ago
Glad it was helpful! Thank you :)
@tambayannihulyo6863 · 1 year ago
Nice information ❤
@concretec0w · 1 year ago
Nice work - great to see someone using this stuff like an artist would instead of a nerd!!! (I'm a nerd btw lol)
@mmagrin · 1 year ago
Man, I'm always learning new things from your videos. Thanks a lot! :)
@wurzelbert84wucher5 · 1 year ago
Slowly it's going in the right direction. As a digital painter, I don't want pure text-to-image creation, I want proper control!
@footage3914 · 1 year ago
Affinity user! Yes!
@vincentng7322 · 1 year ago
Thank you for sharing.
@madlookzvfx · 1 year ago
Mind Blown!
@Hazzel31337 · 1 year ago
Really good, advanced tutorial. Everything is clean, short, and clearly explained! Very nice.
@sandeepm809 · 1 year ago
How do you increase the weight of a prompt?
@albertbozesan · 1 year ago
In the WebUI, select your word, hold control and use your up/down arrow keys to increase or decrease the weight :)
@WOMrecords · 1 year ago
Thank you so much! Great tutorial!
@albertbozesan · 1 year ago
Thank you!!
@ttvadunkey · 1 year ago
thank you for taking the time to make this amazing tutorial. I hope you and everyone reading has a wonderful day!
@albertbozesan · 1 year ago
Thank you so much! I’m glad you got value from it :) have a great day, too!
@VincentWuAnimation · 1 year ago
Great, we need more Stable Diffusion videos.
@albertbozesan · 1 year ago
More to come! The next one releases on Sunday.
@pjgalbraith · 1 year ago
Great video as always! ControlNet is awesome and so much fun.
@albertbozesan · 1 year ago
Thank you! Indeed, it’s really the next level of this whole thing.
@aegisgfx · 1 year ago
@albertbozesan What we need now is an add-on that gives us coherence from frame to frame. It's pretty much impossible to tell a comic-book type of story if the costume and the face change even slightly between panels.
@arnoldsoko · 1 year ago
🙌🏿🙌🏿
@orirune3079 · 1 year ago
This is so cool. I now need to learn how to use this photobash thing! Also I can't say how much I appreciate that this image is actually a really cool and interesting one, and NOT another anime girl.
@albertbozesan · 1 year ago
Haha thanks. I never understand why anime girls are the benchmark when these models are literally trained to be good at them. Show me something difficult to make!
@orirune3079 · 1 year ago
@albertbozesan Also they're just so boring! You can only see so many anime catgirls before it's like, okay, I get it, it makes catgirls! Give me something cool! Like a tentacle monster coming through a portal and killing scientists!!
@longhoang307 · 1 year ago
This is awesome!
@erenuzuntas8167 · 1 year ago
🤩
@yiluwididreaming6732 · 1 year ago
Did not know this... well explained, good example with 'photo bashing'. Now I get the terminology. Subbed 🥰
@albertbozesan · 1 year ago
Awesome! Thank you!
@JohnVanderbeck · 1 year ago
Can I ask what is used here to surface the VAE option right to the top of the generation tabs rather than being buried in settings?
@albertbozesan · 1 year ago
I’ve gotten this Q a few times, and I honestly can’t explain it properly :/ I thought I’d just done that via the settings. It’s not an extension, anyways.
@JohnVanderbeck · 1 year ago
@albertbozesan So after digging through the code I found it. It is indeed a setting, but a rather hidden "power" setting. In Settings → User Interface there is a textbox named "Quicksettings List", which by default has the value "sd_model_checkpoint". Change it to "sd_model_checkpoint,sd_vae". Apparently this lets you add any settings module into a quick bar; you just need to know the name of the settings module in code.
@destiny77021 · 1 year ago
@JohnVanderbeck Many thx!!
@dougmaisner · 1 year ago
great stuff!
@strangelaw6384 · 11 months ago
The blue-suit man is too blue and front-lit, which makes me think that had you added a brightness layer or a color gradient in Photoshop and done a low-denoise (like 0.3) inpainting run, the result would look so much better (like you said, more time = better quality). Before ControlNet, img2img was absolutely horrible at fixing/reinventing accurate lighting and colors, so you had to add them manually, which can be very difficult to do accurately for those without a background in visual arts. You can still do this badly/roughly and get a good result as long as you use a high denoise, but that leads to lower detail and semantic aberrations. Now, though, ControlNet pins down the compositional details quite effectively, which lets you use high denoise plus extra noise on an image that is only roughly processed for an accurate lighting condition, without all the issues that previously came with img2img.
@chardy7071 · 1 year ago
I like this kind of video on how to control the look. Now it's becoming a tool that artists can use.
@ProdByGhost · 1 year ago
Yeah, ControlNet is next level. Good stuff.
@ohheyvoid · 1 year ago
AWESOME! Very helpful and fun. Thanks for sharing all these videos Albert. :D
@albertbozesan · 1 year ago
Glad you enjoyed it! More coming soon :)
@michaelmauder · 1 year ago
Albert's back and with a vengeance!
@DecentralisedGames · 1 year ago
You're a unit dude, be well, great content.
@albertbozesan · 1 year ago
Thank you!
@online-tabletop · 1 year ago
Excellent video!
@gabrieleingrassia2703 · 1 year ago
Bravo!
@willhart2188 · 1 year ago
Thank you. This is helpful.
@albertbozesan · 1 year ago
Glad it was helpful! You’re very welcome.
@Putt-Putt · 1 year ago
Whoa this is cool. How do I get started with Stable diffusion?
@albertbozesan · 1 year ago
First step would be installing the auto1111 WebUI :) guide is in the vid description. Most of my earlier videos are more beginner friendly, I will try to make a newer version of those soon, though. A lot of things have changed for the better lately.
@Putt-Putt · 1 year ago
@albertbozesan Alright man. Looking forward to it and starting my journey into AI art 😌
@RenewedRS · 1 year ago
I needed this. I've been struggling with all the different CN models.
@albertbozesan · 1 year ago
I haven't even begun to figure out the best use for each one, but I thought I might as well make a guide to at least two :)
@darrellbroady3850 · 1 year ago
Every time I try to install ControlNet, it installs ControlNet M2M, which appears to be something different. I am not getting the options in the video. Anybody know what I'm doing wrong?
@cameriqueTV · 1 year ago
Very interesting! As a point of reference as I shop for a GPU/CPU upgrade, what is your rig? Seems fast enough. Thanks.
@albertbozesan · 1 year ago
RTX 2070 Super 👍 it’s decent. But be aware that I timelapse through the image generation! It’s about 10x slower than it looks here.
@BrianLife · 3 months ago
Very cool. Thanks
@albertbozesan · 2 months ago
Glad you liked it!
@itanrandel4552 · 1 year ago
How do I activate the canvas boxes?
@albertbozesan · 11 months ago
I don’t think the UI offers cropping anymore, because direct inpainting has gotten so much better.
@Moeshi · 6 months ago
Thanks for the explanation!
@albertbozesan · 5 months ago
You're welcome!
@ibrahimhakkiuslu · 1 year ago
Albert, this is so cool :) Thank you for this beautiful tutorial. I am sharing it with my students and coworkers. Thank you :)
@albertbozesan · 1 year ago
Awesome! Thank you!
@pretzelsaladito · 1 year ago
Thanks for your detailed explanation!
@AuraeRecords · 10 months ago
Mine doesn't work... when I check the preview button, there is no image. It just says "drag and drop an image".
@CynicalWilson · 1 year ago
OMG, thanks so much! This made a bunch of lights turn on for me!! Loved the video.
@albertbozesan · 1 year ago
Glad it helped! Thank you!
@Tferdz · 1 year ago
Canvas width and height is only useful if you want to scribble inside the canvas. Otherwise, you can just ignore it.
@albertbozesan · 1 year ago
Correct! My mistake, thank you.
@v-for-victory · 7 months ago
Best video concerning ControlNet. Very helpful. Thanks a lot.
@albertbozesan · 7 months ago
Thank you!
@a.d.r.5854 · 1 year ago
Very nice tutorial, very well explained and awesome results! Thanks man
@albertbozesan · 1 year ago
Thank you!
@chrisdixonstudios · 1 year ago
Lots of great technique, well narrated to guide the viewer through the why. Now I just have to stop and follow along slowly, and a week later I should be able to do what you did in an hour ☺. Thanks for sharing your workflow.
@albertbozesan · 1 year ago
Thank you! Enjoy 😄
@lilowhitney8614 · 1 year ago
Nice. I've been waiting for this.
@parkercoleman8078 · 1 year ago
Hey! Saw you liked my comment. While I've got your attention, do you know any way to utilize multiple GPUs? I've got 8, so it'd be nice lmao
@albertbozesan · 1 year ago
Sorry, I don’t know. But I always recommend Reddit, the hivemind is smarter than me.
@coda514 · 1 year ago
Awesome workflow. Very informative.
@saimon1987 · 1 year ago
AI doesn't do art. Do not call it that.
@albertbozesan · 1 year ago
I can’t believe “gatekeeping art” has become a real thing in 2023.
@chariots8x230 · 1 year ago
It’s pretty cool how you used photobashing to shape the image. Now if you can do the same thing, but somehow input your custom characters in there, that would be great. I want to create scenes like this, but it’s important to me that I can use my own original characters. So, I wish there was a way I could tag the models that I’m using for the poses with the names of my custom characters, and the AI would insert my custom characters in there, instead of just some random characters.
@albertbozesan · 1 year ago
Yeah! I think a way to do this would be to train a model on your custom character, create your scene with a normal model and a generic description of your character, and then inpaint the details with your custom model?
@chariots8x230 · 1 year ago
@albertbozesan It seems complicated, since I haven't seen any tutorials so far on how to create a whole scene with custom characters in it. I want to use more than one character in the scene, so I need to guide the AI in order for it to be able to distinguish which of the poses belongs to each character.
@albertbozesan · 1 year ago
@chariots8x230 Character consistency is for sure one of the big challenges in AI art right now 😄
@ShawnFumo · 1 year ago
I haven't tried it yet, but apparently training a LoRA is pretty quick, with not a ton of images needed, and you can do one for each character. Then you could select one and use it in the prompt during inpainting (has a format like ). There is also a way to do it without inpainting now, using the Latent Couple and Composable LoRA extensions. That lets you split an image into sections, split the prompt with the "AND" keyword, and then specify a different LoRA for each part of the prompt. And someone just released a MultiDiffusion Region Control extension less than a day ago. I haven't used that yet, but it lets you paint an image with different colors and then specify a different prompt for each color (kind of like the segmentation feature of ControlNet, but more custom). That could potentially let you start the process more easily than with photobashing, since you could just paint out the regions of the image and prompt them separately to get started. Also, it wasn't mentioned in this video, but you can use something like the openpose editor extension to create the pose skeletons from scratch or load them from an image and manually adjust them (and there is now a Blender rig that lets you also pose fingers and toes and export the skeleton plus canny and depth maps for the fingers/feet). So there is quite a lot of flexibility out there for using a combination of photobashing, regional prompts, and inferred or manually created skeletons.
@JohnVanderbeck · 1 year ago
@albertbozesan I've been trying to do exactly this, but while I feel I have made great strides in using all the latest tools to make images, the training side is still extremely confusing and black-box to me.
@goldenlotus8968 · 6 days ago
Very useful!
@albertbozesan · 5 days ago
Glad you think so!
@BritBox777 · 1 year ago
Another tip for this process: instead of taking your smallest image and using the Extras upscaler (which doesn't change many details and just adjusts the pixels to fit a bigger resolution), take your original image and send it to img2img at the larger size with a low denoising strength. The upscale you'll get will be leagues better and require far fewer minor inpainting jobs. That's how I handle it, at least.
@albertbozesan · 1 year ago
I can’t generate at much larger sizes with my GPU, but this could work for others 😄 EDIT: Nevermind, this works really well!
@BritBox777 · 1 year ago
@albertbozesan Ah, but you can! My GPU is a crappy 4GB, trust me. Find out the max res you can create in img2img and get it as close to your desired size as possible before moving on to the Extras upscale. For me that's just 512x512 → 800x800, but it makes a massive difference. :)
@albertbozesan · 1 year ago
@BritBox777 I've tried this now and have to eat my words: excellent technique, thank you for sharing it with me! It works just fine on my GPU and is a great step before upscaling. There can be some "roughness" introduced, but the added detail more than makes up for it.
@BritBox777 · 1 year ago
@albertbozesan Glad to hear it. If you find any improvements, do let us know.
@fmt2586 · 1 year ago
You should really make more videos. I really love and appreciate them.
@albertbozesan · 1 year ago
Thank you! More to come.
@purpleyamjam5172 · 1 year ago
Great video! You explained this very clearly.
@albertbozesan · 1 year ago
Thank you!
@apotheases · 1 year ago
Amazing work! Thank you.
@winter5945 · 1 year ago
Amazing tutorial, everything explained clearly and concisely.
@albertbozesan · 1 year ago
Glad it was helpful! Thank you.
@beecee793 · 1 year ago
Can you explain the meaning of any of the settings? For example, preprocessor vs model: you always just set them to the same thing, but what if you didn't? What is the preprocessor for, and in what cases might you set them to different models?
@albertbozesan · 1 year ago
Hard to explain via text, but I will try. The models take a very specific type of input. You cannot enter just a png into a canny model for it to work, for example. The canny model requires black and white outlines. That’s what the preprocessor creates - it recognizes edges in your image and creates an image of so-called “Canny Edges”. That’s a format the model can then understand to influence your image. The same goes for OpenPose. The preprocessor is the piece that actually “sees” the poses. The model then uses them to influence the new generations. If, for some reason, you already have the poses in that specific multicolor format you see in the video, you don’t need the preprocessor. For depth it’s the same and probably easiest to understand. If you already have a depth map, you can skip the preprocessor - see my isometric game assets tutorial for a good example of that where I don’t use the preprocessor. Or check out my photo restoration vid where I use canny another more precise way.
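Editor's note: to make the preprocessor/model split concrete, here is a rough sketch of what a canny-style preprocessor does: it turns an ordinary image into the black-and-white outline map that the canny ControlNet model then consumes as conditioning. This uses a simplified gradient threshold rather than the real Canny algorithm (no OpenCV dependency), purely for illustration:

```python
import numpy as np

def simple_edge_map(img, threshold=0.2):
    """Very simplified stand-in for the canny preprocessor:
    grayscale gradient magnitude, thresholded into a black/white
    outline image -- the kind of input the canny ControlNet
    model expects instead of a raw photo."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)               # edge strength per pixel
    mag /= mag.max() + 1e-8              # normalize to 0..1
    return (mag > threshold).astype(np.uint8) * 255

# A toy "photo": a white square on a black background.
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 255
edges = simple_edge_map(img)
# White pixels appear only around the square's border; flat regions stay black.
```

If you already have an outline (or depth map, or pose skeleton) in the format the model expects, you can skip the preprocessor entirely, as described above.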
@MrRAYWJFAN · 1 year ago
Thank you, brother. Does this work with Artroom?
@albertbozesan · 1 year ago
I don’t think so. Artroom is cool for starting out, but I strongly recommend switching to auto1111 as soon as you’re comfortable.
@MrRAYWJFAN · 1 year ago
@albertbozesan Thank you, sir. I shall follow your examples and share with you any progress I make. If it's OK with you, I'd also like to share any speed bumps I face as I move forward.
@cerokai8408 · 1 year ago
Very eloquently explained and an impressive tutorial. You are great at making these videos and explaining the mechanics of each step. This will help me tenfold with my own AI art! Thank you very much.
@albertbozesan · 1 year ago
Thank you so much!