HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE for more AI updates. Thanks!
@charleskingmaker111 Жыл бұрын
Straight to the point, crystal clear explanation, showed the complete workflow and even provided the prompt. 10/10 tutorial right there ❤️
@damarh Жыл бұрын
man, you got a dollar for a hamburger?
@Aitrepreneur Жыл бұрын
Thanks for the tip :)
@Mr.Reality Жыл бұрын
the fk is that currency
@Deagzzz Жыл бұрын
@@Mr.Reality Indian rupees, that's like $1.20
@user0K Жыл бұрын
yep, it's like YT from 2008 haha. Or a slightly longer TikTok 😅 YT should stop pushing 10-minute videos or they'll lose to TikTok hard.
@maxington26 Жыл бұрын
Thanks so much for these continued straight-to-the-point tutorials, my AI overlord. These days I look forward to your videos more than any others on YT, I reckon
@Aitrepreneur Жыл бұрын
Glad to help!
@MonkeChillVibes Жыл бұрын
Same here ngl
@keshav_p Жыл бұрын
Same here. These save so much time! And the best info!
@Swearsan Жыл бұрын
Super short and to the point, with workflow and prompt, didn't hold hands installing modules and the rest. Love this guide, thank you for making it.
@androsforever500 Жыл бұрын
I like these shorter-style videos, very concise and easier to learn one thing at a time! When it's too long I feel overwhelmed.
@TheAiConqueror Жыл бұрын
brilliant! 🙌
@Aitrepreneur Жыл бұрын
Thank you 🙌
@jessedart9103 Жыл бұрын
There's more useful information in this ~3-minute video than in any other I've seen. Phenomenal work.
@DJVARAO Жыл бұрын
ControlNet is by far the best tool yet.
@danielrasmussen4862 Жыл бұрын
@Aitrepreneur, you continue to amaze me with your innovative Stable Diffusion tools, every video. Continue this behaviour and I'm sure your channel will reach its potential very soon, especially with all the new people learning about artificial image generation :D Thanks a lot!!!
@Aitrepreneur Жыл бұрын
Thanks, will do!
@alphacat4927 Жыл бұрын
Who wants to bet Olivio Sarikas will make a video on this tomorrow?
@Aitrepreneur Жыл бұрын
Shhhh don't tell him :D
@bryan98pa Жыл бұрын
Hahahaha
@76abbath Жыл бұрын
I appreciate the work from both of you!
@fernandosouuza3881 Жыл бұрын
And? Do you want intellectual property rights over tutorials?
@VepianV5Azaraz Жыл бұрын
I'm more interested in when Russian YouTubers make it
@Danny2k34 Жыл бұрын
Such exciting times! I'm waiting for the ability to retain a character across generations, which I feel we're almost at with everything coming out. It'll be a huge game changer if you can color-code different characters and the AI knows who is who and can generate them with ease based on the color coding. We just need that wizard to come out of the shadows!
@autonomousreviews2521 Жыл бұрын
Staying right at the front of the pack :) Thank you for sharing!
@Aitrepreneur Жыл бұрын
Thanks for watching!
@Icewind007 Жыл бұрын
Wow... I was literally just trying to make this myself with the other control net stuff. This is awesome!
@keshav_p Жыл бұрын
Thank you for the quick and prompt tutorial!
@Aitrepreneur Жыл бұрын
Glad it was helpful!
@gara8142 Жыл бұрын
This is great. I only wish we had a way to tell the OpenPose ControlNet model which way the character is facing. I usually get them facing either the back or the front when I want the opposite.
@nothingrhymeswithferg3744 Жыл бұрын
I also have this problem! Let me know if you find a solution :)
@gloudsdu Жыл бұрын
absolutely terrific for game dev
@real23lions Жыл бұрын
This is crazy. I'm speechless
@visualdestination Жыл бұрын
Thanks for being the go to channel for SD. Can you make more videos on training models that aren't character based? Objects or art styles?
@ShellSmashed Жыл бұрын
ControlNet is amazing, but can you cover it for Deforum since the update came out yesterday? From what I figured, it's used as batch2img.
@loszhor Жыл бұрын
Thank you for the information.
@IlRincreTeam Жыл бұрын
Great, great job! Thank you again, Mr. K, for the good content!
@Aitrepreneur Жыл бұрын
My pleasure!
@Jagent Жыл бұрын
Okay. That's fantastic. I can imagine using that to generate all kinds of sprite sheets for game dev.
@pladselsker8340 Жыл бұрын
I think the last problem that needs to be tackled in order to have efficient and easily controllable image generations is transferring details from one image to another. After that, no additional techniques will really be needed, just improvements on the previous ones. You can view the content of an image on a spectrum, ranging from image composition to details and style. In between, you have the colors, the clothes characters are wearing, stuff like that. In essence, this can be described with an embedding, sure. With such a technique, you could specify an image, and the model would use only the details, not the pose, and use those details to denoise the rest. With such a technique, you wouldn't even need LoRAs anymore. You'd just need a character sheet, ControlNet, maybe a prompt for specifying further details, and that's it.
@takeuchi5760 Жыл бұрын
For that to happen, the models would have to know what clothes are. It might seem weird, but I don't think they know what clothes are; they literally just gacha their way from random noise to something they've seen before. For them to have a coherent enough understanding of the individual aspects of an image to transfer them from one image to another with any appreciable accuracy would not only be complex, but I imagine not as effective as giving the model enough data to work on so it knows what it's supposed to denoise to, as we do in normal training. But I wouldn't be surprised if some madlads actually figured out how to do it, given how quickly this is advancing.
@ai_vids Жыл бұрын
another great video!
@Aitrepreneur Жыл бұрын
Glad you enjoyed it!
@pankajroy5124 Жыл бұрын
Thanks a lot!!! Please keep uploading such brilliant tutorials.
@AdenMackdaddy Жыл бұрын
This is awesome! Thank you! You have a new sub.
@aipamagica1 Жыл бұрын
Excellent vid... many thanks to you and to lekima for this. I'm wondering if you know of any place out there that explains the different colored sticks? I'm looking at the ones for just the head. It would seem to me that the blue part is the front of the face, and the opposite colors of the arms and legs denote left from right?
@alberti1122 Жыл бұрын
Thank you, learned new tricks!
@plagiats Жыл бұрын
That's incredibly useful!
@andresz1606 Жыл бұрын
Could this LoRA be used in the img2img tab for existing characters created or edited beforehand?
@helmutroll4773 Жыл бұрын
Perfect tut, thanks a lot! How can I continue from here to get this character into different scenes, situations, and even different but consistent styles? I guess I have to train a model with the generated images from this tut. Would this work? Do you have an in-depth tut for that too? Thanks a lot!
@cihiris2206 Жыл бұрын
I've been waiting for this! Is it possible to then get a 3D model using photogrammetry and these outputs? I'm testing this later.
@Aitrepreneur Жыл бұрын
You can definitely try!
@76abbath Жыл бұрын
Another great tutorial! Thanks again!
@Aitrepreneur Жыл бұрын
Glad you liked it!
@flonixcorn Жыл бұрын
Very nice!
@JoaoPauloDev8 ай бұрын
Thank you for the video as always. Why is it so hard to find an img2img tutorial with OpenPose? I've been looking for 2 days and only txt2img shows up for me. Is it because it's easier to manipulate? Real-case scenario: we have a model whose pose we want to change. Very difficult to find; everything I've gone through doesn't work for me... sad!
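For readers comfortable with Python, one way to approach what this commenter describes is outside the webui entirely: the diffusers library exposes an img2img + ControlNet pipeline. This is only a rough sketch; the model IDs, file names, prompt, and parameter values are illustrative assumptions, not settings from the video.

```python
# Rough sketch: re-pose an existing character image with img2img + OpenPose ControlNet.
# Assumes diffusers is installed and a CUDA GPU is available; paths and repo IDs are placeholders.
import torch
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("my_character.png")   # the existing character render
pose_image = load_image("target_pose.png")    # an OpenPose skeleton image for the new pose

result = pipe(
    prompt="full body, same outfit, white background",
    image=init_image,           # img2img source image
    control_image=pose_image,   # ControlNet conditioning image
    strength=0.7,               # how far the result may drift from the source
    num_inference_steps=25,
).images[0]
result.save("reposed.png")
```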
@vi6ddarkking Жыл бұрын
Neat trick, but would it be possible to create a true character sheet that we can then feed back into Stable Diffusion and use to create consistent images of that character in any pose?
@Aitrepreneur Жыл бұрын
Well yes actually ;)
@MaxKrovenOfficial Жыл бұрын
@@Aitrepreneur Could you show us a way to re-use this same character in other poses and situations just using default prompts? Or is the only way to do this to create an embedding or a LoRA model?
@gustavomachado6113 Жыл бұрын
This is great! I just can't find where to download the inpainting model; I didn't know there was a dedicated model for inpainting refinement... thanks!
@PhillipBuck Жыл бұрын
What are the little emojis that show up in your prompt fields at the bottom right?
@Aitrepreneur Жыл бұрын
It's Grammarly.
@jadenpolto1791 Жыл бұрын
I absolutely understand that you must be fed up with DreamBooth videos, but this is one of the most interesting features ever, and it has never been covered in real depth. So I would ask you to consider: as soon as you think the A1111 extension is a bit more stable, would you consider making a "definitive" training guide? Thank you!
@VahnAeris Жыл бұрын
Hey, thanks. Where can I find the inpainting model from the video? I can't find it on Civitai.
@OriBengal Жыл бұрын
Have you tried this with img2img? Or perhaps with ControlNet somehow using an existing image of the character? Say, for example, I want to make a character sheet of K, our AI Overlord... and I train a model, and I have a fantastic render that's got the style and lighting that I want... and now I want him facing different ways...
@ryanjames4233 Жыл бұрын
Quality content, thank you.
@aaronhhill Жыл бұрын
Wonderful! Thank you for this!
@Aitrepreneur Жыл бұрын
You bet!
@Oleksandr-Nikolaev Жыл бұрын
How to achieve the same style transfer effect as used in Artbreeder?
@muneebmuhammed3237 Жыл бұрын
Hi, how do I generate an image in img2img while keeping the background and interior of a room, changing only the objects inside the room?
@IskanderTheConqueror Жыл бұрын
What if you have a character already designed? Let's say you created a character's front pose in Midjourney and want to create a turnaround of the character with the same design/clothes. Does this work? It would be neat to see if it's possible.
@lorenzoverardo6576 Жыл бұрын
Really useful video, thank you! Do you think it would be possible to generate different perspectives of other things, like a bed or a library, for example?
@muuuuuud Жыл бұрын
Awesome info. Thanks as always, you rock! ^-^
@Aitrepreneur Жыл бұрын
Happy to help!
@imvengeance5077 Жыл бұрын
I need help. I just installed Stable Diffusion, but img2img does not generate images. Can someone help me? I have the AUTOMATIC1111 version.
@locomotionchannel Жыл бұрын
How come I don't see the inpainting options (only masked / mask mode)? Is it because of the inpainting model?
@user-xe2ek1td1x Жыл бұрын
I wonder if you could use this to render 3D characters
@lefourbe5596 Жыл бұрын
yes ! yes you can
@ractorstudios Жыл бұрын
Would this work with something like an object (a spaceship)?
@aneebartist7207 Жыл бұрын
Can I do a turnaround of my own character, which I created in Photoshop, without losing any detail?
@DangerSideburns Жыл бұрын
I'm a 3D artist who's been using Stable Diffusion for concept art and working on my own original characters. Is there any way I can feed in a concept image of a character that I've made and get it to do a turnaround of that, instead of relying on the prompt for 100% of the input?
@ethan-fel Жыл бұрын
ControlNet is a blast. And now you can use multiple ControlNet models at the same time. I don't really care about new SD models (2.x...) anymore. Devs are far from getting everything out of the 1.5 models.
@vaneaph Жыл бұрын
Purrfect !
@basiccomponents Жыл бұрын
amazing, thank you!
Жыл бұрын
Amazing tutorial! The only problem is that it isn't working for me. I tried your prompts and all the configuration, but nothing seems to work; it looks like it's only using part of the reference sheet. I always get like two poses, even with your prompt. Can you help me?
@mariosavovski9806 Жыл бұрын
Is there any way to get more consistent characters between generations? For example, I'm happy with this red-headed, blue-skirt-wearing character and I want to do more poses. Any way to ensure the character's persistence?
@jimdelsol1941 Жыл бұрын
Mind blown.
@otmanalami6621 Жыл бұрын
This could be used to create comic books; a great idea, I think, for a friendly SaaS product.
@karaemn Жыл бұрын
Amazing. Thanks
@itsalwaysme123 Жыл бұрын
Do you think there is any benefit to using something like CharTurner in addition to ControlNet or does the introduction of ControlNet make the former completely redundant? Also, stellar content as always
@Aitrepreneur Жыл бұрын
I tried, and no, you can use any model, it doesn't matter :)
@knoopx Жыл бұрын
redundant
@BainesMkII Жыл бұрын
@@Aitrepreneur From my own ControlNet experience, I have noticed you don't always get images that match the supplied pose. Even the video example showed Stable Diffusion generating four back views (though two were at least slightly angled left and right), when the supplied poses were for back, left, front, and right views. Though what you get from CharTurner can also be pretty random.
@pladselsker8340 Жыл бұрын
Yeah, it does make it redundant. I'd say you have even more control with ControlNet alone.
@BainesMkII Жыл бұрын
@@pladselsker8340 Wait, isn't ControlNet only for SD 1.5-based models? So it shouldn't work with SD 2.x models, while there is a version of CharTurner for those.
@aneebkhan6334 Жыл бұрын
can we do this with img2img?
@mustafahmed9101 Жыл бұрын
How do I rotate an existing character?
@CarpeUniversum11 ай бұрын
How do I add OpenPose to the model? When I try to select a model, all I have is "none".
@jucabalmacabro Жыл бұрын
Hey, I'm going to install Stable Diffusion on my PC. What video should I watch to install versions 1.5 and 2?
@travissmith5994 Жыл бұрын
If you're using Automatic1111, it can run both SD 1.5 and SD 2 models.
@jorgenascimento218 Жыл бұрын
Great video. Question: if I use the same seed and prompt, just changing the pose image, could I generate, for example, 10 different poses of the same character, preserving consistency across multiple generations?
@pladselsker8340 Жыл бұрын
No, because the details of the character also depend on the position of the character in the image itself. The details come from the noise that was "under" the character before denoising. If you change the pose or the position, you're going to run into the fact that denoising is too chaotic to have that kind of consistency in the details. There is going to be SOME consistency because of the conditioning, but not enough to make the exact same character. If you can provide the perfect embedding, then yeah, it's going to be terrifyingly consistent, but there's no way to do that other than with LoRAs and textual inversion (and other methods) right now, which is somewhat expensive, and hard to do if you're making a detailed OC.
@jorgenascimento218 Жыл бұрын
@@pladselsker8340 Thanks for the detailed reply. It actually makes a lot of sense, because even with the same seed and prompts the image changes a bit. When we change the pose on top of that, we add even more chaos to the process.
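To make the experiment discussed in this exchange concrete, here is a minimal sketch of the fixed-seed setup, written against the diffusers library rather than the webui. The repo IDs, pose file names, and prompt are placeholder assumptions; the point is only that the seed and prompt stay fixed while the pose image changes.

```python
# Minimal sketch: same seed and prompt on every run, only the pose image changes.
# Assumes diffusers and a CUDA GPU; repo IDs and file names are example placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

prompt = "character sheet, 1girl, red hair, blue skirt, white background"
seed = 1234

for pose_path in ["pose_front.png", "pose_side.png", "pose_back.png"]:
    pose = load_image(pose_path)  # pre-rendered OpenPose skeleton image
    generator = torch.Generator(device="cuda").manual_seed(seed)  # identical seed each time
    image = pipe(prompt, image=pose, num_inference_steps=25, generator=generator).images[0]
    image.save(f"out_{pose_path}")
```

As the comment above explains, the outputs will share some traits because the prompt conditioning is identical, but the fine details still drift between poses, which is exactly what the fixed-seed test makes visible.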
@chyldstudios Жыл бұрын
Awesome!
@abhishekvishal9132 Жыл бұрын
Sir, in my Stable Diffusion, where we write the prompt, why is it showing a limit of only 75, while in all of your videos I see the prompt limit at 150? Any solution, sir?
@pogiman Жыл бұрын
The model is not showing up in my model dropdown.
@zengrath Жыл бұрын
My problem with inpainting a lot in an image is that each inpainting pass degrades the rest of the image. I am on AMD with a 7900 XTX, so I have no way of using AUTOMATIC1111, but I've used Mage and also ONNX on Windows, and from what I've been told there's just no way to stop the rest of the image from degrading, since it happens when the image passes through the VAE. So it's frustrating when I can only inpaint something maybe once before unacceptable image quality loss. I guess Nvidia or AUTOMATIC1111 doesn't have this issue. Hoping to get ControlNet implemented in one of the AMD solutions soon so I can test this out and find out for certain whether this is still an issue or not.
@lefourbe5596 Жыл бұрын
I feel the pain :( Sure, these AMD cards are fast with plenty of VRAM... but that lackluster support... this must be infuriating.
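One partial workaround for the degradation described above, regardless of GPU vendor, is to composite only the inpainted region back onto the untouched original after each pass, so pixels outside the mask never go through the VAE again (similar in spirit to the webui's masked-inpaint paste-back). A minimal sketch with Pillow, assuming all three images are the same size and the mask is white where inpainting happened; the file names are placeholders:

```python
# Rough sketch: keep everything outside the inpainted mask identical to the original,
# so repeated inpainting passes don't degrade untouched areas via VAE round trips.
from PIL import Image, ImageFilter

original = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")   # output of the inpainting pass
mask = Image.open("mask.png").convert("L")               # white = inpainted region

# Feather the mask edge so the seam blends smoothly
soft_mask = mask.filter(ImageFilter.GaussianBlur(radius=4))

# Take the inpainted pixels inside the mask, original pixels everywhere else
merged = Image.composite(inpainted, original, soft_mask)
merged.save("merged.png")
```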
@RealShinpin Жыл бұрын
Hey man, love your videos. This shit is EPIC. Do you have a discord for the channel?
@senpaim7957 Жыл бұрын
I need a link for the inpainting model.
@hrsca595 Жыл бұрын
I love this, but I think I'm missing the point: why or when would you use this type of image? Can you do a follow-up video giving us real examples of why and when you might want to use this kind of image? 👍😃
@pladselsker8340 Жыл бұрын
Usually, when you make mangas or stories in general, you want a character sheet like this so that the way your character looks doesn't drift away over time. If you're going to make a consistent character, you want to make sure you know how it looks. Then, you can draw it over and over, and refer to this character sheet if you're in doubt.
@lefourbe5596 Жыл бұрын
It's also a great way to see if you have trained a character properly with these different poses. If a character is consistent, then it is good for making frame-by-frame animation. It is also gold to have that as a base reference for sculpting a character.
@OriBengal Жыл бұрын
Now they've come out with an update, "MultiControl", so you can run multiple ControlNet models at once.
@Aitrepreneur Жыл бұрын
Check my last video ;)
@OriBengal Жыл бұрын
@@Aitrepreneur Don't be silly... I check ALL your videos (and I don't think there's another YouTuber I can say that about).
Following along with this in my AUTOMATIC1111. For some reason, inpainting seems utterly uninterested in color prompts. I can't figure that out. I seem to have copied your settings perfectly, but it just won't register my text inputs at all. I installed the inpainting model as well.
@SkateBits Жыл бұрын
My OpenPose is not working. I get this error: RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead
@aipamagica1 Жыл бұрын
Are you in the right tab? I got that error when I was accidentally in the img2img tab. You need to be in the txt2img tab.
@SkateBits Жыл бұрын
@@aipamagica1 Thanks!
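For anyone hitting a similar error elsewhere: the 4-vs-9 channel mismatch in that message usually indicates that a regular checkpoint and an inpainting-style input (or the reverse) got combined somewhere in the pipeline. A quick way to see which kind of UNet a checkpoint ships with, sketched with diffusers; the repo IDs are only examples and this is not the webui's own diagnostic:

```python
# Sketch: inspect how many input channels a checkpoint's UNet expects.
# A regular SD 1.5 UNet takes a 4-channel latent; an inpainting UNet takes 9 channels
# (4 latent + 1 mask + 4 masked-image latent). Repo IDs below are example placeholders.
from diffusers import UNet2DConditionModel

regular = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
inpaint = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="unet"
)

print(regular.config.in_channels)  # -> 4
print(inpaint.config.in_channels)  # -> 9
```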
@gjakuipers Жыл бұрын
wow! thx!
@Aitrepreneur Жыл бұрын
Enjoy!
@nothingrhymeswithferg3744 Жыл бұрын
This is great, but you didn't get a full turnaround! She's facing away twice. This is the problem I keep having as well; it would be awesome if anyone could resolve this issue.
@KageBlink Жыл бұрын
One problem I have with this is that there are two back-facing characters, and it usually never generates someone facing forward xD
@leafdriving Жыл бұрын
...then just select your 12 GB card settings (1024x1024) or your 6 GB card settings (768x768), or it will crash hard with out-of-memory errors. :)
@Aitrepreneur Жыл бұрын
I mean it should work even without that, it will just take longer :)
@damarh Жыл бұрын
Where are the prompts?
@ywueeee Жыл бұрын
Now do this with Dreamlike Photoreal or other realistic models.
@cybermad64 Жыл бұрын
Cool process! But your example doesn't quite work: you don't have any front pose :P
@Aitrepreneur Жыл бұрын
Yes, the third pose sometimes creates the character from the back; you sometimes need to try it multiple times to get it right.
@PuffyPythonAI Жыл бұрын
It also isn't strictly necessary to add "character sheet" to the prompt.
@gabe22222 Жыл бұрын
I'm sorry for being 19 seconds late.
@Aitrepreneur Жыл бұрын
No worries you'll do better next time :D
@FLEXTORGAMINGERA Жыл бұрын
Bruh, none of the tutorial works for ControlNet, it gives a memory error.
@Aitrepreneur Жыл бұрын
Have you asked on the discord for help?
@GreyMASTA Жыл бұрын
Not a single angle was facing the camera (except for the portrait). That's not a good character sheet.
@alexsanders8881 Жыл бұрын
Please do Colab + Civitai.
@李瑞和-d5m Жыл бұрын
It just generates 4 different people; how stupid is this?