Just to clarify, my goal is not to have to train LoRAs or Dreambooth models to achieve consistency; I'm well aware of that. The problem is that it's not accessible to everyone and difficult for most people to do. Do you have any tips for consistent characters?
@blacksage81 1 year ago
In addition to naming and nationality, I'd say the Seed, or Noise Seed, is super important to keep track of. I'm not 100% sure if ADetailer will generate a Seed number, but if it does, that is one number to lock in (or choose for yourself) if you want a consistent gen. I've been designing characters with SDXL in Comfy with the FaceDetailer via the Impact Nodes addon. With the text2img workflow I'm using, as long as the FaceDetailer node's Noise Seed is the same, I get anywhere from 75-90% consistency with my gens.
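To make blacksage81's point concrete: the consistency comes from freezing every identity-critical number, including the main noise seed and the FaceDetailer/ADetailer seed, and varying only the scene. A minimal sketch in plain Python; the field names and the character name are hypothetical, not a real ComfyUI or A1111 API:

```python
# Sketch of "locking a gen": freeze everything that shapes identity
# (seeds, prompt skeleton, sampler settings) and vary only the scene.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the locked settings can't drift between runs
class GenConfig:
    seed: int              # main noise seed to lock in
    face_detail_seed: int  # FaceDetailer / ADetailer seed, locked separately
    prompt: str            # prompt skeleton with a {scene} slot
    steps: int = 30
    cfg: float = 7.0

base = GenConfig(
    seed=123456789,
    face_detail_seed=987654321,
    prompt="photo of Anya Dobreva, 40yr old woman, {scene}",  # hypothetical name
)

def render_prompt(cfg: GenConfig, scene: str) -> str:
    # Only the scene changes; seeds and the rest of the prompt stay fixed.
    return cfg.prompt.format(scene=scene)

park = render_prompt(base, "walking in a park")
cafe = render_prompt(base, "sitting in a cafe")
```

The frozen dataclass is just a way of making the "lock it in" discipline explicit: every run reuses `base`, so the two prompts differ only in their scene text.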
@MonzonMedia 1 year ago
@@blacksage81 Great advice for sure! I've yet to try this out in ComfyUI, but I have been experimenting with SDXL and ADetailer as well in A1111. Appreciate you pointing that out!
@ne99234 1 year ago
Interesting technique with the long name. From my experience this really shines in img2img. Prompting a name, nationality, body type, etc. lets you convert almost any image into "your" character and background with a low denoise strength of ~0.4-0.65. It doesn't work with every image/pose, but it's a great way to get a lot of images... which could also be used to train a LoRA further down the line.
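ne99234's img2img recipe boils down to one knob: a denoising strength low enough to keep the pose and composition, but high enough to shift the identity. A small helper sketching that; the 0.4-0.65 window is the commenter's empirical range, not a hard rule, and the settings keys are hypothetical:

```python
# Sketch of the img2img character-transfer recipe: keep denoising strength
# inside the window where composition survives but identity can still change.
LOW, HIGH = 0.40, 0.65  # empirical sweet spot quoted in the comment above

def clamp_denoise(strength):
    """Clamp a requested denoising strength into the identity-transfer window."""
    return max(LOW, min(HIGH, strength))

def img2img_settings(character_prompt, strength=0.5):
    # Settings in the rough shape an img2img call takes (key names hypothetical).
    return {
        "prompt": character_prompt,
        "denoising_strength": clamp_denoise(strength),
    }

settings = img2img_settings("MyName Dobreva, 30yr old woman, athletic", strength=0.9)
# settings["denoising_strength"] is clamped down to 0.65
```

Going above the window tends to discard the source image's pose; going below it keeps the original person's face, which defeats the purpose of the swap.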
@GamingInfested 9 months ago
Making an embedding of one model, keeping the parameters somewhat consistent, icbnp ckpt
@hehe42069-k 8 months ago
Haha, love how you point out not to notice the hands in your first gen, and yet they're nearly perfect; something I pretty much never get my first time around.
@MonzonMedia 8 months ago
😊 At this point, hands in AI images are like a meme...hahaha! That being said, it's much better these days and at least there are ways around it. 👍
@DasLooney 1 year ago
Just finished the video. Will be experimenting with ControlNet before long. Bookmarking this one for when I do. Glad you touched on what few people do, which is that 100% perfection with Stable Diffusion is really not possible. The whole point is getting as close as possible to the original, which you touched on right at the beginning! Well done!
@MonzonMedia 1 year ago
Hey dude, really appreciate the support! I've watched many a video to see if I'm missing something, but every time I'm left somewhat disappointed. I would rather people just be honest and say "we can get close to consistent results but not 100%", but as a content creator I get that they want the click and the view 😬 We are close to getting that consistency though, and considering where we were a year ago, we've come a long way. I've got some tips and hacks to get the consistency, but of course I'm looking for the easiest and fastest way possible. AI is supposed to help with getting things done easier and quicker, not paying $$$ for GPUs just so we can train models and LoRAs to achieve consistency 😂
@DasLooney 1 year ago
@@MonzonMedia You're welcome. Yeah, a lot of people out there don't realistically state what these programs can do. It's as frustrating as trying to replicate something someone did, only to find out they lied about the steps or edited like mad.
@MonzonMedia 1 year ago
I hear ya man, well...you can count on me telling it like it is. 👊🙌
@emileklos 1 year ago
Very nice way of explaining, simple yet detailed. I just started with AI generation, and regarding face consistency, I use After Detailer with a lot of success, but usually only for the face. I'll add ControlNet to the workflow for hopefully more consistency in the clothing. The last challenge would be a consistent environment. If I describe a location, it will still give me a variety of backgrounds that don't really match in consistency.
@MonzonMedia 1 year ago
You're welcome, and glad you got some value out of the video. Yeah, that's another tricky thing; there are some workarounds that can work, but they're somewhat limited too. I'll cover it in an upcoming video, but a few things to consider: utilizing the same seed can help, and using a bigger aspect ratio with prompts like "character turnaround", "character sheet", or "multiple positions" can get some decent results. Repeating it several times is the challenge, however. I'm slowly getting there though. Will share more soon! 👍
@tengdahui 1 year ago
I have a better way to achieve consistency of the environment and characters.
@babluwayne3802 10 months ago
How?? @@tengdahui
@Shabazza84 1 year ago
Love it. It's often saving me from having to train a LoRA.
@MonzonMedia 1 year ago
Yes exactly, it's not perfect but it helps a lot. Also, the ControlNet IP-Adapter does something similar. Will be doing a video soon.
@jdesanti76 1 year ago
In the equation for consistent characters, I use variables like age and body type; that helps a lot.
@MonzonMedia 1 year ago
Yes, that's a good point as well. Not sure if you noticed, but in my prompt I used "40yr old" because Realistic Vision tends to make women too young sometimes hahaha! So I used that to balance it out 😁
@freeEnd_ 11 months ago
True lol, I type "20 year old woman" and it makes like a 14 year old girl for some reason @@MonzonMedia
@BabylonBaller 1 year ago
Appreciate the walkthrough my friend
@MonzonMedia 1 year ago
You're welcome! I'm overdue for a follow up video on this...stay tuned! 👍
@DrSid42 1 year ago
Mixing many random names will give you the model's default average face. Every model has one. It is affected by race and age, but it is there. If you want a different face, I suggest mixing celebrities... 2 are usually enough; give them weight 0.5, or do an XYZ plot spread to find what you are looking for. Not only is the face consistent, you can also control facial features this way.
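DrSid42's weighting trick can be written with A1111's `(token:weight)` prompt attention syntax. A small helper to build the blended prompt; the names here are placeholders, not recommendations:

```python
# Sketch of the celebrity-blend trick above: mix two faces by weighting
# each name at 0.5 using A1111's (token:weight) attention syntax, so no
# single likeness dominates the result.

def blend_faces(names, weight=0.5):
    """Wrap each name in (name:weight) and join them into one prompt fragment."""
    return ", ".join(f"({name}:{weight})" for name in names)

prompt = blend_faces(["first celebrity", "second celebrity"]) + ", 40yr old woman, photo"
# prompt == "(first celebrity:0.5), (second celebrity:0.5), 40yr old woman, photo"
```

To run the XYZ-plot spread DrSid42 mentions, you would sweep `weight` (say 0.3 to 0.7) as one axis and compare the resulting faces in a grid.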
@MonzonMedia 1 year ago
I've touched on using celebrities in previous videos, and I didn't want to go in-depth on faces in this video. However, if you notice, the names I use have parts of "celebrity" names, which is another sort of hack I discovered, especially if they are fairly well known. For example, "Dobrev" from Nina Dobrev: by changing the first name, some models will still pick up certain traits of that person. Besides, I've been meaning to do an updated video on that as well. 👍 Good point nonetheless.
@DrSid42 1 year ago
@@MonzonMedia Yeah... but you have to experiment a lot with those names... some have strong associations, but most don't.
@henrysingletary 1 month ago
I'm not very tech savvy, so I tried this method (using the ControlNet extension and Realistic Vision as my model) and my results were not good for some reason. The image looked weird, like grainy, not very clear; I don't know the proper words to describe it. But then I tried something called the "ReActor" extension and it worked like a charm! The face is like 90% consistent, like you said. Have you heard of the "ReActor" extension before?
@MonzonMedia 1 month ago
Yes of course, I use something similar called FaceSwapLab. kzbin.info/www/bejne/fJjdk5SEiM-srKssi=gquZwlKmjWQLKnPr Not sure what your issue is though. Also, Realistic Vision is an SD1.5 model; try SDXL or even Flux if your system can handle it.
@GhettoDragon_ 10 months ago
Can this also be done with Fooocus? If so, what are the best base model, refiner, and LoRA to use?
@DrDaab 1 year ago
Wow, another great tutorial. Who would think that using non-existent names would be really helpful? One of the many errors I and many others got with the Roop install is that a component was deprecated, with a link to read some technical info. Not useful to those of us who need the 1-click installs that you explained so well. In addition to Roop, there are other projects that do the same thing (FaceSwapLab, sd-webui-roop, Gourieff/sd-webui-reactor, etc.).
@MonzonMedia 1 year ago
Hello my friend, always great to hear from you. Yes, using names will help shape the look, and oftentimes I may use a celebrity last name to give it a similar characteristic. I looked around, and it seems that Roop may not be supported going forward, and even if it is, I find it's not a very reliable extension. I am however using FaceSwapLab and trying to get more familiar with it, so I think I will create content on that one instead.
@Onur.Koeroglu 1 year ago
Hey man... Great tutorial... I learned some new techniques 😎✌🏻 Thanks 💪🏻
@MonzonMedia 1 year ago
That's great to hear! There is more to come, as I want to focus a bit more on this subject. Glad you got something out of it, and appreciate the feedback. 👍
@WetPuppyDog 1 year ago
First off, great video. I love your pace and explanation of your process. I have found great consistency in my models and images. However, I am finding a great deal of degradation in the quality of the images that I produce. Creating the initial reference image is clean and sharp, but the images derived from ControlNet come out less than great. Is there something I'm missing? I have double-checked my settings and even paused your video to compare. I'm using ControlNet v1.1.411 and SD 1.6 for my workflow.
@MonzonMedia 1 year ago
Hey there, appreciate the feedback and comments! You know, the more I used the reference-only ControlNet, I started seeing this too, but I wasn't able to find the root cause. I'd try to duplicate it, then it would go away. I'm going to do more tests; I have my suspicions as to why it happens, but I want to be 100% sure I can recreate it. I find switching models, then back, tends to get rid of it. It's very peculiar. With that being said, I'm outlining a video using the IP-Adapter, which works very similarly to this method, that you may want to watch when it's out. 👍
@zoezerbrasilio2419 10 months ago
Can you do a similar video on achieving great consistency, including clothes, but using Fooocus instead? What should I do in that case?
@MonzonMedia 10 months ago
It's pretty much the same; I will be editing that video next.
@akiozfn6694 26 days ago
When generating photos of real people, is there an option to be more accurate with things like skin color, hair type, etc. without an extra prompt fixing it? Currently it can generate a photo with a completely different hair type and color but the same face, unless I fix it through the prompt.
@javadrip 11 months ago
How does giving the character names help? Does SD continually learn from the text input?
@MonzonMedia 11 months ago
It helps with keeping the face consistent. Each model tends to have a default "look", so naming them and giving them an ethnicity helps to shape the face differently while keeping it consistent. Some models have a stronger default look than others.
@syu485 1 year ago
Hi! The hands in your pictures were normal. How did you do that? Is it owing to the pre-trained model? I used other models and always get weird fingers.
@MonzonMedia 1 year ago
Hey there, yeah, it's always good to start with a good model that does hands well. Realistic Vision does a great job of that; I mean, it's not perfect, but it's one of the better models that can do hands pretty decently. On some of the images I may have done some minor inpainting, but not too many of them.
@syu485 1 year ago
@@MonzonMedia Got it! Thanks for your response.
@MonzonMedia 1 year ago
You're very welcome! I've been working on a video on the topic of hands; I'm just trying to see all the different approaches we have at our disposal. Hopefully I can get the first video done by next week, as it will have to be at least 2 videos. Stay tuned!
@maryjanechukwuma9707 6 months ago
Please, I need your help. I have two pictures and I want to change the pose; I want picture A to have the same pose as picture B. Please, how can I do it?
@lilillllii246 11 months ago
Is it possible to apply clothes and have them look exactly the same when they are slightly different?
@MonzonMedia 11 months ago
Pretty much the same process, but it's still difficult to get them exactly the same. You'd have to generate a lot of images to get some that look similar. I'll be doing a follow-up on this very soon.
@BrettArt-Channel 4 months ago
This is a good place to start. Saved me a lot of time.
@edtomlinson1833 1 year ago
What if you created your own Dreambooth model using a set of pictures of the same person? How do you generate consistent characters using that model? I am having trouble with this.
@MonzonMedia 1 year ago
Dreambooth models will only really help with faces and body; clothes and attire will still be random. Training LoRAs is a way to get close to consistent clothing; still not 100%, but pretty close.
@augiestudio 1 year ago
Great video! You'll have to give Augie a try sometime :)
@타오바오-h8l 1 year ago
Thank you always. I succeeded in changing my face through Roop; is there a way to change my outfit and hairstyle naturally?
@awais6044 7 months ago
Did you find any solution?
@GeorgeLitvine 1 year ago
Hi MM! Could you please teach us a similar technique for when we have two characters, in order to keep consistency for both?
@MonzonMedia 1 year ago
Absolutely, thanks for the suggestion. It would be a similar process, though a bit more tricky.
@GeorgeLitvine 1 year ago
Hi MM! Thank you for your interest in that suggestion. Would you please do it when you get time? @@MonzonMedia
@AbdullahKhamis-b6x 1 year ago
I did not understand: what will happen when I close SD and reopen it again? How can I get the same character again? What was the role of giving a name to the character?
@MonzonMedia 1 year ago
You'll have to use a similar prompt and the exact same settings to get the same character, changing only the environment. I have a follow-up video on this coming soon. Naming the character will keep the face consistent.
@AbdullahKhamis-b6x 1 year ago
Thank you! We are waiting for the new video. @@MonzonMedia
@nefwaenre 1 year ago
Thanks so much for this video! I have a question: is there a way to completely change the shirt someone is wearing, or to add a shirt to a guy who's not wearing one, without changing the pose? I tried that using ControlNet OpenPose (adding a white shirt to a guy who doesn't have one), but it just keeps creating more half-naked men's pictures. And when I go to change a shirt colour from its original colour to whatever I want, if I set the weight high, it botches the entire pose. Any workarounds, please?
@MonzonMedia 1 year ago
Absolutely, you can just use inpainting and mask the area you want to change. Play around with the denoising strength for more variation.
@ne99234 1 year ago
For this kind of task, I like to create a ControlNet canny image and use an image editor to paint out the parts I want to change with black. (At this step you can also paint in new details with a fine white line.) Then use the new canny image and prompt a t-shirt. Because the canny image no longer has the information that there is a naked torso, everything that's black can be changed with the prompt, for example the background, clothing colors, or hair color.
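The canny edit described above works because a canny map is just a grayscale image: black pixels carry no edge constraint, so the prompt is free to redraw them. A pure-Python stand-in for the image-editor step, using a toy 2D list instead of a real image file:

```python
# Sketch of the canny-editing trick: zero (paint black) a rectangular region
# of an edge map so ControlNet imposes no structure there, leaving the prompt
# free to redraw that area (e.g. replace a bare torso with a t-shirt).

def black_out(edges, x0, y0, x1, y1):
    """Return a copy of the edge map with the [x0,x1) x [y0,y1) region set to 0."""
    out = [row[:] for row in edges]  # copy rows so the original stays intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = 0  # 0 = black = no edge constraint
    return out

edge_map = [[255] * 8 for _ in range(8)]   # toy 8x8 edge map, all edges white
edited = black_out(edge_map, 2, 2, 6, 6)   # erase the "torso" region
```

In practice you would do the same thing to the canny PNG in any image editor, then feed the edited map back into ControlNet with the new clothing prompt.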
@Slav4o911 1 year ago
Try the "Inpaint Anything" extension; it has a built-in mask tool to select clothes or other objects in the scene and change them.
@jason-sk9oi 1 year ago
Pro tips 👌 😎
@MonzonMedia 1 year ago
You're welcome, much appreciated! 👊👍
5 months ago
Thank you
@MonzonMedia 5 months ago
You're welcome
@falsettones 1 year ago
Hold on, this is like the ones you sent on the messenger group, right?
@MonzonMedia 1 year ago
Sort of 😬😊 but yeah, very easy and quick to do now. 👍🏼
@falsettones 1 year ago
@@MonzonMedia . This is so fascinating. XDDDD
@MonzonMedia 1 year ago
@@falsettones it really is!
@JAMBI.. 9 months ago
Can you just transfer one's soul into the virtual self?
@Clayden 11 months ago
How do you fix the hands???
@MonzonMedia 11 months ago
I'll be covering this soon. It starts with a good model; LoRAs help, and there is also ControlNet.
@JoeJon3s 9 months ago
I get an error when running the last command.
@ai_and_gaming 1 year ago
Great tutorial
@MonzonMedia 1 year ago
Thank you! More to come on this topic soon 👍🏼
@raditedite 1 year ago
Did you notice overexposed results after using ControlNet as reference? I've tried some pictures and it makes overexposed results. How can I fix that? Is it because of the VAE or something?
@MonzonMedia 1 year ago
Now that you mention it, I did have a random issue with underexposure, but I just thought it was a one-off type of thing. I ended up switching models, as I was just experimenting anyway, and it didn't happen after. I haven't been able to recreate it to see what the issue is. Does this happen on a consistent basis? Can you recreate it?
@Slav4o911 1 year ago
You should lower the strength of the ControlNet. 1 is the default, but it's usually too high.
@raditedite 1 year ago
@@MonzonMedia Yup, still an issue for me when using realisticvision5.1vae
@AliHaidari1343 1 year ago
Hello my dear friend, good night, and very nice video ❤😂
@MonzonMedia 1 year ago
Thank you sir 👍🏼
@SangeetaYadav-bd8kf 3 months ago
What software do you use?
1 year ago
Good video, Thx.
@MonzonMedia 1 year ago
You're welcome! Will be following up on this video soon! 👍
@ExplorewithZac 10 months ago
A hypothetical name just directs the seed; it does not direct the seed any more than any other descriptive word would, and therefore it is fairly meaningless to include a name, IMO. Maybe I'm missing something, or there's something I'm not fully understanding. What you could do instead is save some very KEY descriptive words in a document and make sure to always use those 3-10 descriptive words along with your seed. The character should look the same every time unless you change up the LoRAs you're using. LoRAs cause your seed to be interpreted differently.
@ExplorewithZac 10 months ago
The reason you may think that using a name works is because it will work... What I'm saying is that using a name is not as effective as using words that actually describe your character and making sure to always use those words and the same seed.
@Gromst3rr 1 year ago
Thanks!
@MonzonMedia 1 year ago
You're welcome!
@Nafanya-The-Cat 2 months ago
OMG... in this example it's the PROMPT that's responsible for character consistency: two names are specified in it, and that's exactly what it draws everywhere! Not ControlNet.
@DVDKC 10 months ago
It doesn't matter since you can faceswap easily...
@MonzonMedia 10 months ago
Sure, but you have to create a consistent face first, right? Most models have a default look, so you would have to tweak the look to what you want. Then yes, face swap all you want, but that doesn't address consistent clothing or attire.
@sachinbahukiya1517 9 months ago
Which website?
@MonzonMedia 9 months ago
This is a local platform called Automatic1111.
@3diva01 1 year ago
Getting consistent characters, clothing, hair, and backgrounds/environments is extremely difficult unless you start with a base image. That's why tools like Daz Studio are massively helpful for character, clothing, hair, and environment consistency.
@MonzonMedia 1 year ago
Yes, for sure; starting with some sort of base, even a simple drawing, is always better for control and consistency. I'll be touching on that soon as I develop this series of videos. I think, though, if you can develop some simple workflows like in this video, it will only make developing consistency a lot easier once you can utilize other mediums.
@3diva01 1 year ago
@@MonzonMedia I completely agree! The tips and techniques you've outlined in this video are very helpful for more character consistency! Thank you for the great video! :D
@MonzonMedia 1 year ago
You're very welcome! I'm glad you brought up Daz3D. I didn't use it much in the past, but I'm a bit familiar with it from my Cinema 4D days. I recently started to pick it up again to use the models and other assets with ControlNet to experiment with character development. Do you use it much, and are you an experienced user?
@3diva01 1 year ago
@@MonzonMedia Full disclosure: I'm a Daz3D PA. But I've used the Daz Studio program for years, even well before I started selling 3D assets there. I am happy, though, with how it really helps with character consistency. The ability to use Daz Studio renders to control the exact outfit, hair, character, and environment has been really helpful in my work with Stable Diffusion. A Daz Studio render + ControlNet allows for some pretty impressive control over your characters. I was surprised how useful it is for getting exactly the characters, environment, and clothing I want. Having a base image that you can control at that level is hugely helpful, IMO.
@MonzonMedia 1 year ago
Oh wow, that's amazing! I can totally see its use cases. I'd love to see your work, if you don't mind sharing? To be honest, this is how I want to use AI: along with pictures and drawings, I find I have more control using assets versus starting from scratch just from prompting. I've been trying to make time to learn Daz, as it's been so very long since I've tried it, and when I was using it, it was just basic concepts. Nevertheless, I think there are some very useful assets, even the free ones, just to get familiar. You will see very soon how I will be shifting the focus of this channel more to the creative side. I mean, what's the point if we can't create what we envision in our heads, right?
@borutesufaibutv1115 1 year ago
That's Sara G hahaha
@MonzonMedia 1 year ago
😂 haha! Now that you mention it, I see the resemblance! 😊
@borutesufaibutv1115 1 year ago
@@MonzonMedia Nice tut, idol, trying out your method ❤️
@MonzonMedia 1 year ago
@@borutesufaibutv1115 Nice! Let me know how it goes. Just a heads up, I started getting errors with Roop, and I found out that it may not be supported anymore. There is another extension called FaceSwapLab that does the same thing and is more advanced. I'm trying it out and may do a video on it soon. 👍
@borutesufaibutv1115 1 year ago
@@MonzonMedia Cool! I'm actively using Roop. Thanks for the heads up about the other tool; I was really looking for a new way to do face swaps. TBH, I think Roop is good but can be improved, especially the way it leverages the CodeFormer/GFPGAN algo. Will deffo let you know, sir 👌
@maraderchikXD 1 year ago
An easy way to figure out it's AI generated is that all the women's jeans have fake pockets, and she can't put her hand in them. 😄
@MonzonMedia 1 year ago
😂 lmao right? So true! We need a negative embedding called deep pockets! Hahaha!
@Macatho 1 year ago
Why not just create a LoRA of your character?
@MonzonMedia 1 year ago
Oh yes, absolutely. I did pin a comment saying my goal is to find methods without having to train LoRAs or Dreambooth models. I'm always looking at "easier" solutions and options for people who don't want to spend the time on something like training. I do, however, intend on covering that as part of this series.
@3diva01 1 year ago
Not everyone has a computer that can handle LoRA training. Videos like this are hugely helpful for those of us on older machines or who don't have the ability to create LoRAs. :)
@IceMetalPunk 1 year ago
I have an 8GB GPU. I technically "can" train a LoRA, but it would take literally 24 to 48 hours of training for a single LoRA with relatively few training points. If we can get most of the way there without that hassle, I'm happy.
@Macatho 1 year ago
@@IceMetalPunk Understandable, and for some it can be a hefty $ amount. A used RTX 3090 can be as cheap as $800, btw.
@Macatho 1 year ago
@@3diva01 Understandable, but it costs about 5 bucks to rent a GPU that can train a LoRA in less than 2 hours... so money really isn't the issue, is it? Also, you can get a used RTX 3090 for $800; sure, that is a lot for some people, I guess.
@DJGaitchs 3 months ago
Who still uses Stable Diffusion 😂
@putinninovacuna8976 9 months ago
I mean for assian people you just need a single picture cause they all look the same lmao
@MonzonMedia 9 months ago
Bruh! 😆
@sitr2516 1 year ago
The truth? I demand Lies sir! lie to me!!!!!
@MonzonMedia 1 year ago
LMAO! 😂Ok well, the truth is...these are real photos! Deformed hands and all! hahaha!😬Appreciate the good laugh man.