Quickly fix bad faces using inpaint

24,524 views

Bernard Maltais

This is a short tutorial on how to fix bad faces in stable diffusion using the inpaint feature.

Comments: 57
@thenewfoundation9822 · a year ago
Finally someone who gave a clear and working instruction on how to improve faces in SD. Thank you so much for this, it's really appreciated.
@MyAI6659 · a year ago
You are one of the few people on YouTube who actually knows what he's talking about and how SD works. Much love Bernard.
@exile_national · a year ago
512x512 and "original" instead of "fill" for masked content saved my day, thank you Sir Bernard! You are indeed the VERY BEST!
@AscendantStoic · a year ago
Learning this trick is quite a game changer.
@anuragkerketta6708 · a year ago
Your tutorial did the job for me, thanks a lot, and subscribed. Post regularly and I'm sure you'll get a lot of followers quickly.
@metasamsara · a year ago
thank you, really clear and concise tutorial
@kritikusnezopont8652 · a year ago
Amazing tutorial, thanks! Also, while watching it, I noticed the VAE and hypernetwork selection options on the top. I'm just wondering if that is an extension or something, because I don't have those options on my Automatic1111. Where can we find those please? Thanks!
@mrzackcole · a year ago
I also noticed that. Can't find it in extensions. Would love to know if anyone figures out where we can download it
@Ilovebrushingmyhorse · a year ago
The 512x512 inpaint method helped dramatically, but I think the denoising shouldn't be so high if you want to keep the image close to the original: the less you want to change, the lower you should set it. I set mine all the way down to 0.01-0.05 just to add a little detail sometimes. Also, as far as I know, keeping the same prompt isn't always necessary.
@Doop3r · a year ago
I'm running into an odd issue. I'm following every single step to the T, and sometimes it works just as shown here... other times, instead of just the face, I'm getting a scrunched-up version of the entire photo in the masked area.
@sestep09 · a year ago
Lowering the denoising strength worked for me when I had this happen.
@baobabkoodaa · a year ago
I'm unable to reproduce similar quality results. Can you share more details on what you did to achieve this level of quality? Are you running in half precision or full precision mode? Did you toggle on the "color corrections after inpainting" option in settings? Where did you get the Lora model for this? I tried all the Ana De Armas Loras in Civitai, but it looks like the one you used in this video was not on Civitai. I suspect that your Lora model is the "magic" here that allows good inpainting results, possibly in conjunction with some settings you have toggled on.
@mkaleborn · a year ago
Not sure I can help on the Lora side. But with my vanilla Automatic1111 and custom checkpoint merges, I had good results with this workflow:
1. Generate a txt2img render of a lady standing in some wooded/natural setting, medium distance, with a face that was decidedly 'sub-optimal' (I purposely did not do Hi-Res upscaling).
2. Upscale that original 512x768 image in the Extras tab: 2.5x, ESRGAN_4x (I've switched to this from SwinIR_4x), no other upscale settings changed (all default).
3. Copy the entire positive and negative prompt from the txt2img tab over to Inpaint, then copy the newly upscaled image to Inpaint, same as he did in his video.
4. Mask out the model's entire face and a little bit of her hair (but not all of it).
5. Use these inpainting settings:
- Resize mode: Just resize
- Mask blur: 4; Mask mode: Inpaint masked (all defaults)
- Masked content: Original - I'm pretty sure this is the critical setting for this to work. It keeps the original 'bad' face as a reference for general composition when drawing the new face; otherwise it tries to render the entire prompt, body and all, or just doesn't work properly.
- Inpaint area: Only masked (for the reasons he stated in the video - you only want it to focus on rendering your masked area at the resolution you select below)
- Only masked padding, pixels: 64 - after some tests I doubled the padding from 32 to 64. I found this helped the AI 'see' the surrounding colour palette better, letting the new face blend in with her neck, shoulders, and overall skin tone.
- Sampling method: Euler a (same as the txt2img sampler); Sampling steps: 60 (same as txt2img)
- Width: 512, Height: 512 - for the exact reason he gave in the video
- CFG scale: 7 (same as txt2img). I didn't play with this setting, but I think it's fine left at the same level as your original render.
- Denoising strength: 0.3. My first attempt at 0.7 was only 'roughly acceptable'; lowering it to 0.3 gave much better results - a more natural fit for her neck and head position, since it uses the original 'ugly' face as a closer reference point while still rendering the whole face at 512x512.
- Seed: -1
- Restore faces: checked (I did not try it unchecked)
And that was it. I think the flexibility comes from denoising and CFG in how the image will look and what variety you get across multiple renders, but a lower denoising with an 'Only masked padding' set high enough to 'see' the surrounding area really helped me get a face that blended in nicely with her body and the overall colour palette. Anyway, that's just my very brief experience trying to fix images with 'broken' faces at medium/far model distances. Hope it helps!
@markdavidalcampado7784 · a year ago
@mkaleborn I'm going to try this now, it looks promising! My biggest problem with inpaint is that blurry artifacts are too visible once the image is upscaled. Any fix for that? Sorry for my English - I spent almost 15 minutes writing this.
@kneecaps2000 · a year ago
You must set the inpainting area to "mask only" and also set the resolution to 512x512.
@whyjordie · a year ago
this works!! thank you! i was struggling so hard lol
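
For anyone who prefers scripting to clicking through the UI, here is a minimal sketch of how the settings discussed in this thread map onto Automatic1111's img2img API (assuming the webui was started with --api; the field names match recent versions but can drift, so check /docs on your own install, and the file names plus the b64 helper are just placeholders):

import base64
import requests

def b64(path):
    # Read an image file and return it as a base64 string, as the API expects.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("upscaled.png")],  # the upscaled render with the bad face
    "mask": b64("face_mask.png"),          # white = region to repaint
    "prompt": "same positive prompt as the original render",
    "negative_prompt": "same negative prompt as the original render",
    "resize_mode": 0,                      # "Just resize"
    "mask_blur": 4,
    "inpainting_fill": 1,                  # 1 = "original" masked content
    "inpaint_full_res": True,              # "Only masked" inpaint area
    "inpaint_full_res_padding": 64,        # padding around the mask, in pixels
    "width": 512,
    "height": 512,
    "sampler_name": "Euler a",
    "steps": 60,
    "cfg_scale": 7,
    "denoising_strength": 0.3,
    "restore_faces": True,
    "seed": -1,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
# The API returns base64-encoded images; save the first result.
with open("fixed_face.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
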
@arunuday8814 · a year ago
Hi, I couldn't understand how you linked the inpainting to a specific custom model. Can you pls help explain? Thx a ton!
@MonkeyDIvan · 11 months ago
Amazing stuff! Thank you!
@tristanwheeler2300 · 7 months ago
oldie but goldie
@roseydeep4896 · a year ago
Is there an extension that could do this automatically right after generating an image? (I want to use this for videos, so the frames need to come out good right away.)
@JJ-vp3bd · 6 months ago
did you find this?
@339Memes · a year ago
Wow, I didn't know you could change the width to do inpainting, thanks
@metasamsara · a year ago
Yes, it's not obvious, especially on mine: for some reason it calls the dimensions section the "resize" feature, and it isn't clear that you can pick a custom dimension when you render with "Only masked".
@s3bl31 · 11 months ago
Doesn't work for me, I don't know what the problem is. In the preview I see a good face, but in the last step it turns back into the bad face and the output is just an even worse over-sharpened face.
@marksanders3662 · 6 months ago
I have the same problem. Have you solved it?
@s3bl31 · 6 months ago
@marksanders3662 Are you using an AMD card? If so, I think I fixed it with --no-half in the command line. But I'm not sure, since that was a long time ago and I've switched to Nvidia.
@p_p · a year ago
How did you paste the prompt like that at 0:35?
@BernardMaltais · a year ago
I just dragged in a copy of a previously generated image. The prompt and config info is stored as metadata in each image you create... so you can just drag one onto the prompt field and load it back into the interface that way.
@p_p · a year ago
@BernardMaltais Wait... what?? I've been dragging into the PNG Info tab all this time for nothing, lmao. Thank you!
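
To see that metadata without even opening the webui, here is a small sketch that reads it with Pillow (assuming A1111's usual behaviour of storing the settings in a PNG text chunk called "parameters"; the filename below is hypothetical):

from PIL import Image

img = Image.open("00001-1234567890.png")  # hypothetical A1111 output file
# PNG text chunks are exposed via the .text mapping on PIL's PngImageFile.
params = img.text.get("parameters") if hasattr(img, "text") else None
print(params or "No embedded parameters found (metadata may have been stripped).")
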
@sarpsomer · a year ago
Another great step-by-step tutorial from you. Can someone explain what the "Only masked padding, pixels = 32" value is for?
@stonebronson5 · a year ago
As I understand it, it's the area around the masked region that gets taken into account when the new image is generated. If you set it higher it will try to blend in better; if you set it lower, it will make more drastic changes. The padding value only matters when "Inpaint area" is set to "Only masked", since in "Whole picture" mode the padding effectively expands to the whole canvas.
@sarpsomer · a year ago
@stonebronson5 This is so helpful. It's similar to padding in design terminology, e.g. CSS padding. Never thought about it that way.
@kneecaps2000 · a year ago
Yeah it's also called "feathering" ...just a gradient on the edge to avoid it looking cut and pasted in.
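
To make the padding idea concrete, here is a conceptual sketch (not Automatic1111's actual code) of what "Only masked" plus padding amounts to; the filenames, the masked_crop_box helper, and the 512x512 target are assumptions:

from PIL import Image

def masked_crop_box(mask, padding=32):
    # Bounding box of the white (non-zero) mask area, grown by the padding
    # and clamped to the image borders. Assumes the mask is not empty.
    left, top, right, bottom = mask.getbbox()
    return (max(left - padding, 0),
            max(top - padding, 0),
            min(right + padding, mask.width),
            min(bottom + padding, mask.height))

mask = Image.open("face_mask.png").convert("L")
box = masked_crop_box(mask, padding=64)
crop = Image.open("upscaled.png").crop(box).resize((512, 512))
# The diffusion model repaints `crop` at full 512x512 resolution here,
# then the result is scaled back to the size of `box` and pasted into
# the original image, blended along the mask-blur edge.
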
@Sandel99456 · a year ago
The best informative tutorial 👌
@hatuey6326 · a year ago
Magnificent - saved to my favorites! This is going to improve my workflow so much! Thank you!
@Sandel99456 · a year ago
Is there kohya documentation for the settings and what they do?
@syno3608 · a year ago
Can we replace the face with the face from another LoRA?
@androidgamerxc · a year ago
How do you have the SD VAE and hypernetwork selectors?
@testales · a year ago
Ok, now we only need a way to do this with hands in let's say under 100 attempts. ;-)
@subashchandra9557 · a year ago
You can use ControlNet for that.
@testales · a year ago
@subashchandra9557 Two weeks ago, when I wrote the comment, this wasn't common knowledge yet. ;-) Also, having to create a fitting depth map can still be somewhat labor-intensive.
@progeman · a year ago
When I try this, exactly the same as you showed, it tries to paint the whole prompt into that small face area. It doesn't work for me - could it be the model I use?
@progeman · a year ago
Correction: I needed to mask a little bit more of the face, then it worked.
@TutorialesGeekReal · a year ago
How did you fix this? Every time I've tried, it always draws the whole prompt in that small area.
@progeman · a year ago
@TutorialesGeekReal Try lowering the CFG scale; I set it to something like 0.4.
@toptalkstamil5435 · a year ago
Installed, everything works, thanks!
@mufeedco · a year ago
Thank you, great explanation.
@yogxoth1959 · a year ago
Thanks a lot!
@goldenboy3627 · a year ago
can this be used to fix hands?
@Which-Way-Out · a year ago
Yes
@goldenboy3627 · a year ago
@@Which-Way-Out thanks so much
@gv-art15 · 3 months ago
Thanks a lot
@syno3608 · a year ago
Thank you so much.
@quantumevolution4502 · a year ago
Thank you
@RikkTheGaijin · a year ago
thank you!
@MarcioSilva-vf5wk · a year ago
The Detection Detailer extension does this automatically.
@BakerChann · a year ago
How does it work? I found it to download, but I'm unsure where to put it or how to activate it.