Finally, someone who actually explains this feature properly. Thanks man, great video!
@ExplorewithZac 18 days ago
Wow, finally someone makes this clear... I have been searching all day for a source of truth on how this process works in terms of encoding and decoding. So the processes are essentially the same, except that the Masked Only option basically applies a crop and scale, then reverses that crop and scale when it's finished. Whole Picture: benefit of context (because it can encode the entire image). Masked Only: benefit of detail (because it aligns the scale with what these models expect).
@theubie 18 days ago
Yup, that's pretty spot on.
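That crop-and-scale round trip can be sketched in a few lines. This is a rough, hypothetical sketch of the geometry only, not A1111's actual code; the function names are made up, the 512px target is an assumption, and `padding` stands in for the "Only masked padding, pixels" setting:

```python
def padded_crop_box(mask_bbox, padding, image_size):
    """Expand the mask's bounding box by the padding, clamped to the image."""
    left, top, right, bottom = mask_bbox
    w, h = image_size
    return (max(0, left - padding), max(0, top - padding),
            min(w, right + padding), min(h, bottom + padding))

def only_masked_roundtrip(mask_bbox, padding, image_size, target=512):
    """Crop around the mask, scale up to the model's working size,
    then (conceptually) generate, scale back down, and paste in place."""
    box = padded_crop_box(mask_bbox, padding, image_size)
    crop_w, crop_h = box[2] - box[0], box[3] - box[1]
    scale = target / max(crop_w, crop_h)          # upscale so the crop fills the model's canvas
    gen_size = (round(crop_w * scale), round(crop_h * scale))
    return box, gen_size
```

This is why Only masked gets more detail: a small face region is generated at the full resolution the model expects, then shrunk back into the original image.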
@frankmuller4999 1 year ago
Started with Stable Diffusion a few days ago. I am getting so much more from the smaller accounts. Thanks man.
@flack3 1 year ago
FINALLY! It took you only 5 minutes to explain something that seems so obscure everywhere online lol. Thank you!
@theubie 1 year ago
This is a topic that was coming up constantly in the /r/stablediffusion discord. Glad the video I made helped.
@kurtlindner 1 year ago
Damn, this was an amazing and clear explanation. Thank you!
@voEovove 1 year ago
Best explanation for inpainting faces that I have been able to find! Excellent tutorial. Thank you! Looking forward to seeing more from you.
@theubie 1 year ago
Glad it was helpful!
@mulfunction 1 year ago
This is really helpful and easy to understand, thank you!
@DragonstoneWolfe 1 year ago
This makes inpainting so much less scary, thank you so much for this tutorial. ❤
@Hasblock 1 year ago
Crazy how you were able to explain this concept so simply. No one else out there is doing that. I hope in a year your channel has blown up, because you deserve it.
@AndyHTu 1 year ago
Bar none the best explanation of Only masked padding on the internet, even after 7 months. I was trying to figure out what it does all day! Thank you for doing this. I credited you as my source on my page :)
@thelbronius 1 year ago
Excellent tutorial! I appreciate the comprehensive and concise explanation of something that had me blindly guessing.
@theubie 1 year ago
Great to hear!
@ozordiprince9405 1 year ago
Thanks for making this. Great tutorial!
@amoghvaishampayan3236 1 year ago
Super useful explanation, thank you!
@trizais 1 year ago
Man, you are the first one to explain that stuff in simple words. THANKS!!!!
@sporospor 1 year ago
Please make a complete Auto1111 guide, mate. You're great at explaining this \o/🤩
@theubie 1 year ago
I've been brainstorming how to pull that off. I like to keep the videos short and to the point. I'm thinking I might do a series of videos that try to be a "complete guide".
@interestedinstuff 1 year ago
Thanks for that. Good video, nice and clear. The demos of how the changes in the UI affect the image are very useful. Keep up the good work.
@MikkoHaavisto1 2 years ago
Really informative, keep it up! Your video was the first one recommended for the term "only masked padding pixels".
@theubie 1 year ago
Thanks!
@9186737467 1 year ago
Perfect explanation. Much appreciated!
@mrhmakes 1 year ago
Thanks for explaining this feature of SD so clearly! I feel less of an idiot now :P
@Nutronic 1 year ago
I wish someone like you would do a tutorial on how to train a model on their own face, showing what kind of pics to use for the best results. The last tutorial I followed for doing this on 2.1 was a 1.5 tutorial, and the Google Colabs all have different options now that nobody seems to explain; that model was trash. This could be down to the options I chose being incorrect, or my images. I don't know.
@Rasukix 1 year ago
Actual legend for this explanation
@F41nt13 1 year ago
Thank you very much. It was so hard to find this info, and from trial and error it wasn't obvious to me which option does what.
@roobertmaxity 1 year ago
Great video! Very precise and a good explanation!
@theubie 1 year ago
Thank you!
@ginokrol 1 year ago
This really helped me a lot, thank you!
@theubie 1 year ago
Glad it helped!
@afoo 1 year ago
Very helpful, many thanks!
@CCoburn3 1 year ago
Very useful. Thanks.
@theubie 1 year ago
Glad to hear that!
@KidCoyle 1 year ago
Thank you.
@AndreyJulpa 3 months ago
Nice tutorial, thanks man! Question: how do you add your own masks from Photoshop instead of drawing them?
@theubie 3 months ago
If you're asking how you can take something from Photoshop and add it to something in SD, I haven't found a good workflow for that inside of SD. There are methods for face swapping inside of Photoshop that would (and have) worked. If you're starting with something from Photoshop and want to do the opposite of what I've done here (take an image from Photoshop or somewhere else and change the person's body and background with SD), you would use inpainting exactly like I showed here, only you would inpaint everything outside of your mask, leaving the face and whatever else you've got masked untouched and regenerating the rest. Although these days, if you have the current version of Photoshop, I'd just use Firefly (Adobe's model) to generate directly in Photoshop and skip SD completely, unless I needed a specific SD model.
@joywritr 2 years ago
This was helpful, thank you very much. Do you happen to know what the Inpaint Sketch tab can do that the normal Inpaint one cannot?
@theubie 2 years ago
The inpaint sketch tab works like the inpaint tab in that it will create a mask and then generate a new image in that mask, but it will use whatever color you draw the mask in for its initial noise. An example use would be if you drew a mask over a shirt in inpaint and asked for a green shirt: you may or may not get a green shirt. If you do the same in inpaint sketch and use a green color for the mask, you're starting with that color and thus are more likely to get the green shirt you're wanting.
@joywritr 2 years ago
@theubie Thank you for explaining. That was pretty much what I thought it did, but I didn't know if there were additional features I was unaware of. Take care!
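The color-seeding idea described above can be made concrete with a tiny per-pixel sketch. This is a hypothetical illustration, not webui code; the function name and the simple alpha blend are my own assumptions:

```python
def seed_sketch_color(pixel, sketch_rgb, mask_alpha):
    """Blend the drawn sketch color into an original pixel under the mask.

    The blended result is what img2img starts denoising from, which is
    why a green scribble nudges the generation toward a green shirt.
    """
    return tuple(round(p * (1 - mask_alpha) + s * mask_alpha)
                 for p, s in zip(pixel, sketch_rgb))
```

With `mask_alpha` at 1.0 the masked pixels start as exactly the drawn color; lower values keep more of the original shirt underneath.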
@go-usa 1 year ago
Cool, but how did you train a model with your own images of your face? A tutorial about that would be dope. Thanks a lot and keep rocking, cheers
@theubie 1 year ago
That is another area that is changing rapidly. Almost as soon as you make a video for that, things change. I'll look at the method I used (via a Google Colab) and see if I can make a video for it.
@thefulcrum 1 year ago
In the meantime, Olivio S has a pretty good tutorial: kzbin.info/www/bejne/b3_YZqeLoZeth9k Maybe TheUbie and Olivio could team up. They are both great at explaining!
@GabrielEugenio87 1 year ago
Hello, how do you get that extra tab at the top (Inpainting conditioning mask strength)? Thanks
@theubie 1 year ago
It is in Settings, under User interface. There is a text box labeled "Quicksettings list". Add inpainting_mask_weight to it, separating entries with commas, i.e. "sd_model_checkpoint, inpainting_mask_weight".
@GabrielEugenio87 1 year ago
@theubie Thank you so much. Cool videos, by the way.
@arunuday8814 1 year ago
Hi, how do you get the inpainting to use a specific trained model? Can you pls explain? Thx
@theubie 1 year ago
What I meant in this case was that I trained a model using pictures of myself. You can use any model for inpainting, although some models are trained specifically for inpainting. The model I was using was not trained specifically for inpainting.
@arunuday8814 1 year ago
@theubie So does webUI allow one to select the specific model while inpainting?
@theubie 1 year ago
@arunuday8814 Yes! It uses whatever model you have selected at the top in the dropdown box. So you can change the model you have selected there, and it will be the model used when doing inpainting. This also means you can generate an image with one model, then change to another model to do the inpainting.
@arunuday8814 1 year ago
@theubie Thanks so much TheUbie. Sincerely appreciate it
@theternal 2 years ago
Did you merge the sd-v1-5-inpainting model with your trained model to get better inpainting results?
@theubie 2 years ago
No, the model used for the base is not a specific inpainting model. That's actually where using the Only masked feature comes into play. Non-inpainting models will produce acceptable inpainting results at higher resolutions, although it won't be as easy to get good seamless results as it is with a model trained specifically for inpainting. So, the results you see would be even better if I did use one. Normally, I would switch to an inpainting model if the subject isn't something that's part of a fine-tuned model.
@Danny2k34 2 years ago
I like the way you explained it and showed live previews as you were explaining. I'd love a similar video about the masked content options, because I can't for the life of me figure them out. From my understanding: Fill basically replaces something and fills it with whatever you have in the prompt. Original keeps whatever is there but changes it into something different while respecting the pixels in the original photo. So, for example, if you have a shirt on and you wanted to remove it, using Original and prompting "ripped body" or something wouldn't work. In this particular situation, would you use Fill or one of the other options? Latent noise/nothing, I have no idea.
@theubie 2 years ago
You're asking about Fill | Original | Latent Noise | Latent Nothing? Those refer to what the masked area is filled with before Stable Diffusion begins its generation. In a nutshell, inpainting is the same as img2img, only instead of the entire image, it's just the masked area. Those options tell Stable Diffusion what to start with when you generate.

Original is exactly the same as img2img. It uses the image exactly as it is under the masked area. This is usually best used when making small changes or trying to add detail to an image.

Latent Noise is exactly the same as txt2img. It uses a completely random noise map generated using the seed. This is a good one to select when you're trying to completely change the composition under the mask.

Fill attempts to fill the masked area with colors from the edges the mask touches, and is best suited for when you're trying to use inpainting to remove something.

Latent Nothing is basically the same as drawing a selection on an image in Photoshop and hitting delete. It turns the pixels in the masked area into empty data. I honestly don't have any good use cases for this one, but it could be useful for someone, I guess.

I'll put this on my list of videos to look into making.
@CrixusTheUndefeatedGaul 1 year ago
@theubie I've used Latent Nothing to inpaint objects into an image that weren't in the original prompt. Works pretty well so long as the shape of the mask roughly resembles whatever you're trying to inpaint.
@thefulcrum 1 year ago
@theubie Thanks for your generous descriptions and explanations. A1111 also has some tooltips for these options now.
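The four masked-content modes discussed above boil down to how each pixel under the mask is initialized before denoising. Here is a conceptual sketch in pixel space; the real webui works in latent space, and the function name and details here are illustrative assumptions, not the actual implementation:

```python
import random

def init_masked_pixel(mode, original_rgb, edge_rgb, seed=0):
    """Conceptual starting value for a pixel under the mask, per mode."""
    if mode == "original":        # keep the pixel; behaves like img2img
        return original_rgb
    if mode == "fill":            # borrow color from the mask's edges; good for object removal
        return edge_rgb
    if mode == "latent noise":    # seeded random noise; behaves like txt2img
        rng = random.Random(seed)
        return tuple(rng.randrange(256) for _ in range(3))
    if mode == "latent nothing":  # empty data; like deleting a selection in Photoshop
        return (0, 0, 0)
    raise ValueError(f"unknown mode: {mode}")
```

Picking the mode is then just picking which starting canvas gives the sampler the best head start for your edit.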
@ProdByGhost 1 year ago
thanks
@theubie 1 year ago
You're welcome!
@jocke8277 1 year ago
Is this a fine-tuned version of SD, since you can reference yourself like that in the prompt? It basically looks like Dreambooth with inpainting, but I didn't know that was possible.
@theubie 1 year ago
I was using a model trained on a dozen or so images of my face using Dreambooth. SD 1.5 was the base model it was trained against.
@jocke8277 1 year ago
@theubie Cool! Did it work well for doing stuff like face/head swapping on images? Feels like it should be possible to do.
@bunnystrasse 1 year ago
Thanks bro
@therookiesplaybook 1 year ago
What video card do you have to be getting images back that fast?
@theubie 1 year ago
I've got a 2060 Super. Not exactly the most powerful card you'll see being used, but it does the job well enough.
@therookiesplaybook 1 year ago
@theubie Thanks. Could you be more specific on the brand, etc.? Thanks.
@theubie 1 year ago
@therookiesplaybook Sure. It's an EVGA RTX 2060 Super with 8GB of GDDR6. I think new they run about $500 right now.
@therookiesplaybook 1 year ago
@theubie How big can you go on your images? I do anything over 1000 x 1000 and it runs out of memory. Is that a video card issue or something else?
@theubie 1 year ago
@therookiesplaybook I rarely generate the initial images over 768x768. After I work an image in both SD and Photoshop to where I want it, I use upscaling to push it into the 2K to 4K range. SD Ultimate Upscaler is wonderful for that.
@generalawareness101 1 year ago
I have never been able to get inpaint sketch to work; I get this error most times: ValueError: Coordinate 'right' is less than 'left'. I even disabled all extensions and watched a vid, but I have always had this error with it.
@theubie 1 year ago
I honestly do not use inpaint sketch, as it doesn't fit in with any of my workflows. I hear that if you are using DuckDuckGo Privacy Essentials, it will break inpainting: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1044 I would look at your browser extensions as possible problems.
@generalawareness101 1 year ago
@theubie I don't use DuckDuckGo for anything, and I have absolutely no extensions in the browser I use solely for automatic1111.
@theubie 1 year ago
@generalawareness101 Not sure then. There are multiple issues, both open and closed, with that error on the GitHub. Most of the solutions are privacy extensions causing issues, but I didn't read all of them; I just did a quick skim.
@generalawareness101 1 year ago
@theubie Yeah, and a friend of mine I asked said he tried it once and it made his computer overheat and the fans all revved up, so he hasn't touched it since.
@joelsjunk239 1 year ago
What GPU are you using?
@theubie 1 year ago
RTX 2060 Super with 8GB of VRAM
@inyeung6481 1 year ago
I still don't quite understand the difference between the Inpaint area options, Whole picture and Only masked. Which one should I use if I am trying to replace the background of a product image?
@christophhaas5696 1 year ago
If you use Whole picture, it will reproduce the whole picture, changing only your selected area. With Only masked it will focus only on the masked area, so it basically has a higher resolution for a face or something like that, since it generates a full-resolution face and puts it in. In the end it's quite similar, but Only masked is often more detailed, while Whole picture blends in more smoothly with the rest of the image. For a product image, try using Whole picture, because you want the background to blend in better. For mask mode, use Inpaint not masked and mask only the product you don't want to change. It's like a reverse mask. Hope that helps ✨
@interestedinstuff 1 year ago
@christophhaas5696 When I do Only masked I end up with a mini picture in my masked area that has very little to do with the rest of the image. I'm clearly not selecting something right.
@christophhaas5696 1 year ago
@interestedinstuff That's what I'm saying. Use Whole picture to let it blend in better. You can also try making the mask blur higher, or making the padding pixels higher when using Only masked. Blur helps make a better seam, and padding pixels uses more pixels of the image to create the new 'image' for the masked area. I'm here if you need more help 👋
@interestedinstuff 1 year ago
@christophhaas5696 Gotcha. Thanks for the help. Working now. Yay.
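The mask blur advice in this thread can be pictured as a feathering weight: near the mask boundary, the final image cross-fades between the original and the generated pixels, and a larger blur widens that cross-fade. A hypothetical linear sketch (webui's actual blur is a Gaussian applied to the mask, and the name here is made up):

```python
def feather_weight(dist_inside_mask, mask_blur):
    """Weight given to the generated pixel near the mask boundary.

    dist_inside_mask: signed distance from the mask edge in pixels
    (positive = inside the mask). Returns 0.0 (keep original pixel)
    to 1.0 (take generated pixel); mask_blur widens the transition.
    """
    if mask_blur <= 0:                      # hard edge: all-or-nothing
        return 1.0 if dist_inside_mask >= 0 else 0.0
    t = 0.5 + dist_inside_mask / (2 * mask_blur)
    return min(1.0, max(0.0, t))
```

A wider feather hides the seam but also lets the generation bleed slightly outside the painted mask, which is the trade-off behind raising mask blur.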
@m.kislov 1 year ago
This scheme only works for small changes, when you aren't trying to get any particularly specific result. But you'll run into big problems if, for example, you want to remove a beard from a person's face: up to a strength of ~0.59 almost nothing changes at all (if you use Original rather than Fill), and above that it fairly quickly starts producing all kinds of nonsense, or, at best, a clean-shaven face in a completely different skin tone.
@theubie 1 year ago
Sorry, not my language, but Google Translate says you are talking about it not working for specific results. While you are partially correct, the video itself was not about getting a specific result, but rather an explanation of what each of the settings is for. Also, there was a specific result I was trying to get: the face that was created originally was a generic face, and the face created via inpainting was actually MY face from a trained model. You can get specific results at those scales, but it takes knowledge of prompting and how to deal with the settings. However, those are all out of the scope of the video itself.
@m.kislov 1 year ago
@theubie Google Translate is the best! :) I understand you. I was pointing this out for anyone who might have been searching specifically for particular results. If you have that information, I'd love to see something about getting specific results. For example: removing/adding a beard, changing the skin tone of a face (without changing the shape of the mouth, nose, eyes, and so on), changing clothes, etc. Especially relevant if you're trying to build a composition from scratch.
@PkKingSlaya 1 year ago
Jesus Christ, thanks for this
@researchandbuild1751 1 year ago
Inpainting pisses me off so much, because it never works like people show
@aegisgfx 1 year ago
So I'm confused: under what circumstances would you ever mask something and then generate the entire image? Wouldn't you always want to generate the masked area only?
@theubie 1 year ago
Whole image is the old default behavior. I can't personally suggest any really great use cases for it, although I'm sure some people probably have reasons for it.