Best Practice Workflow for Automatic 1111 - Stable Diffusion

  253,807 views

AIKnowledge2Go

Comments: 466
@AIKnowledge2Go
@AIKnowledge2Go 2 ай бұрын
Newer Version of this Video: kzbin.info/www/bejne/raqng3uIqq2Vd7c
@NotY0urHeroo
@NotY0urHeroo Жыл бұрын
When you select "only masked" and then set the resolution lower it's only using the lower resolution for the inpainted area. The reason the image overall still looks good is because it's the same resolution. Using a higher or lower resolution while inpainting a mask doesn't have any impact on anything other than the inpainted area. Using this you can actually get more detail in your inpainted area by maintaining the high resolution (either with 1x if using rescale-by or by manually typing a higher resolution) it will generate for example the face at the selected full-size resolution and then shrink the inpaint down to fit inside the overall resolution of the image.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your insightful comment! You're absolutely right about how the 'only masked' function and resolution settings work in Automatic 1111. I must admit, there was a misunderstanding on my part regarding this. Your explanation is spot on and it's very helpful to me and, I'm sure, to other viewers as well. I will make sure to address this in a future video to correct this misunderstanding and to further enhance the learning experience for everyone. I truly appreciate your input and contribution to our community.
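To make the behavior described above concrete, here is a minimal, hedged sketch of an "only masked" inpainting call driven through the Automatic 1111 API, assuming the web UI was started with the --api flag. The endpoint and field names follow recent A1111 builds (verify against your instance's /docs page), and the file names, prompt, and values are placeholders, not settings taken from the video.

```python
# Sketch: inpaint only the masked region, letting A1111 render the masked crop
# at the requested working resolution and shrink it back into the full image.
# Assumes a local A1111 instance started with --api; field names per recent builds.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("portrait.png")],   # placeholder file
    "mask": b64("face_mask.png"),            # white = area to repaint (placeholder)
    "prompt": "detailed face of a woman, cinematic lighting",
    "negative_prompt": "blurry, deformed",
    "denoising_strength": 0.4,
    "inpaint_full_res": True,                # "Only masked" in the UI
    "inpaint_full_res_padding": 32,          # context pixels kept around the mask
    "width": 768,                            # working resolution for the masked crop
    "height": 768,
    "steps": 30,
    "sampler_name": "DPM++ 2M Karras",
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```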
@freakdeer2486
@freakdeer2486 Жыл бұрын
@@AIKnowledge2Go Was that response generated by Chat-GPT? Because it looks like it hahaha
@SciFiMangaGamesAnime
@SciFiMangaGamesAnime Жыл бұрын
@@freakdeer2486 100%
@metasamsara
@metasamsara Жыл бұрын
How do you make sure that you don't go too high in resolution and clash with the larger picture in amount of details? Is there a way to calculate the actual size of the selection? I'd be curious to use x/y/z plots for the whole process to save a lot of time.
@heybillpack
@heybillpack Ай бұрын
But wouldn't the resolution make a difference to what result you get? Generating a 1024x1024 image gives you a different image compared to 512x512. So if he wanted to get something close to original as far as general image properties, it would make sense to do the inpainting at that original resolution. Unless it really doesn't matter in this particular case...
@Gabriepe
@Gabriepe Жыл бұрын
Finally someone was able to provide very descriptive and helpful tips, thank you
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm really glad to hear that you found the tips helpful! I strive to make my content as clear and informative as possible, so it's fantastic to know it's hitting the mark. Thanks for the kind words!
@metasamsara
@metasamsara Жыл бұрын
This is by far the most useful tutorial on stable diffusion I've run into!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much! I'm glad you found it helpful. Happy diffusing! 🚀
@kokamoo
@kokamoo Жыл бұрын
If you want to dump more detail use controlnet tile + ultimate SD upscale
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your suggestion! The combination of controlnet tile and ultimate SD upscale is indeed a powerful technique for getting more detail. I have plans to cover this and other advanced techniques in an upcoming part II tutorial. Stay tuned for that, and I appreciate your input!
@lelenny7725
@lelenny7725 3 ай бұрын
I got into stable diffusion last year and i had a lot of fun figuring out how everything works by myself. I'm pretty much self taught so i knew there would be some features i didn't know about so i thought i would finally look up how other people do their generations. Boy did i need to hear about the inpainting function. I knew what it was supposed to do but i could never figure out the specifics of how to get it to work. This is a major game changer for me! Thank you so much! I love the german accent by the way. You sound like a very friendly person.
@AIKnowledge2Go
@AIKnowledge2Go 3 ай бұрын
I'm so glad to hear that the inpainting function has made such a difference for you! It's always rewarding to discover new features that enhance our creative process. Keep experimenting! Thanks for the remarks on my accent. 😊
@joker3dx
@joker3dx Жыл бұрын
This is easily the best tutorial I have seen on AI. Subscribed.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Wow, thank you so much for the high praise and for subscribing! I'm thrilled to hear that you found the tutorial to be so valuable. Your support means a lot, and it motivates me to keep creating high-quality content. Stay tuned for more AI insights and tutorials!
@eikku5725
@eikku5725 4 ай бұрын
Man... Your tutorial is so good, much better than any I saw before, danke!
@AIKnowledge2Go
@AIKnowledge2Go 4 ай бұрын
I'm really glad you found the tutorial helpful! Your support means a lot to me.
@eikku5725
@eikku5725 4 ай бұрын
@@AIKnowledge2Go Well deserved ;)
@wholeness
@wholeness Жыл бұрын
What is the best computer to use for Automatic?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
The ideal computer for running Automatic1111 largely depends on your budget. If cost is not a concern, I would suggest going for a system with an Nvidia RTX 4090 graphics card and an Intel I7-9700K processor or faster. However, keep in mind that when it comes to running Automatic1111, the GPU and VRAM are significantly more important than the CPU.
@SpacenSpooks
@SpacenSpooks Жыл бұрын
How much was your computer? lol I mean, I'm utterly jealous by how fast yours works, and I don't know where to even start looking for good brands @@AIKnowledge2Go
@thaafire2503
@thaafire2503 Жыл бұрын
The img2img upscale workflow is amazing. It completely saved an image that i would have thought was complete garbage!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
That's fantastic to hear! The AI-powered img2img upscale truly is a game-changer for revitalizing images. I'm thrilled it helped you save an image. Thanks for watching and sharing your success!
@GodrykPL
@GodrykPL 4 ай бұрын
This video of yours has given me whole new level of understanding of what I am supposed to do, thank you!
@AIKnowledge2Go
@AIKnowledge2Go 4 ай бұрын
Glad it was helpful! You are welcome
@hamid2688
@hamid2688 Жыл бұрын
That's what I call a quality content tutorial! The tutor knows what he is doing here...
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much for your kind words! I'm really glad to hear that you found the tutorial helpful and of high quality. It's comments like yours that motivate me to keep creating and sharing more content. Stay tuned for more tutorials!
@NarakuNoHana01
@NarakuNoHana01 Жыл бұрын
Hello, could you please tell me how you enabled the bars on the side of your image on 4:16? The ones that let you pull the image up and down and to the sides. I can't find anywhere on how to enable them, for me images just get squeezed smaller and for really wide images that makes it so hard to work with.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Oh, those scroll bars were actually a result of using ctrl + mousewheel to zoom in on my browser. It appears that feature might have been changed or removed in newer versions. 😞 However, I'd recommend the canvas zoom extension for a1111. You can find it under extensions -> available -> load from, or check these direct links. haven't tried it myself but it seems to do what you are looking for: github.com/richrobber2/canvas-zoom Hope this helps!
@FamilyAndKids-sk5yo
@FamilyAndKids-sk5yo 11 ай бұрын
Been doing this for a while (still a noob of course), but this is by far the most useful info I've seen. thanks!
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Glad to hear it! Happy creating.
@thexophros
@thexophros 9 ай бұрын
This video has some serious early 2010 Movie Maker vibes - you just know it's gonna be good and helpful! And bam! I wasn't disappointed - super precise & to the point. A complete workflow highlighting each step, thanks so much 👍 The German accent is just the cherry on top 🍒
@AIKnowledge2Go
@AIKnowledge2Go 9 ай бұрын
Thank you for the fantastic feedback! Thrilled to hear the video hit the mark with my content. Your appreciation, especially for the German accent, brings a huge smile to my face! 🍒 Exciting news: the next video will also focus on a workflow, this time harnessing the power of ControlNet. Your support inspires me to keep creating. Don't miss out-thanks for being an awesome part of our community!
@thexophros
@thexophros 9 ай бұрын
@@AIKnowledge2Go good to hear that I could put a smile on your face 😋 I’m totally excited for the next video as well! ControlNet is so far the one thing I haven't touched, but would love to know more about and slowly master.
@silvertides
@silvertides Жыл бұрын
I dont have a resize by option. When i go to inpaint and img2img I just have another width and height chart.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
To see the 'resize by' option, you may need to update your version of Automatic 1111. The latest version as of today is 1.3.2. I've confirmed that this feature isn't related to any extensions, as I still had access to it even after disabling all of my extensions. Please try updating your version.
@mrgraysky9343
@mrgraysky9343 Жыл бұрын
with the same settings and just some changes to the negative prompt I was able to get much better faces, limbs, and less disfigured limbs with initial generation making the later inpainting much quicker.
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
You've made a great observation! This video primarily showcases this particular workflow. However, it's worth noting that when your prompts become more complex, or you're using multiple 'loras' as is often the case for creating stunning art in SD 1.5, the quality of faces tends to diminish. That’s exactly where this workflow proves to be incredibly useful.
@mkuipers4359
@mkuipers4359 10 ай бұрын
Good video and explanation! I'm just a bit confused about one thing, during inpainting is it really necessary to have an accurate prompt, for what you want specifically in that area? And what happens when you leave it blank, does it just try to "autofill"?
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Hey there! Your prompt can guide the AI for better inpainting, but feel free to experiment and see what surprises the blank canvas brings! What I didn't do in my video is change the prompt; I actually should have. Leaving it blank can work, but I suggest you use ControlNet inpaint.
@LexChan
@LexChan 9 ай бұрын
The inpainting part on the legs gives me random weird, deformed output. Can we somehow get greater control over what it renders?
@AIKnowledge2Go
@AIKnowledge2Go 8 ай бұрын
To have more control I suggest you use controlNet. I have a newer version of this video right here: kzbin.info/www/bejne/raqng3uIqq2Vd7c
@baraka99
@baraka99 Жыл бұрын
More video please.... I learned so much from this video.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your feedback! I'm glad to hear that you found the video informative and valuable. The next Video releases in a few minutes actually :) Stay tuned!
@salvadorgonzalez692
@salvadorgonzalez692 8 ай бұрын
I know I'm late to this video but this helped me out big time! Needless to say I'm now subscribed and following your videos.
@AIKnowledge2Go
@AIKnowledge2Go 8 ай бұрын
Thank you for your kind words and subscribing. There's actually an improved version of this workflow you can find in this video kzbin.info/www/bejne/raqng3uIqq2Vd7c it uses control net so it's a little more advanced. Happy creating
@polystormstudio
@polystormstudio 11 ай бұрын
Looking forward to Part 2!
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Thanks so much for your excitement about Part 2! 🌟 Your wait is over: kzbin.info/www/bejne/o6O6nniNer-qetk. Just a heads-up: since Parts I and II have been around for a bit, some of the settings might have changed. I'm planning to update them as soon as I can find a slot in my busy schedule. Stay tuned and happy creating! Your support means a lot! 🚀
@polystormstudio
@polystormstudio 11 ай бұрын
@@AIKnowledge2Go Wow good timing! I just saw the video, thanks for letting me know. I already liked and posted a couple of comments!
@joeskis
@joeskis 10 ай бұрын
so there isn't a way to do inpaint and give it new prompts which it knows only to apply to the masked area?
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Yes there is. Just change the prompt. In fact, that is the reason why I have this little "spider thingy" on her leg: because I did not change the prompt. Happy Creating.
@gerinja
@gerinja Жыл бұрын
I like the robot thingy on her leg. Very cool. Thanks for sharing
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you i am glad you liked it. Stay tuned for more.
@digivagrant
@digivagrant Жыл бұрын
Is there a way to generate character art with transparency. Im thinking of just generating the character and background seperately.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
One approach is to first generate an image without a person and then use inpainting to add the character. Alternatively, you could use a segmentation model in ControlNet to achieve this. If you're looking to remove the background from an existing image with a character, using an external background removal tool would be the way to go.
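For the last option mentioned above (an external background-removal tool), one possible open-source choice is the rembg package (pip install rembg). This is an example pick for illustration only, not a tool the video itself prescribes; the file names are placeholders.

```python
# Strip the background from an existing character render, producing a PNG
# with an alpha channel (transparent background).
from rembg import remove

with open("character.png", "rb") as src, open("character_rgba.png", "wb") as dst:
    dst.write(remove(src.read()))
```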
@hl-co1fz
@hl-co1fz 3 ай бұрын
Hi, is it possible to remove specific objects? Sometimes the picture has only one bad thing about it and I can't remove it.
@AIKnowledge2Go
@AIKnowledge2Go 3 ай бұрын
Hi, yes it is. Maybe you want to use ControlNet for this. Here is a newer version of this video: kzbin.info/www/bejne/raqng3uIqq2Vd7c To remove objects, just write what you want to have instead (the background) as the prompt when inpainting. You'll need to experiment with the denoising strength.
@MorganRG-ej8dj
@MorganRG-ej8dj 10 ай бұрын
That's really insightful. I don't have to throw away good images with small blemishes. Thanks!
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Glad it was helpful!
@Vigar1977
@Vigar1977 11 ай бұрын
Hey, really a very good tutorial, thanks for that. I still have one problem, maybe you can help: when I've masked a specific area in inpaint, let's say the legs, and then hit generate, it doesn't give me new legs; instead it looks like it recreates the complete image inside the masked area. I copied the settings from the video and also experimented with different denoising strengths.
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Thanks for the feedback. If it generates that image (or a similar one) again inside the masked area, your denoising strength is too high. What I don't show in the video (because I didn't know better back then) is that you should/can/must adapt your prompt. If you want to inpaint a face, for example, write something like "image of a face of..."; what you should leave in the prompt is everything related to rendering (HDR, 4K, cinematic shot). Inpainting always involves a bit of trial and error. Hope that helps.
@Vigar1977
@Vigar1977 11 ай бұрын
@@AIKnowledge2Go Thanks for your answer, it was the prompt. I figured that out later myself 😅
@Воды
@Воды 7 ай бұрын
best tutorial for newbies, like it
@AIKnowledge2Go
@AIKnowledge2Go 7 ай бұрын
Thank you for the positive feedback! Happy creating.
@DarkFactory
@DarkFactory Жыл бұрын
Is the workflow same for complex objects, like tentacles? Do you recommend any useful video for making AI tentacle nsfw?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
While I can't provide specific content recommendations for NSFW topics, I suggest checking out Civitai for a wide range of models and resources. They have a diverse collection of AI models that can be used for various purposes, including complex objects. Maybe try combining poses with "complex objects" and experiment with the lora strengh.😉
@cho7official55
@cho7official55 Жыл бұрын
Do you have any sources we can use to dig deeper? A lot of the parameters shown in this video aren't documented anywhere, so I assume part of it is empirical knowledge. Still, if you could link us to a good in-depth tutorial or explanation, it would save me a lot of hard work! Either way, I learned a ton from this video, and it helped me fix issues I've run into many times!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hi there! I'm thrilled to hear that my video was informative and helpful for you. Regarding the settings you're curious about, could you specify which parameters or aspects you're looking into? This will help me guide you better. Also, I have two basic videos on prompting and basic settings on my channel which might be just what you're looking for. I'll drop the links here for easy access: kzbin.info/www/bejne/g5fXg5Sme5l0l7c kzbin.info/www/bejne/iXnMnICBaJ6EaZI Feel free to check these out. I also have a whole tutorial series on Stable Diffusion.
@danishraza8843
@danishraza8843 Жыл бұрын
Hi, Can you explain this please. why do you need to generate first set of images with Euler A? when you can use the DPM++ 2M KARAS in the first place. first set of images generated deformed faces. I do not understand what are the benefits of doing the additional steps? thanks
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hello! I appreciate your curiosity. The choice to start with Euler A isn't mandatory, it's more of a strategic move. Recently, I've been leaning more towards using DPM++ 2M Karras from the get-go. As for the additional steps, it comes down to the limitations and preferences in image resolutions. Stable Diffusion 1.5 typically restricts us to generating images at 768 x 512 or 512 x 768, and occasionally 768 x 768. I like to stretch those boundaries to achieve higher resolutions, like 2K or even 4K. By using the Image2Image upscale method, I feel I have a tighter grip on the upscaling process, compared to the highres fix workflow. Think of it like having a finer zoom lens for the details when you're aiming for that perfect shot in photography. It's about crafting the image with as much precision as possible. I hope this gives you a clearer picture of why those extra steps might be worth the effort!
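As a rough illustration of that two-stage workflow (draft with Euler a, then re-render larger in Image2Image with DPM++ 2M Karras), here is a hedged Python sketch against the A1111 API. It assumes the web UI runs locally with the --api flag; the prompt, sizes, and denoising value are placeholders rather than the video's exact settings, and field names should be checked against your version's /docs page.

```python
# Stage 1: explore compositions cheaply with Euler a at 512x768.
# Stage 2: send the keeper to img2img and re-render at 2x with DPM++ 2M Karras,
#          reusing the seed and keeping denoising moderate to preserve composition.
import base64
import json
import requests

BASE = "http://127.0.0.1:7860"
PROMPT = "astronaut woman on an alien planet, cinematic, highly detailed"
NEGATIVE = "deformed, lowres, bad anatomy"

draft = requests.post(f"{BASE}/sdapi/v1/txt2img", json={
    "prompt": PROMPT,
    "negative_prompt": NEGATIVE,
    "sampler_name": "Euler a",
    "steps": 25,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    "seed": -1,
}).json()

info = json.loads(draft["info"])   # the response reports the seed actually used
keeper = draft["images"][0]        # base64-encoded PNG

upscaled = requests.post(f"{BASE}/sdapi/v1/img2img", json={
    "init_images": [keeper],
    "prompt": PROMPT,
    "negative_prompt": NEGATIVE,
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "cfg_scale": 7,
    "denoising_strength": 0.5,     # low enough to keep the composition
    "width": 1024,                 # 2x the draft resolution
    "height": 1536,
    "seed": info["seed"],
}).json()

with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(upscaled["images"][0]))
```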
@seddd
@seddd Жыл бұрын
I'm already feeling overwhelmed
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm sorry to hear that you're feeling overwhelmed. I'd suggest starting with my basic tutorial to build a foundation, then gradually progress from there. Here's the link to help you get started: kzbin.info/www/bejne/iXnMnICBaJ6EaZI.
@damianlazo5760
@damianlazo5760 3 ай бұрын
What is the skip clip for?
@AIKnowledge2Go
@AIKnowledge2Go 3 ай бұрын
Clip skip controls how many of the final layers of the CLIP text encoder are skipped when your prompt is encoded: a value of 1 uses the full encoder, while 2 stops one layer earlier, which many anime-style models were trained to expect. Changing it leads to variations in the generated images, potentially yielding results that are more aligned with your creative intent. Hope that helps.
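For reference, clip skip can also be set per request instead of globally. Below is a small hedged sketch via the A1111 API (--api flag assumed); "CLIP_stop_at_last_layers" is the option key used by recent builds, but verify it against your version's /sdapi/v1/options output, and the prompt is a placeholder.

```python
# Render one image with clip skip 2 without permanently changing the UI setting.
import requests

payload = {
    "prompt": "portrait of a cyberpunk girl, anime style",  # placeholder prompt
    "steps": 25,
    "width": 512,
    "height": 768,
    "override_settings": {"CLIP_stop_at_last_layers": 2},   # clip skip = 2
    "override_settings_restore_afterwards": True,
}
requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```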
@Larimuss
@Larimuss 10 күн бұрын
Thanks, great tutorial. Five seconds on how each setting works would be cool too, though I do appreciate the explanations on some of them.
@AIKnowledge2Go
@AIKnowledge2Go 10 күн бұрын
Hi thanks for the feedback. if you need a more basic tutorial watch this video: kzbin.info/www/bejne/bGStn2p5o5yMaq8 and once you are more advanced i suggest this one: kzbin.info/www/bejne/raqng3uIqq2Vd7c&lc=UgzNJLxlQ4IgEaFLjJ54AaABAg Happy creating.
@Mawble
@Mawble Жыл бұрын
Just started getting into stable diffusion and this video completely changes things. Thank you so much!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm glad the video could help! It's amazing how much of a difference understanding stable diffusion can make. Happy creating!
@NineSeptims
@NineSeptims Жыл бұрын
adding keywords for face tend to decrease chances of getting mangled faces you will have to fix later. Try (highly detailed face, textured skin, detailed eyes)
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
That's a great tip, thanks for sharing! However, it's also worth mentioning that there will come a time when you'll want to upscale your image to enhance the overall quality. In that case, you might still need to deal with some mangled features, but hopefully fewer with your suggested keywords.
@teddywilliams2512
@teddywilliams2512 Жыл бұрын
I don't have an option to adjust clip skip... what i am missing? Thanks
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Ha, you might have missed the first minute of my video where I go over the clip skip settings 😂 But no worries! Just go back and check that section.Happy to help!
@teddywilliams2512
@teddywilliams2512 Жыл бұрын
sry! and thank you!@@AIKnowledge2Go
@teddywilliams2512
@teddywilliams2512 Жыл бұрын
oh wow.. i feel so dumb now. Okay in my defense ... I'm just dumb lmao@@AIKnowledge2Go
@Asterex
@Asterex Жыл бұрын
Really great video. It took me a real step forward!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thanks for the feedback, glad it helped.
@Asterex
@Asterex Жыл бұрын
@@AIKnowledge2Go If you have good tips for "hands", I'll gladly take them. 😄
@Asterex
@Asterex Жыл бұрын
@@AIKnowledge2Go YouTube just hid my link... the jerk
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
@@Asterex Try negative embeddings, e.g. civitai.com/models/116230/bad-hands-5 Otherwise, After Detailer also has the option to inpaint hands; I have a video on After Detailer too, although the focus there is on faces.
@Asterex
@Asterex Жыл бұрын
@@AIKnowledge2Go I'll try the negative embedding! Thank you very much!
@titerote71
@titerote71 Жыл бұрын
First of all, a greeting and thanks for the video, a beautiful image. I wanted to ask you why you didn't use the High res fix, from what I understand what it does is the same as you did but saving you a step, that is, starting an imgtoimg process. What is the reason that you advise against its use?
@vutran234_
@vutran234_ Жыл бұрын
High res fix can change many details compared to the original image
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your comment and your kind words about the video. @hoasiai is spot on. The High Res Fix does indeed have the potential to significantly alter the image, as does changing the sampler. My preference is for a straightforward workflow. I start with prompt engineering, and once I'm satisfied with the composition, I move to Image2Image to boost the quality when changing the sampler, without affecting the composition. I hope this clarifies my approach, and thanks again for your question!
@merce414
@merce414 5 ай бұрын
Great content! Thank you so much!
@AIKnowledge2Go
@AIKnowledge2Go 5 ай бұрын
Glad you liked it!
@Taygete1
@Taygete1 10 ай бұрын
This video is a gem, thank you very much!
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Thank you for your kind words, it means a lot to me!
@Aiuniqreate
@Aiuniqreate Жыл бұрын
Hello @AIknowledge2Go, this is a really helpful video. Thank you for coming up with such great content. I have a quick question and would appreciate any suggestions. I've always had trouble with image2image generation; the final result is nowhere close to my input image. For example, if I want to make minor edits to normal human images, like changing clothes color, hairstyle, or hair length, while keeping the rest of the details intact, should I go for the image2image option in SD or use some other method? I am using the AbsoluteReality checkpoint to ensure the pictures are realistic. Any advice / suggestion would be greatly appreciated.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your kind words! If you want to make minor edits to your normal human images while keeping the rest of the details intact, using the inpaint option of Image2Image in Automatic 1111 is a good choice. However, for specific modifications like changing clothes color, hairstyle, or hair length, you may need to experiment with different prompts and parameters to achieve the desired results. In addition to Image2Image, you can also explore the ControlNet model for inpainting, as it can be effective in preserving the overall details while making specific modifications. Remember to adjust your prompt accordingly to focus on the areas you want to edit. It's important to experiment and iterate with different prompts, models, and parameters to achieve the desired outcome.
@Aiuniqreate
@Aiuniqreate Жыл бұрын
@@AIKnowledge2Go Thank you so much for your response . I totally agree , detailed prompt + config details are the key here, which I haven't mastered yet, still learning 🙂. Btw, if you don't mind me asking I would like to know if there is a way I can have a model trained to simply take input and change hairstyle on the same pic as output. Considering I have to work on multiple images, it would be difficult to keep writing prompt on each image to get the desired result. Thank you so much for looking into this 🙂
@amronjayden
@amronjayden Жыл бұрын
I have the problem that my images change even outside the masked area. Of course I pay attention to the CFG scale and denoising strength settings, and also to hitting "clear" when I load a new image or switch tabs, etc. Still, I'm fighting changes in the unmasked area. I've also set the right checkboxes for that, as described here and in other guides. What could I be doing wrong?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hello, I'll try to help remotely as best I can... By "I've also set the right checkboxes for that" you mean the "only masked" box is checked, correct? Which version is installed on the PC? An upgrade might help; the latest version of A1111 is 1.6. Otherwise a fresh installation might help. Hope that helps!
@amronjayden
@amronjayden Жыл бұрын
@@AIKnowledge2Go Thanks a lot for the reply. The new PC arrives next week, so a fresh installation will be necessary then. My onboard graphics with 4 GB is simply too puny. The command prompt also reports something with "fatal git" right when loading, so I'll have to look for a different installer. The one I have asks me what feels like 1000 questions; it's like sitting a multiple-choice programming final. And what I ultimately did, only God and the devil know... 😅
@olejorgensen1964
@olejorgensen1964 Жыл бұрын
Thanks for the video - really great. I just wanted to ask about HiResFix - I know it has its problems - but the composition, when controlled with a low Denoising strength, works ok for me - am i missing something here ?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your comment and insightful question. If HiResFix is working for you with a low denoising strength and you're happy with the compositions it's producing, then there's no issue at all. My experiences and preferences are subjective and not definitive. Personally, I've found DPM ++ 2M Karras to yield amazing results, but it can produce some peculiar compositions when multiple loras are used. This is merely based on my experiences though, and may not be the case for everyone. I've found Euler A to be more consistent for my needs. Sending the image to Image2Image and adjusting the sampler, while maintaining a low denoising strength, allows me to retain the composition. However, when I change the sampler in text2Image and activate HiResFix, it tends to generate a wholly different image. Remember, there's no one-size-fits-all approach in AI, so if you've found a method that works for you, I encourage you to stick with it! I hope this answers your question, and feel free to ask more if anything is unclear.
@olejorgensen1964
@olejorgensen1964 Жыл бұрын
@@AIKnowledge2Go Hi - Thanks for the reply. :-)
@allowedsounds8788
@allowedsounds8788 Жыл бұрын
Just one question, so in Automatic1111, does the refiner model doesn't work for image upscales?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
When you refer to the refiner, you mean SDXL models, right? Actually, I haven't done upscaling with it; my system crashes when I go higher than 1024 x 1024 with SDXL models. I still use SD 1.5 a lot because, as of now, in my opinion you can get better results with 1.5 if you know what you are doing.
@Thomasalbertini
@Thomasalbertini 11 ай бұрын
When I upscale the image giving it 0.5 noise I get a worse image. It just ruins the image with a lot of random variations instead than a coherent image. Any idea why?
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Have you tried creating the same image with a different checkpoint? Try lowering the denoising strength.
@Thomasalbertini
@Thomasalbertini 11 ай бұрын
thank you! that was it. With another checkpoint it worked just fine. Weird how different checkpoints work so differently@@AIKnowledge2Go
@Misaplay
@Misaplay Жыл бұрын
hello , what is the lora cyber on prompt ?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hello, thank you for pointing out the missing information about 'lora cyber'. I apologize for the oversight. I've now updated the description to include it. If you're unsure about how to use loras, I recommend checking out my video on using lorases and models: kzbin.info/www/bejne/f3a8fol_l7WVh7M. I hope you find it helpful!
@DicerX
@DicerX Жыл бұрын
I know this might get lost here, but, thank you, you really made my day, been a bit sad lately with life and everything and I just wanted to learn something new to keep myself occupied. With something. Just to not think about it all and produce art. You made that all possible. So thank you and I hope that you know that you made someone's life better.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm deeply touched by your words, and I'm so grateful to know that my videos have been able to provide you with some comfort and a positive distraction during this time. Life can be challenging, but remember that it's okay to take time for yourself and do something that brings you joy. The beauty of creating art is that it allows us to express ourselves, to lose ourselves in the process, and to make sense of our experiences. Never hesitate to reach out if you have any questions or just want to share your creations. Thank you for being a part of this community, and please take care of yourself.
@joeyi27
@joeyi27 10 ай бұрын
Thank you so much!!! 💎 thiss is reaaaally helpful!
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Awesome! Happy to help you out with my content!
@marschantescorcio1778
@marschantescorcio1778 Жыл бұрын
You're getting a hand and a robot thing in your leg inpainting because it's still part of the original prompt. It's important to trim out portions that aren't applicable to what you're inpainting.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
You're absolutely right. My skills in inpainting have improved since then. I'm considering re-recording this video in the future. Thanks for pointing that out, happy creating.
@francescozappala4453
@francescozappala4453 4 ай бұрын
Awesome video man!! Dank je
@AIKnowledge2Go
@AIKnowledge2Go 4 ай бұрын
Thanks for watching! I'm glad you enjoyed the video!
@KishoYamitori
@KishoYamitori 11 ай бұрын
What setup do you have? because it generates super fast friend
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Great question! It might seem super fast, but that's actually an illusion - I sped up the video for better engagement. The creation you saw was done with my old Nvidia 2080 Super. However, my current setup includes an Nvidia 4080 RTX and an Intel Core™ i9-13900K, which is indeed quite fast. To give you an idea, it can render 8 images at 512 x 768 resolution in about 35 to 40 seconds.
@Faceless1997zNipe
@Faceless1997zNipe 7 ай бұрын
Can you also use SDXL and SD 1.5 models with Automatic?
@AIKnowledge2Go
@AIKnowledge2Go 7 ай бұрын
Yes, without any problems, but I recommend upgrading to the newest version, 1.9.3 (currently).
@Faceless1997zNipe
@Faceless1997zNipe 7 ай бұрын
@@AIKnowledge2Go Thank you. So far I've only used Fooocus, but I think things will get more versatile with Automatic.
@valentinmihai815
@valentinmihai815 Жыл бұрын
Omg you're a life saver!!! I have one issue though, and maybe more people can relate: inpainting does not work for me. I tried giving it prompts, painting one small area, then multiple areas, and restarting the whole webui, but nothing seems to work. I can see the image being rendered nicely and it looks good, but I get the same result in my folders. Does anyone know a fix?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for the kind words! Regarding your issue, I've experienced similar problems when using ControlNet for inpainting in the current version. If you're not using ControlNet, perhaps a fresh installation might help resolve the problem?
@valentinmihai815
@valentinmihai815 Жыл бұрын
@AIKnowledge2Go Hello! I've solved my issue, I don't have them right now but I've had to put some commands in the webui batch file and it works fine. I don't know why, maybe because I have an AMD computer
@wohlertfotografie
@wohlertfotografie Жыл бұрын
Thank you for the helpful video, Chris! It's a great help for getting started! Does this workflow basically also work for photorealistic images (or other things)? With a different model and parameters then, of course, I assume...
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thanks for the feedback! I'm very glad the video helped you. Yes, I use this workflow for all kinds of images, whether anime or photorealistic. Occasionally I use After Detailer specifically for faces. If you're not familiar with it yet, I have a video about it on my channel.
@GryphonDes
@GryphonDes Жыл бұрын
Fantastic and quick video - nicely concise and it'll be very useful!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
i am Glad it was helpful to you!
@ysb321
@ysb321 Жыл бұрын
I am using Adetailer it's amazing !!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Absolutely! Adetailer has been a game-changer. If you haven't seen it yet, check out my video on it; I delve into its benefits and how it can save you a lot of time. Cheers!
@danaxo8097
@danaxo8097 Жыл бұрын
I know it doesn't really matter, but I absolutely love your accent. Very helpful video as well ofc!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm glad you enjoy my accent. I wasn't sure how native speakers would perceive it. Thanks for the feedback!
@ufenloh
@ufenloh Жыл бұрын
Could you also release your videos in German?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thanks for your interest! I actually have a German channel called KIWissen2Go. However, lately I've mainly been working on my English channel since it has grown much faster. I will bring this and other videos to German as well; unfortunately I can't name a time window for it, since KZbin is only a hobby for me and I have a full-time job. Thanks a lot for your understanding.
@ufenloh
@ufenloh Жыл бұрын
@@AIKnowledge2Go I didn't know that! Just re-record the voiceover and upload them. It doesn't have to be funny or anything. Or you could use an AI voice ^^
@Yourghostysoul
@Yourghostysoul Жыл бұрын
When I use a prompt it always generates a realistic human, but I want a 3D or anime style. How can I get it?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Are you using the checkpoint I suggested from CivitAI? It's called Rev Animated: civitai.com/models/7371.
@Yourghostysoul
@Yourghostysoul Жыл бұрын
@@AIKnowledge2Go So if I download and install it, I can just enter a prompt and it will generate 3D/anime art, right? (Sorry, I just picked up AI art.) And also 😅 how can I take, say, a human model and turn it into 3D art?
@red_x_ani
@red_x_ani 11 ай бұрын
What about the seed? Do we have to paste the same seed from txt2img into img2img? Please...
@AIKnowledge2Go
@AIKnowledge2Go 10 ай бұрын
Great question! When you use the 'Send to Image 2 Image' feature, it automatically transfers the seed from txt2img to img2img. No need to paste it manually - it's all taken care of for you! 😊👍 Hope this helps, and happy creating!
@kenkt1990
@kenkt1990 Жыл бұрын
"Before watching your video clip, I kept trying to use 'Hires fix' foolishly, and the result is that the pictures look very, very bad. Thank you very much." and I'm a newbie
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I totally get that! The 'Hires fix' can be quite tempting for many, especially when you're just starting out. I've been there. Glad my video could steer you in a different direction. Happy creating.
@kenkt1990
@kenkt1990 Жыл бұрын
@@AIKnowledge2Go "My English is quite poor, I can only follow what I see. I have to watch your video clip a few times before I can do it, well, a bit slow but anyway I feel lucky that you shared your experience. Once again, thank you very much." Chat GPT Translate :D
@ganzvv3726
@ganzvv3726 Жыл бұрын
Thiss isss the besssst sstable diffusion vid🖤🙏🏾 ssthanku sir
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much for the kind words! I'm glad you found the video helpful. Stay tuned for more content!
@thesolitaryowl
@thesolitaryowl Жыл бұрын
I never considered sending an initial image I like to img2img. I'll have to try that. I normally copy and paste the seed of an image I like and tweak settings from there.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I found it a bit more intuitive compared to the high res fix workflow and in my experience, it often led to better results. But the beauty of these tools is the flexibility they offer. Definitely give it a try and see how it works for you!
@thesolitaryowl
@thesolitaryowl Жыл бұрын
@@AIKnowledge2Go I have been trying this method, and I am amazed how much better this technique is. I get so much more detailed and hi-res results. Thank you for sharing this.
@Luiscameraman
@Luiscameraman Жыл бұрын
Great tutorial. thanks!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Glad you enjoyed it!
@Kekton
@Kekton Жыл бұрын
Thanks man :D The tutorial really helped me; you've definitely got my sub!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Glad to hear that! I'm happy the tutorial could help you. Thanks a lot for subscribing, I really appreciate your support!
@fireleafe1
@fireleafe1 Жыл бұрын
Great tutorials! Thanks to your videos my results have gotten many times better! I just have one problem with inpainting. I did it step by step as you show in your video, but for me almost nothing changes. No matter which setting I change, even if I turn the denoising strength all the way up or set the seed to -1, after generating I get nearly the same result four times. Do you happen to have an idea what could cause that?
@fireleafe1
@fireleafe1 Жыл бұрын
Solved! After adding " --no-half-vae --no-half" to the COMMANDLINE and disabling my browser's ad blocker, it now works.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
That's great to hear! Sometimes it's these little things that make the difference. Have fun creating your projects! 😊
@Zelinity
@Zelinity Жыл бұрын
i see you have ADetailer! installed, but at which stage would you use it in this workflow? img2img or middle section?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I would use it while inpainting. While this image has only one person, sometimes you want to create setups with multiple people. And masking out a face and writing the prompt for the face in After Detailer, or using a Lora/TI/Lycoris, can be helpful. I hope that helps you on your creative journey.
@beantacoai
@beantacoai Жыл бұрын
Thank you for making such awesome videos! Love the German accent. Just liked and subscribed.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your kind words and support! I'm glad you're enjoying the content and my German accent! Stay tuned for more exciting videos.
@MarylandDevin
@MarylandDevin Жыл бұрын
Looks fun! I can't wait to start messing around with this. I just got a new GPU so I could.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
That's great to hear! Getting a new GPU can make a huge difference. Happy creating!
@MarylandDevin
@MarylandDevin Жыл бұрын
@@AIKnowledge2Go thanks it's only a 3060 but 12 gb. My old was a 760 and it would not even try
@midnattsol6207
@midnattsol6207 Жыл бұрын
I had to chuckle when you said she'd need another leg and immediately masked her face :)
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm glad my antics gave you a chuckle! I admit, sometimes my attention wand... - "Oh, look a bird!"😂 Jokes aside, there was some re-recording involved in creating the video. When it came to editing, I figured it would be easier to add some explanatory text rather than reshoot everything again. Thanks for noticing and watching!
@toddwiseman1421
@toddwiseman1421 Жыл бұрын
Took hours to generate the first set of 8 images... My graphics card is "MSI Gaming GeForce RTX 3070 LHR 8GB GDRR6 256-Bit HDMI/DP Nvlink Torx Fan 4 RGB Ampere Architecture OC Graphics Card (RTX 3070 Gaming Z Trio 8G LHR)" Is this normal/expected? Every other setting I think I got to match yours in 1111. Thanks for the great video!
@toddwiseman1421
@toddwiseman1421 Жыл бұрын
Oh I actually had the wrong model loaded. It went faster once I got the right one in, about 10 minutes. Thanks again!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
10 minutes is still very long with an RTX 3070; I needed about 1.5 minutes for 8 images with my old 2080 Super. Do you have xformers installed? In your Stable Diffusion web UI folder, find the webui-user.bat file and open it with a text editor like Notepad. If your "set COMMANDLINE_ARGS=" line doesn't already have it, add --xformers so it looks like this: set COMMANDLINE_ARGS= --xformers. Also, between "set COMMANDLINE_ARGS=" and "call webui.bat", write "git pull" on a new line; this keeps Automatic 1111 up to date. Installing xformers sometimes takes 3 - 4 restarts of the Automatic 1111 server, it's strange. After you save the file, start A1111 via webui-user.bat instead of webui.bat. Hope that helps.
@toddwiseman1421
@toddwiseman1421 Жыл бұрын
@@AIKnowledge2Go I did what you said to install the xformers and I do think it's working a bit faster now, thank you again!
@reficulgr
@reficulgr Жыл бұрын
As a non-german mathematician I find it very confusing to hear germans say "Yoo-ler" instead of "Oy-ler" hahaha. Great video!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Yeah, I took a guess on how to pronounce that. 😂 Looks like I guessed wrong. Thanks for the feedback and for watching!
@darkskyx
@darkskyx Жыл бұрын
I'm still having issues with hands even with this method, do you have any tips? Anything would work for me, like using an specific model. Context: I do semi-realistic and anime images. Edit: I managed to do it with another tutorial.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Ah, hands can indeed be tricky. When working with semi-realistic and anime images, consider using LORAs like badhands, badartis, or Unspeakable Horrors from Civitai. They can specifically help refine those details. Additionally, give ControlNet inpainting a shot, it's a great tool for correcting specific areas like hands. I use inpaint_only + llama often.
@darkskyx
@darkskyx Жыл бұрын
@@AIKnowledge2Go I tried before badhandv4 as a embedding but it is not working most of the time, I've been using inpaint too now that I know but I still have some issues. I guess it won't always work.
@darkskyx
@darkskyx Жыл бұрын
Hi again, I have some news. I watched another tutorial and managed to get it to work the way I wanted. Hands are still hard, but most of the time it works well enough for me; hands are usually fine, and with a little editing I can make them look perfect. Basically there are two important things: 1. Use a model that is good enough at hand generation. 2. Use the ADetailer extension. Optional: also use ControlNet to tell the AI the position of the fingers. Optional: embeddings also help, and you can use "(badhandv4:1.5)" to force it to work harder.
@heilong79
@heilong79 11 ай бұрын
Great job, the angle on the phone thing she is holding is a little off but overall your workflow and tips are appreciated.
@AIKnowledge2Go
@AIKnowledge2Go 11 ай бұрын
Thank you for pointing that out! You're absolutely right there is still room for improvement. Happy creating
@grobknoblin5402
@grobknoblin5402 Жыл бұрын
how did you get that cool cursor?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for noticing! I use a custom Starcraft 2 mouse cursor that I absolutely love. You can get it yourself from this link: vsthemes.org/en/cursors/43974-startcraft-2.html. Enjoy your new cursor!
@Mo-cl8wb
@Mo-cl8wb Жыл бұрын
Getting the User interface settings doesnt work for me. When i reload or Restart the UI they always vanish.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm sorry to hear you're having trouble with the User Interface settings. Could you please provide a bit more detail about the specific issue you're facing? Have you tried opening webui in another browser? firefox, chrome edge?
@Numb_
@Numb_ Жыл бұрын
how is restore face not changing the style of your images?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I see what you mean. It can be a bit tricky to understand. Depending on the Model and Loras you are using, the 'Restore Faces' option might work differently. I often prefer to use the 'inpaint faces' option or the After Detailer plugin. If you want a more in-depth explanation, you can check out my video on using the After Detailer plugin kzbin.info/www/bejne/r2SnqYtvqJWBnrM
@rlmangili
@rlmangili Жыл бұрын
great tutorial! how do I change my stable diffusion to this dark theme?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you, I'm glad you enjoyed the tutorial! To change to a dark theme, there are a few options you can try: (1) add /?__theme=dark at the end of your browser's URL when you're on the Automatic 1111 / Stable Diffusion page; (2) try the Dark Reader plugin for your browser; or (3) open your webui-user.bat file and add set COMMANDLINE_ARGS=--theme dark. Please remember to make a copy of your webui-user.bat before making any changes! Hope one of these solutions works for you!
@rlmangili
@rlmangili Жыл бұрын
@@AIKnowledge2Go Thank you so much... adding this line to the bat file worket!
@Dragon211
@Dragon211 Жыл бұрын
very quick and detailed tutorial, you earned a sub!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much! Welcome to the community! 🚀
@ATLJB86
@ATLJB86 Жыл бұрын
What sampler is faster than DPM++ 2M Karras but good?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Good is indeed a subjective term when it comes to AI-generated art. It ultimately depends on what you're looking for in terms of balance between speed and output quality. For my workflow, I prefer to start with Euler A during the prompt engineering phase due to the specific characteristics it imparts to the generated images. Then, when it comes to upscaling, I switch to DPM++ 2M Karras due to its high level of quality in the results.
@ATLJB86
@ATLJB86 Жыл бұрын
@@AIKnowledge2Go My thought process comes from the difference between SDE Karras and 2M Karras. SDE gives better quality but 2M has really good quality at half the speed. So I was wondering if I could go one more step down under 2M but I don’t think it’s possible. Somebody told me DDIM and yes it is faster than 2M by 1-2 seconds but that’s not a major difference and you sacrifice quality.
@soultek4979
@soultek4979 Жыл бұрын
Awesome video excited to see your others and future videos
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your enthusiasm! It's great to know you found the video helpful. I'm excited to continue creating more content for you and the community. Stay tuned for more!
@michalkrasnodebski8709
@michalkrasnodebski8709 Жыл бұрын
I just got in to stable diffusion automatic 1111, installed it alongside one of the lates toturial I found on youtube but my Ui looks lsightly diffrent for example I dont have "restore face" :/ why? can anyone help me do I need to download some additional addon or what ?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hey there! It seems like the "restore face" feature was relocated in version A1111 1.6. To get it back: Navigate to Settings -> User Interface. Under the Quicksettings list, choose face_restoration. Click on Apply Settings and then Reload UI. You should now see "Restore Faces" at the top of the page. If you'd like it to be a default feature: Go to Settings -> Restore Faces and enable it. Do note, if you've added it via Quicksettings, the description "Restore faces (will use a third-party model on generation result to reconstruct faces)" might not appear. Alternatively, consider using After Detailer. Here is a video about it: kzbin.info/www/bejne/r2SnqYtvqJWBnrM Hope this clears things up for you!
@michalkrasnodebski8709
@michalkrasnodebski8709 Жыл бұрын
holy crap thank you@@AIKnowledge2Go
@jegajega3
@jegajega3 Жыл бұрын
nice video bro, subscribed. One question, if i just need to get rid of something and i want that part to blend in with the background what should i do? how i copy patterns, for example wood tiles in from parquet?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much for your kind words and your subscription. For your question, if you want to remove something and have it blend with the background, you can use the 'Inpainting' option in the Automatic1111 workflow. You will have to change the 'Masked Content' setting to 'Latent Noise' and also modify your prompt accordingly. Please make sure to set the 'Denoising Strength' to 1, otherwise, you might end up with a pixelated result. As for copying patterns, I would recommend using the ControlNet tool. While I don't have a specific tutorial for your exact case, I do have two very comprehensive tutorials on how to use ControlNet on my KZbin channel. Here is one that might be helpful to you: Basic:kzbin.info/www/bejne/mpi1paeVrtlmr6M Using two contorlnet models at once: kzbin.info/www/bejne/iIqYiaR8aJ13fMk
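A hedged sketch of that object-removal recipe via the A1111 API (--api flag assumed): mask the unwanted object, fill the masked content with latent noise, keep denoising at 1, and prompt for the background you want instead. The mapping of "inpainting_fill": 2 to "latent noise" and the file names are assumptions to verify on your install.

```python
# Remove a masked object by regenerating the area from latent noise at full denoise.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("room.png")],        # placeholder file
    "mask": b64("object_mask.png"),          # white over the object to remove
    "prompt": "empty wooden parquet floor",  # describe the background, not the object
    "denoising_strength": 1.0,               # required with latent-noise fill
    "inpainting_fill": 2,                    # 0=fill, 1=original, 2=latent noise, 3=latent nothing
    "inpaint_full_res": False,               # "whole picture" so the patch blends in
    "steps": 30,
    "sampler_name": "DPM++ 2M Karras",
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("object_removed.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```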
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Do you struggle with prompting? 🌟 Download a sneak peak of my prompt guide 🌟 No Membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
@micbab-vg2mu
@micbab-vg2mu Жыл бұрын
thank you for great tips.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
You're welcome! I'm glad you found the tips helpful. Happy experimenting!
@ddra9446
@ddra9446 Жыл бұрын
Thanks for the tutorial.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
You're welcome! I'm glad you found the tutorial helpful. Stay tuned for more.
@ddra9446
@ddra9446 Жыл бұрын
@@AIKnowledge2Go yes
@SwampySi
@SwampySi Жыл бұрын
Nice video.. One question: when I try inpainting, do you need to change the prompt at all? I keep getting strange results; for example, I inpaint the face, but then I get a whole corrupted face, sometimes with bodies and all sorts of things in it. Any tips?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your feedback. Regarding inpainting, adjusting the prompt can often help in achieving the desired results. It sounds like the issue might be related to the denoising strength; consider reducing it. If you're having specific problems with faces, I recommend checking out my video on 'After Detailer'. This tool can really make inpainting faces effortless.
@remusveritas739
@remusveritas739 Жыл бұрын
i would prefer for the whole screen to be visible that would make the videos much more efficient even to learn from
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for the feedback. I always aim to provide the most clarity in my tutorials. I'll keep your suggestion in mind for future videos. Happy creating.
@Jeff_Makit
@Jeff_Makit Жыл бұрын
Hello, why don't I get the same result at all with your prompt? I get a cowboy on Earth.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hello, that's intriguing! It sounds like there might be something different in your setup. Could you double-check if you're using the same checkpoint I'm using? Additionally, it might be worth checking the console window to see if there are any error messages or unexpected outputs. Also, make sure your LoRa (Long Range Wide Area Network) settings are working correctly. These factors can often impact the kind of results you get. Let me know how it goes!
@Jeff_Makit
@Jeff_Makit Жыл бұрын
@@AIKnowledge2Go Yes, sorry, I've learned a lot since then and I understand better now. I certainly didn't have everything to install. Thanks for your answer
@Xioteer
@Xioteer Жыл бұрын
It's like listening to myself with my German Accent. Top Video Bruder
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Hi and thanks for your feedback! I just can't get rid of the accent 😂. Luckily, most English-speaking viewers seem to find it more amusing than annoying.
@rushc415
@rushc415 Жыл бұрын
a good job! I like it!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you so much! I'm glad you enjoyed it. Your support means a lot! 😊
@melvinhoyk
@melvinhoyk Жыл бұрын
Great workflow as I had abandoned SD and came back after your video. I wanted to ask about two types of prompts you used, and . Whenever I generate and my terminal shows Could not find two Lora files. Where do you download from and how do you install it?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thanks for your comment. These two are Loras, extensions that can be wielded on a variety of models. Links are below in my video description. If you're not sure about how to use Loras, I suggest you watch my video on this topic. It'll give you a detailed guide on how to download, install, and use them. Here's the link: kzbin.info/www/bejne/f3a8fol_l7WVh7M
@Shabazza84
@Shabazza84 Жыл бұрын
Look at the actual file name. I guess he renamed it from what the file name on Civitai is. Use the actual file name without file type.
@Shabazza84
@Shabazza84 Жыл бұрын
My 1080 can't upscale so much. Welp. But nice demonstration. That model is right up my alley. I hope the dev will resume maintaining it after his hiatus.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I understand your concerns with the 1080. To address the VRAM issue, you might want to try adding the --lowvram or --medvram parameters when starting with webui-user.bat. Another approach for upscaling with limited VRAM is to utilize ControlNet Tile: it essentially breaks your image into smaller tiles and scales each one separately. This can be especially helpful for hardware with memory limitations. Hope that helps.
@SpacenSpooks
@SpacenSpooks Жыл бұрын
Where do I find the Controlnet Tile? I'm not ready to follow your tutorial yet cos my internet is being throttled and that file for the revAnimated model you suggested is taking hours to download :/@@AIKnowledge2Go
@AnotherPlace
@AnotherPlace Жыл бұрын
What do you guys do with the pcitures generated? If you sell it, then where?
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your question. Personally, I enjoy exploring my artistic side with these tools. In fact, before the rise of ChatGPT and other AI models, I hadn't even considered myself to be an artist. While it might be tempting to sell the images generated by these models, it's important to note that most of them specifically prohibit selling the content or offering creation services, such as commissions. Always make sure to check and respect the terms of use for the models and tools you're using.
@pirobot668beta
@pirobot668beta Жыл бұрын
Thanks! This has re-newed my interest in virtual worlds-building!
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
I'm thrilled to hear that this has reignited your interest in building virtual worlds! It's such a fascinating field with so much potential. I'm glad my content could be part of your renewed journey. Thanks for sharing your experience and happy world-building!
@pirobot668beta
@pirobot668beta Жыл бұрын
I've been tinkering with BlenderAI, Stable-Diffusion that works inside of the Blender modelling system. Blender renders a source image which is passed into Diffusion (along with animated prompts!) The Blender scenes (3d animations) are a great starting point, but I can see that using this work-flow is going to greatly improve my results. My current project is to 'convert' a short video clip into a nightmarish vision using BlenderAI to re-work frames of video. Being able to play with 'sliders' on a frame-by-frame basis is pretty wild!
@eiermann1952
@eiermann1952 Жыл бұрын
german coastguard speaking 😍 great tutorial, will follow for more
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Danke sehr! 😊 I'll do my best to keep up the quality. Stay tuned for more tutorials!
@MrPicklesAndTea
@MrPicklesAndTea Жыл бұрын
This is more or less my workflow, only that I prefer DDIM over Euler a, and DPM++ 2S a Karras over DPM++ 2M Karras. I didn't know when I was first experimenting, but the two that I picked add noise between steps, which further randomizes and creates variation. Most other samplers(ones that aren't DDIM or have 'a' in the name), end up converging at about 150 steps to the same result because they don't add noise between steps.
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Interesting! I've noticed similar patterns with different samplers. The noise addition in DDIM and those with 'a' really makes a difference in creating unique variations. I'll definitely consider experimenting more with DPM++ 2Sa for refinement. Thanks for sharing your insights!
@ballaswave
@ballaswave Жыл бұрын
insane dude, thank you very much
@AIKnowledge2Go
@AIKnowledge2Go Жыл бұрын
Thank you for your comment! I'm glad you found the content helpful and enjoyed it.