The Ultimate Guide to A1111 Stable Diffusion Techniques

48,348 views

AIKnowledge2Go

A day ago

Comments: 158
@AIKnowledge2Go 9 months ago
Make sure to visit my sponsor. Their Textify tool is revolutionary: storia.ai. Write an email to founders@storia.ai for a 10% discount on your existing subscription for 6 months.
@TheNexusDragoon 9 months ago
The prompt that made those pics is different from the one you entered. This is why you should always add all prompts in the description; how is anyone going to be sure they get it all right before doing their own?
@TheNexusDragoon 9 months ago
What is in the "professional scenic photography" style?
@TheNexusDragoon 9 months ago
long shot, professional scenic photography, closeup image of a female druid, in leather armour, sitting on rock, casting nature spellfantast00d,smiling, perfect viewpoint, highly detailed, wide-angle lens, hyper realistic, with dramatic sky, polarizing filter, natural lighting, vivid colours, everything in sharp focus, HDR, UHD, 64k, Negative prompt: nsfw, (worst quality, low quality,2D:2), monochrome, zombie, overexposure, watermark, text, bad anatomy, bad hand, extra hands, extra fingers, too many fingers, fused fingers, bad arm, distorted arm, extra arms, fused arms, extra legs, missing leg, disembodied leg, extra nipples, detached arm, liquid hand, inverted hand, disembodied limb, small breasts, oversized head, extra body, extra duplicate, ugly, huge eyes, text, logo, worst face, (bad and mutated hands:1.3), (blurry:2.0), horror, geometry, bad prompt, (badhands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), ((2girl)), (deformed fingers:1.2), (long fingers:1.2),(bad-artist-anime), bad-artist, bad hand, extra legs , canvas frame, (high contrast:1.2), (oversaturated:1.2), (glossy:l.l), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render, badhandv4, bad-hands-5 Steps: 35, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1598295495, Size: 768x768, Model hash: edd7bf7340, Model: realcartoonRealistic_v14, Style Selector Enabled: True, Style Selector Randomize: False, Style Selector Style: base, Lora hashes: "fantasy00d-000015: 56defd48c8b6, add_detail: 7c6bad76eb54", Version: 
f0.0.17v1.8.0rc-latest-276-g29be1da7
@AIKnowledge2Go 9 months ago
@@TheNexusDragoon Here you go: SD 1.5 & SDXL Styles: www.patreon.com/posts/my-collection-of-87325880
@TheNexusDragoon 9 months ago
@@AIKnowledge2Go Sweet, I always try to recreate the image being made to make sure it's all going right. Thanks!
@SteveWarner 9 months ago
This is one crazy workflow that produces incredible results. I've watched it 3 times now and compiled over a page of notes. Tons of great information here. I've seen videos on upscaling, control net inpaint, control net tile, etc. but have never seen them put together into a cohesive workflow like this. Thanks for sharing. Really exceptional info. Liked and subscribed!
@SanjogBora 9 months ago
Can you please share the notes?
@AIKnowledge2Go 9 months ago
I'm so glad to hear you found the workflow helpful and took the time to dive deep into it. Over a page of notes, that's impressive! You're right; many tend to focus on single techniques, but I believe in the power of combining them to unlock even greater potential. Your support, by liking and subscribing, means a lot to me, and it motivates me to keep sharing more exceptional info.
@SteveWarner 9 months ago
@@AIKnowledge2Go If I could make one suggestion for this process, it would be to skip the Ultimate SD Upscale (step 5) and instead use the Tiled Diffusion extension. Ultimate SD Upscale produces gorgeous results, but it also produces noticeable seams, and the details between the tiles can change wildly. So while the overall image looks remarkable at first glance, the end result falls apart on close inspection. Conversely, the Tiled Diffusion extension allows you to upscale your results without any noticeable seams, and the overall image will be coherent. The settings for this in I2I are pretty straightforward. Enable Tiled Diffusion and Keep Input Image Size. Leave Method set to MultiDiffusion. Keep Tile Width/Height at their default (96) and Tile Overlap at its default (48). Set the Upscaler to 4x-UltraSharp and set the Scale Factor to 2 to 4 (you can go up to 16k). Keep Noise Inversion OFF. Turn on Tiled VAE and leave everything at its default. Turn on ControlNet Tile using the same settings you would for Ultimate Upscale. Set your CFG Scale somewhere between 5 and 7 (any higher and your image will look overbaked). Set Denoise Strength to between 0.3 and 0.45; anything outside this range will produce garbage results. Remove your prompt and simply put "8k, ultra sharp" (you can also use the Add Detail LoRA at a low strength if you need more details). Render and you'll get a gorgeous high-res image. The main thing to note with Tiled Diffusion is that it tries to adhere to your original image as closely as possible. So work out the details in your low-res image, then let Tiled Diffusion add the fine details as it scales things up.
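For anyone who would rather script this than click through the UI, the settings above can be sketched as an img2img request for A1111's `/sdapi/v1/img2img` API. The top-level fields (`denoising_strength`, `cfg_scale`, `prompt`) are the real API names, but the `alwayson_scripts` argument layout for the Tiled Diffusion and Tiled VAE extensions is an assumption here; check your installed extension versions for the exact schema.

```python
# Sketch: the Tiled Diffusion settings above as an img2img payload.
# The "alwayson_scripts" arg order for the extensions is assumed, not verified.

def tiled_diffusion_payload(image_b64: str, scale_factor: float = 2.0) -> dict:
    """Build an img2img payload following the settings in the comment above."""
    assert 2.0 <= scale_factor <= 4.0, "comment recommends a 2x-4x scale factor"
    return {
        "init_images": [image_b64],
        "prompt": "8k, ultra sharp",   # replace the full prompt at this stage
        "cfg_scale": 6,                # 5-7; higher tends to look overbaked
        "denoising_strength": 0.35,    # stay within 0.3-0.45
        "alwayson_scripts": {
            "Tiled Diffusion": {
                # method, tile width/height, overlap, upscaler, scale factor
                "args": ["MultiDiffusion", 96, 96, 48, "4x-UltraSharp", scale_factor],
            },
            "Tiled VAE": {"args": [True]},  # enabled, defaults otherwise
        },
    }
```

You would POST the result to `/sdapi/v1/img2img` on a webui started with `--api`, adding a ControlNet tile unit alongside it with the same settings as for Ultimate SD Upscale.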
@DeeTenF 9 months ago
@@SteveWarner Awesome tip. I actually just came back to this video for a refresher on upscaling and was going through the comments; going to try both methods and see which works better. I have a few images to test on. That's why I love this stuff: there are so many ways to do things, endless combinations, and finding what works best for a given scenario.
@Counterfactual52 8 months ago
@@SteveWarner Great tip! Thank you for commenting :)
@juggz143 9 months ago
In the step starting around 7:10, did you mistakenly invert the settings for denoise and ControlNet weight? Because if I put denoise at 0.9 or 1 and weight at 0.3-0.6 I get a completely new image, but if I reverse them I get the intended effect.
@AIKnowledge2Go 9 months ago
Hi there! Actually, no, the settings of denoising at 0.9 and ControlNet weight at 0.3-0.6 are indeed correct. Could you please check your A1111 console window for any errors that pop up when you use this ControlNet workflow and render? Sometimes the issue might be hiding in the details there.
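To make the relationship between the two sliders concrete, here is a minimal sketch of that step as an API payload: denoising deliberately high (0.9) while a low-weight ControlNet tile unit preserves the composition. The unit's field names follow the ControlNet extension's API convention, and the model filename is a placeholder for whatever SD 1.5 tile model is installed locally; verify both against your install.

```python
# Sketch: high denoise + low ControlNet tile weight, as discussed above.
# Unit field names follow the ControlNet extension's API; verify locally.

def tile_upscale_settings(image_b64: str, cn_weight: float = 0.45) -> dict:
    """img2img payload where the tile unit, not a low denoise, anchors the image."""
    assert 0.3 <= cn_weight <= 0.6, "video recommends a ControlNet weight of 0.3-0.6"
    return {
        "init_images": [image_b64],
        "denoising_strength": 0.9,     # high on purpose; the tile unit holds shape
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "tile_resample",            # tile preprocessor
                    "model": "control_v11f1e_sd15_tile",  # placeholder local name
                    "weight": cn_weight,
                }],
            },
        },
    }
```

If the ControlNet unit silently fails (the errors the reply asks about), this degenerates to a plain 0.9-denoise img2img, which is exactly the "completely new image" behavior reported.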
@juggz143 9 months ago
@@AIKnowledge2Go I wasn't getting any errors. It ended up being that I was using the i2i > Inpaint tab instead of just the i2i tab, like the other commenter mentioned (who seems to have deleted their comment for some reason, shrugs lol). But like I said, even in the main i2i tab I was able to get the desired result with those options flipped. Thanks!
@Voxxx69 9 months ago
@@AIKnowledge2Go Will using the A1111 Forge webui make a difference? I get similar results as @juggz143 when using denoising at 0.9 and control weight at 0.3-0.6 (i.e. my image changes completely).
@ernstaugust6428 8 months ago
@@juggz143 Same here. I don't think it could work with 0.9 denoising strength, especially when using a random seed...
@neosmith166 5 months ago
I got the same results as you. If I flip them I get the desired results; otherwise it gives me something else.
@AScalfTN 9 months ago
I just came across your channel for the first time. I'd yet to see some of these tips! Turning off Restore Faces on the upscale should be a big help for me; warped faces have always been my issue. Thanks!
@AIKnowledge2Go 9 months ago
I'm thrilled to hear you found the video helpful and insightful! Welcome to our community!
@DeeTenF 9 months ago
I've found that in general, at least the way I usually work, it's best to turn off Restore Faces after either text2img or after img2img (if you want to make significant changes to the image while keeping the same face). When I do any kind of inpainting on a face, my preferred method is to use IP-Adapter (I've heard good things about ID as well, but haven't tested it beyond a few images). IP-Adapter seems pretty good if I want to add or remove some kind of detail on a face: it keeps the face the same without removing or heavily denoising my changes. Face restore, on the other hand, does not seem to play nicely with inpainting on a face.
@JustM1118 9 months ago
I fell in love with digital art creation quite by accident, just one week ago, and have been going at it alone through trial and error. Finding your tutorial as a recommended video, I was excited that it was so clear and concise that even I could follow it. Liked and subscribed, and looking forward to this incredible adventure now that I've found your channel!
@AIKnowledge2Go 9 months ago
Thanks a lot for saying that. I am glad you found my content helpful. Happy creating.
@phenix5609 9 months ago
Haha, I'm in the same boat as you, I just started a few days ago and I'm going mad about it, all the possibilities. Every time I learn something new, I discover even greater things about this tech. Lots of errors and bugs at first, but I'm getting used to it and learning new ways to do things. Haha, I feel silly being amazed already by some pictures I made with only a checkpoint and positive/negative prompts, with little to no upscale, VAE, or even LoRA, as I discover new ways to make things even greater. I just learned about upscaling with hires fix at first, then went on to upscaling via img2img, quickly tried Ultimate SD Upscaler without ControlNet, discovered ControlNet yesterday, and LoRAs not long ago as well. It gets even crazier at each step, and now this. Thanks for the quality video and keep it up @AIKnowledge2Go
@Al-Storm 9 months ago
Thanks for the video. I used your flow, and everything looks good until the inpaint upscale; then it goes off the rails. I have the same exact settings and can't figure out what's going on. I can only get a normal-looking image if I lower the denoising.
@AIKnowledge2Go 9 months ago
You are welcome. Make sure that your ControlNet unit works correctly; check your console window for errors when you start the upscale. If it works correctly, then increase the ControlNet weight and lower the denoising.
@Al-Storm 9 months ago
@@AIKnowledge2Go Thanks! Yeah, tried all that. It only works if I drop the denoising to ~0.2, and then I'm losing detail. Not sure how you get it to work with the denoising so high; I've been trying everything. I'm on Forge, so maybe that's it? Although it should be 100% compatible with A1111.
@Darkwing8707 9 months ago
@@Al-Storm I've got the same problem with Forge
@randomM1ND89 9 months ago
Awesome, man! Your videos are what got me into AI art; so happy to be learning a new workflow. Can't wait to try it!
@AIKnowledge2Go 9 months ago
That's fantastic to hear! I'm glad my videos have inspired you to dive into AI art. Enjoy exploring the new workflow!
@randomM1ND89 9 months ago
So when I get to the stage of upscaling with "resize by", before we do the Ultimate upscale: I notice that if my prompt has a color word in it, like "purple hair", the whole image gets a purple tone to it. If I take the color word out, I don't get the purple hair, but the image looks great. Would you recommend anything so the image doesn't get the color overlay at this step? And would you happen to know what is driving this? @@AIKnowledge2Go
@foreropa 29 days ago
Thank you SOOOO much for the guide on your Patreon, it has been enlightening. So much advice, I really really appreciate it!!
@AIKnowledge2Go 22 days ago
Glad you enjoy it!
@SHPjealousy 5 months ago
A great workflow. I've run through it several times now, and the results are really top-notch. But in the last upscaling step (with the script) I lose the face I brought in with ReActor every time... how can I prevent that?
@AIKnowledge2Go 5 months ago
Hi, thanks for the feedback. You'd need to reduce the denoising strength, which of course gives less fine detail. It's also important that you have "After Detailer" and "Face restoration" disabled. You could also try working with an IP-Adapter; I'm currently working on a tutorial for that, but unfortunately it probably won't come until the video after next. I also don't know how well IP-Adapter works together with tile upscaling.
@SHPjealousy 5 months ago
@@AIKnowledge2Go Face restoration is off; I'll try with less denoising. For the IP-Adapter I still need to find a suitable model. Thanks in advance!
@AIKnowledge2Go 5 months ago
@@SHPjealousy Anytime.
@r2dk123 8 months ago
At 6:15 you said you uploaded the image from step one, but it looks like you used the inpainted version instead. Is the inpainted version the correct one to upload to ControlNet?
@AIKnowledge2Go 8 months ago
You are absolutely right, I mixed them up. But since it's outpainting, it shouldn't impact the outcome that much. ControlNet is analyzing the image, so it looks more at colors and surroundings and how it should inpaint the new areas.
@videowatcher551 5 months ago
What's a good way to make the end result less blurry and more detailed?
@AIKnowledge2Go 5 months ago
Unfortunately, there is no one-shot answer, as it strongly depends on the Stable Diffusion version, checkpoint, LoRA usage, settings, etc. Sometimes it helps to increase the denoising, but that can of course mess up the composition. On my Patreon you can find my FREE workflow guide; maybe it can shed some light in the darkness. 100% free, no membership needed! www.patreon.com/posts/get-your-free-99183367
@videowatcher551 5 months ago
@@AIKnowledge2Go I found that upping the tile width in USDU to 1024 (keeping the scale at 2) and putting the denoising at 0.25 to maintain consistency will output some really nice pictures with way less blurring. Occasionally I'll up it to 2048 if the picture is already really good, for a really nice composition.
@Lukasz490 2 months ago
One question about checkpoint versions, or versions in general: you are using version 11 of the RealCartoon checkpoint although there is a version 17 on the screen. Doesn't a higher version mean it is newer or "better"?
@AIKnowledge2Go 2 months ago
Hi, yeah, usually newer is better in terms of checkpoints; some checkpoint creators push out new versions on a daily basis. When I came up with the idea for the video, 11 was the latest, and I don't change versions while working on a project. This is why it's still version 11.
@Lukasz490 2 months ago
@@AIKnowledge2Go Thanks for the quick reply :D
@AIKnowledge2Go 9 months ago
Do you struggle with prompting? 🌟 Download a sneak peek of my prompt guide 🌟 No membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
@BoringThings2069 A month ago
I only started using SD a few weeks ago, and all the videos on YouTube are from a year ago. Did everyone stop using A1111? Or has nothing really changed since then?
@SgtBash-iz2rd A month ago
I'm wondering the same, really odd. Also it seems to be mainly Forge now?
@AIKnowledge2Go A month ago
Yes, un(forge)tunately... bad pun... unfortunately, no one knows the timeline for new A1111 versions. It doesn't support Flux, and I may be wrong, but it also doesn't support Stable Diffusion 3.5 at the moment. Those are both the go-to models for local image generation. I still mainly use A1111 because some extensions do not work inside Forge. If you want to try it and don't want to manage multiple downloads / model folders, here I show how you can install A1111, Forge, ComfyUI and many more with just a few clicks using Stability Matrix: kzbin.info/www/bejne/mKiznGCEjcyappI
@rixyl7475 3 months ago
Are you using Restore Faces? I can't get my faces to look even a fraction of that quality. Edit: Saw that you were, and turned it off later. HOWEVER, I'm now running into an issue where the hands are getting worse after the resize, and Inpaint doesn't support 2736x1536 (it only shows 2048 as the max). Also, the face distorted just a tiny bit, so I'd like to fix that as well if possible. Any recommendations?
@rixyl7475 3 months ago
@Bpmf-g3u Thanks, I ended up spending the day repeatedly inpainting until I got it decent. Tried a few other things, but none worked at all for some reason.
@AIKnowledge2Go 3 months ago
When doing a tile upscale it's important to turn off Restore Faces, because it goes tile by tile; that's why you need to turn it off. Regarding your problem: usually you want to do all your inpainting fixes at lower resolutions and then upscale. If anything in the image is distorted, decrease the denoising strength.
@rixyl7475 3 months ago
@@AIKnowledge2Go Funny enough, it looked fine before the upscale; perhaps I needed to change the denoising. I've noticed I added TOO much detail after the upscale and tiling. Would lowering or increasing the denoising fix that? Sorry for all the questions; I've never touched upscaling before, so this is all new to me. Appreciate the response!!
@TheAncientDemon 4 months ago
This is great stuff! When you turn the image into widescreen, can you add more characters into that space? If so, how?
@AIKnowledge2Go 4 months ago
You would first have to outpaint the image (into widescreen), then send it to inpaint, mask the area where the character you want to add should stand, and use a high denoising strength, 0.7-0.9. Use ControlNet inpaint with either lama or global harmonious as the preprocessor. Remove everything from the prompt that has nothing to do with the render; "Photorealistic, 4K, HDR" etc. has to stay in order to give you the same look. Working with character LoRAs here can be tricky. I have a guide on my Patreon in my 3-dollar tier, but here is a free version that can get you started: www.patreon.com/posts/99183367 Hope that helps.
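As a rough sketch of the sequence described (outpaint to widescreen first, then mask the empty area and inpaint the new character at high denoise), the inpaint call could be parameterized like this. The module names (`inpaint_only+lama`, `inpaint_global_harmonious`) match the ControlNet extension's naming, but the payload as a whole is illustrative, not a tested recipe.

```python
# Sketch: inpainting a new character into an outpainted (masked) region.
# High denoise lets SD invent the character; ControlNet inpaint blends it in.

def add_character_inpaint(image_b64: str, mask_b64: str,
                          character_prompt: str) -> dict:
    """img2img inpaint payload for the masked area where the character goes."""
    return {
        "init_images": [image_b64],
        "mask": mask_b64,              # white = region to repaint
        "denoising_strength": 0.8,     # 0.7-0.9 per the reply above
        # Strip scene-specific tags; keep the new character plus style tags:
        "prompt": f"{character_prompt}, photorealistic, 4K, HDR",
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "inpaint_only+lama",  # or inpaint_global_harmonious
                    "weight": 1.0,
                }],
            },
        },
    }
```

The `character_prompt` argument and the fixed style suffix here are illustrative stand-ins for whatever prompt and style tags your original generation used.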
@TheAncientDemon 4 months ago
@@AIKnowledge2Go I have another question. When putting the denoising strength at 0.9 for the image-detail increase, I sometimes see changes to the image that I didn't want changed from the original, but it's random. What is a good way to keep the image 100% the same but still add the detail you did in this video?
@lastreligiondfw 20 days ago
Your method of download, although simple, skips the steps on how you're supposed to enable these in A1111. Do they go in the extensions folder? I did that after searching the web, but there doesn't seem to be a way to enable ControlNet unless I install it via the A1111 webui.
@lastreligiondfw 20 days ago
Even then, I can't be sure I'm doing/downloading the same things you did.
@AIKnowledge2Go 10 days ago
Hi, it's been a while since I created this video. I'm not sure what you mean by my method of download; all the links are provided in the video description. The LoRAs go into the Lora folder and the inpaint model into the ControlNet folder.
@thays182 6 months ago
Need to know how you'd do this in SDXL! There's no inpaint or tile ControlNet model that I can find for SDXL :(
@AIKnowledge2Go 6 months ago
I know how you feel. There are these models from various people: civitai.com/models/136070/controlnetxl-cnxl. Unfortunately, ControlNet models for SDXL are inferior compared to SD 1.5's. That's why I still use SD 1.5 in certain scenarios.
@thays182 6 months ago
@@AIKnowledge2Go Oh wow! I didn't know any existed for tile and inpaint. Have you tried applying your inpainting ControlNet wizardry with these models you've shared? It would be amazing to get something similar to the inpaint method working in SDXL.
@nctaylor44 4 months ago
ControlNet was the cheat code I needed.
@AIKnowledge2Go 4 months ago
I'm glad to hear that ControlNet is helping you out! It's amazing how a little tool can make such a big difference in your workflow.
@AI_ART_MASTER A day ago
How do you have the sampling method DPM++ 2M Karras, while I only have DPM++ 2M? Are they the same or not?
@polystormstudio 9 months ago
I've been using the ControlNet Tile / Ultimate SD Upscale (USDU) workflow for months, but I still get a lot of artifacts using ControlNet. My solution has been to just use USDU with noise set to 2, and always use DPM++ 2M SDE Exponential, even if the original was done with Euler a.
@AIKnowledge2Go 9 months ago
Great workaround! I've also shifted from Euler A to experimenting with DPM++ 2M Karras and DPM++ SDE in Dreamshaper XL Lightning setups. It's fascinating to see how different settings impact the final output.
@artnu9300 9 months ago
@@AIKnowledge2Go Have you tried JuggernautXL Lightning? For me it's the best model, but I couldn't make it work properly from your tutorial.
@roxmay 8 months ago
So many good ideas! But now I am working with SDXL, and I am unable to find any inpaint ControlNet module 😭😭. Do you have any solution?
@AIKnowledge2Go 8 months ago
Not having a ControlNet inpaint module is one of the biggest drawbacks of SDXL; I know your pain. I've heard of a piece of software called Fooocus that is good at inpainting. It's from lllyasviel, who has done a lot of work on the original ControlNet models, but I haven't tried it myself: github.com/lllyasviel/Fooocus It's an area I will definitely look into in future videos, but I can't promise you when.
@majortom4338 2 months ago
Hi there, may I ask why you use A1111 and not, for example, Fooocus? Thanks
@AIKnowledge2Go 2 months ago
You may... at the time I recorded the video, I wasn't that convinced by Fooocus yet. In fact, one of my upcoming projects is an inpainting tutorial where I'll introduce Fooocus.
@anonymysable 3 days ago
Hey! Is this still up to date (in December of 2024), and does it work if I'm using reForge? Thanks! :)
@AIKnowledge2Go 3 days ago
Yes, it is. The techniques in this video are timeless. It works in Forge UI; I haven't used reForge yet.
@anonymysable 3 days ago
@@AIKnowledge2Go Thanks! However, the step at 5:50 doesn't work for me: when I change the ratio from 768x768 to 1024x768, the image is stretched... I have these settings: ControlNet enabled, Upload independent control image, Inpaint, inpaint_only+lama, thibaud_xl_openpose (because the ControlNet models are not working for reForge), "ControlNet is more important", Resize and Fill, with the control weight at 1 and the denoising strength at 0.9. Edit: never mind, found the issue! It was because I forgot to switch back from "inpaint" to "img2img".
@sinuva 5 months ago
For some reason, the masked area gets darker than the original pic, and I don't know how to solve it.
@AIKnowledge2Go 5 months ago
Strange. Can you describe in more detail what you did and what model you are using? Maybe I can help.
@dawg0907 8 months ago
Could you drop your negative prompt? I would like to save it as a template.
@AIKnowledge2Go 8 months ago
You can find my style collection here, 100% free, no membership needed: www.patreon.com/posts/my-collection-of-87325880 I also have a free sneak peek of my workflow and a beginner's guide if you're interested, also 100% free: www.patreon.com/posts/get-your-free-99183367 www.patreon.com/posts/sneak-peek-alert-90799508 Have fun with it, happy creating!
@x9v8k 8 months ago
The image stretches after generating at a higher res, and I have Resize and Fill selected. Is there something else I'm not doing right?
@AIKnowledge2Go 8 months ago
I'm sorry to hear that you're having trouble with my workflow. I assume you are talking about the outpainting step? If so, can you please confirm that you are using "Resize by" instead of "Resize to"? Also, can you please check your command window for any errors popping up while rendering?
@marskog 5 months ago
If I want to make AI pictures of a person, and make many images of the same person, how do I do that?
@AIKnowledge2Go 5 months ago
Hi, I have a consistent-character tutorial in the works, but unfortunately it won't be my next video. Look online for an IP-Adapter tutorial.
@alonius12 4 months ago
@@AIKnowledge2Go Looking forward to that consistent-character tutorial 🎉
@AIKnowledge2Go 4 months ago
@@alonius12 The script is finished. Unfortunately I got sidetracked by Flux.1, but it will be one of my next videos, big promise.
@phenix5609 9 months ago
Yeah, I have the same problem as someone else I read in the comments: when I got to the inpaint-upscale part, my image changed entirely. I lost the scenery, the character, everything, while yours seems almost unchanged except for the added detail. I'm not sure what's going on... so I can only run a really low denoising; not sure how yours doesn't change. Nothing weird shows up in the terminal, so I think it's working correctly...
@AIKnowledge2Go 9 months ago
That does sound strange indeed. Could you please confirm whether you're using the latest versions of A1111 and ControlNet? Also, it would be helpful to know which checkpoint you're currently using; I want to try it out for myself.
@phenix5609 9 months ago
@@AIKnowledge2Go I'm not sure you got my answer; I wrote to you some hours ago but it seems to have vanished from the comments, I don't know why. Anyway, I think I'm up to date, as it's not been a week since I started using AI image-creation tools. From the System Info extension I got this: app: stable-diffusion-webui-forge updated: 2024-03-08 device: NVIDIA GeForce RTX 3080 (1) (sm_90) (8, 6) cuda: 12.1 cudnn: 8801 driver: 551.52 python: 3.10.11 xformers: 0.0.25 diffusers: 0.25.0 transformers: 4.30.2 configured: base: realcartoonRealistic_v14.safetensors [edd7bf7340] refiner: vae: kl-f8-anime2.ckpt loaded: base: C:\stable-diffusion-webui-forge\models\Stable-diffusion\realcartoonRealistic_v14.safetensors refiner: vae: C:\stable-diffusion-webui-forge\models\VAE\kl-f8-anime2.ckpt So with this you should have my A1111 version, some of my specs, and the model I used when I tried your tutorial. I was following closely, so I also used the LoRAs you use, the fantasy one and the add-detail one. I still managed to get it done from where I was stuck (the inpaint-upscale step), but only after a lot of time and trial and error testing various values, and in the end nothing I was happy with. Then I let it go and went with an img2img Ultimate Upscaler solution, again taking some time to find a value I was happy with. As I said, I'm pretty new to this, so maybe I made some mistakes, or a combination of factors made this not work as expected, I don't know.
@markusblandus 8 months ago
Are you on Forge UI? I am getting the same thing. I think you can drop the denoise to 50% and get similar results.
@ShubikStyle 8 months ago
I've installed the latest version of Stable Diffusion from GitHub and I don't have Karras sampling at all. Can someone please help me?
@AIKnowledge2Go 8 months ago
Hi, this is now a separate dropdown called "Schedule type". You can leave it on Automatic, but you can also set it by hand. I hope that helps; happy creating.
@deosharma4229 6 months ago
Great video. How can we make anime-type videos? Could you bring out another video on this, please?
@AIKnowledge2Go 6 months ago
Yes we can! Have you checked my kzbin.info/www/bejne/f5S0p4eYZs-Jd68&lc=UgyPOEGpbzo_PNSjhmN4AaABAg video yet? Just change the prompt to anime-related styles.
@quercus3290 9 months ago
So why not hires fix, exactly? I feel this could be considerably faster: hires fix, inpaint if you have to, then ControlNet tile with Tiled VAE up to 16k, done.
@AIKnowledge2Go 9 months ago
Experiment with ControlNet Upscale versus the high-res fix and you'll notice a difference. ControlNet tends to produce finer details, mainly because you have the option to increase the denoising strength. While this might not hold for every checkpoint, it's true for roughly 90% of them. Additionally, when fine-tuning your prompts, consider generating a large batch of images, say around 50, and then selecting the top 3. This process is considerably slower using the high-res fix. Inpainting works more efficiently with lower-resolution images too. Before you think about upscaling, ensure your composition feels complete. This foundational step is crucial for achieving the best overall quality in your final image.
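The "generate a large batch at low resolution, then pick the top few" step mentioned here is easy to automate. The endpoint `/sdapi/v1/txt2img` and the payload fields below are A1111's actual HTTP API (the webui must be started with `--api`); the selection itself is still done by eye afterward, and the prompt is a stand-in.

```python
import json
import urllib.request

def batch_payload(prompt: str, n: int = 50) -> dict:
    """txt2img payload for a cheap low-res batch to choose candidates from."""
    return {
        "prompt": prompt,
        "steps": 25,
        "width": 512, "height": 512,  # stay low-res until the composition is right
        "n_iter": n,                  # e.g. ~50 candidates, as suggested above
        "seed": -1,                   # fresh random seed per image
    }

def submit(payload: dict, url: str = "http://127.0.0.1:7860") -> list:
    """POST to the txt2img endpoint and return base64-encoded result images."""
    req = urllib.request.Request(
        f"{url}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]
```

Only after picking the best low-res candidates would you move on to inpainting, outpainting, and the tile upscale.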
@quercus3290 9 months ago
@@AIKnowledge2Go Mate, I have over 9k images uploaded to Civitai. I'm generally of the understanding that when upscaling it's better to start with as high-quality a base image as possible, hence why you want to hires fix. And when you say "inpainting works more efficiently with lower-resolution images", what do you mean? Do you really notice any difference compared to hires with ADetailer first? I dunno, man. I think hires fix, possibly using ADetailer if needed, then a second pass in img2img and upscale with Tiled VAE and Tiled Diffusion, which can also hook into ControlNet for the tile preprocessor.
@AIKnowledge2Go 9 months ago
@@quercus3290 No offense, but I'm genuinely interested: at what point in your process do you apply outpainting to achieve a 16:9 aspect ratio? Do you begin with an image already in 16:9 resolution? I'd love to understand your process for applying a high-resolution fix, as I'm eager to test it myself. It's not that I'm questioning the effectiveness of your workflow, but based on recent surveys, about 90% of my audience is using Nvidia 2080 graphics cards or something older, so time efficiency is crucial. Inpainting at high resolutions is significantly slower and requires more patience; if the outcomes are unsatisfactory and numerous adjustments need to be made, it can become quite frustrating. For the majority of my viewers, the workflow I've showcased appears to be more practical. Given the volume of images you've uploaded, am I correct in assuming you're working with at least a GeForce RTX 4080 or something more advanced?
@MikeRenouf 5 months ago
This is a really interesting back-and-forth. Not sure which method works best for me. I do think that having to start with a square image actually limits the range of compositions that you can get SD to produce, though. SD will try to put all the relevant elements into that initial square, and the outpainting potentially just adds "filler" content. Starting with a non-square image might not yield the absolute best quality, but it might offer better composition?
@Mypstips 8 months ago
Amazing video! My final image even has too many details, it's crazy! Thanks a lot!
@AIKnowledge2Go 8 months ago
Glad it helped! Happy creating
@DeeTenF 9 months ago
Thank you for the guide, it led to a fun discovery when I misremembered some of the settings. It turns out that SD Ultimate Upscale can be abused to make photomosaics. The results were both horrifying and beautiful. 10/10, would make a person made out of other people again.
@AIKnowledge2Go 8 months ago
You're welcome! It's fascinating how sometimes, what starts as an accidental setting can lead to both horrifying and wonderfully unique creations. I love that you're embracing the experimental side of things and discovering new possibilities.
@nikgrid
@nikgrid 9 months ago
Just discovered your channel... subscribed! Great vid.
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
Welcome aboard!
@slaznum1
@slaznum1 9 months ago
Very nice and informative, but your audio sounds hollow. Maybe you can upscale it!
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
Hello, thanks for your feedback. Somehow I haven't found the right settings with the new microphone yet.
@kobusdowney5291
@kobusdowney5291 9 months ago
This works with the new SDXL ControlNet scripts too, except the preprocessor MUST be set to none.
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
Helpful insight on the SDXL ControlNet scripts and the importance of setting the preprocessor to none. I'm on the hunt for an effective upscale workflow, so this is great info. Thanks!
@Eevee13-xo
@Eevee13-xo 6 months ago
Would it be possible to upload all the images you created/used in each stage of this guide to an online host? I would be interested to see the difference between each stage in the process. It might also help other users by letting them see the differences and put some visual elements to the settings and options you have used. Nevertheless, excellent guide and detailed explanation, and for what it's worth, the accent makes it 100 times better. You just earned my sub!
@AIKnowledge2Go
@AIKnowledge2Go 6 months ago
Thank you for the suggestion! I'll definitely consider uploading the images for each stage to provide a visual reference for viewers.
@dlep9221
@dlep9221 9 months ago
Thanks a lot, it's a very good workflow. I tried it with another checkpoint, and it's perfect with your values!
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
Great to hear! Happy creating.
@neosmith166
@neosmith166 5 months ago
I tried to use your method to fix deformed fingers but miserably failed. But hey, the model looks awesome!!
@AIKnowledge2Go
@AIKnowledge2Go 5 months ago
Sorry to hear that. Is your ControlNet inpainting working properly? Sometimes the command window is full of errors, and you wonder why it isn't working. Also, try different denoising strengths and generate at least two images; sometimes the second image really makes a difference.
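The "try different denoising strengths and generate at least two images" tip can be automated against A1111's local API. A minimal sketch, assuming the web UI is running with the `--api` flag (the `/sdapi/v1/img2img` endpoint and the field names below are modeled on A1111's API; verify against your version's docs). The actual HTTP call is left as a comment so the sketch stays self-contained:

```python
def denoise_sweep(prompt, init_image_b64, strengths, batch_size=2):
    """Build one img2img payload per denoising strength.

    batch_size defaults to 2 because the second image in a batch
    sometimes makes the difference."""
    return [
        {
            "prompt": prompt,
            "init_images": [init_image_b64],
            "denoising_strength": s,
            "batch_size": batch_size,
        }
        for s in strengths
    ]

payloads = denoise_sweep("photo of a hand, detailed", "<base64 image>",
                         [0.3, 0.4, 0.5])
# To send (requires the `requests` package and a running A1111 instance):
# for p in payloads:
#     requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=p)
print(len(payloads), payloads[1]["denoising_strength"])  # → 3 0.4
```

Sweeping three or four strengths in one go beats regenerating by hand each time, especially on older GPUs where every attempt is costly.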
@MSWisdom-vz7tw
@MSWisdom-vz7tw 9 months ago
Absolutely great video man. Thank you very much.
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
Glad you enjoyed it! Happy creating.
@ctrlartdel
@ctrlartdel 9 months ago
I thought yaml files did not need to be downloaded
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
You are absolutely right, you don't need to do this in newer versions of A1111 anymore. I always add this because some of my audience may have older versions.
@ctrlartdel
@ctrlartdel 9 months ago
@AIKnowledge2Go I was wondering about that! Tried using versions 1.6, 1.7 and 1.5.2 and noticed some differences. What's the best one? I'm not liking 1.8.0.
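The YAML tip in this thread boils down to a naming convention on older A1111 builds: each ControlNet model file needs a .yaml config with the same base name sitting next to it. A small sketch that lists which configs would be missing (the convention is an assumption based on common ControlNet installs; verify against your own setup):

```python
from pathlib import PurePosixPath

def missing_yaml_configs(model_files):
    """Return the expected .yaml names for models that lack one."""
    names = set(model_files)
    expected = {}
    for f in model_files:
        p = PurePosixPath(f)
        # ControlNet weights usually ship as .pth or .safetensors
        if p.suffix in {".pth", ".safetensors"}:
            expected[f] = str(p.with_suffix(".yaml"))
    return [y for f, y in expected.items() if y not in names]

files = ["control_v11p_sd15_inpaint.pth", "control_v11p_sd15_inpaint.yaml",
         "control_v11f1e_sd15_tile.pth"]
print(missing_yaml_configs(files))  # → ['control_v11f1e_sd15_tile.yaml']
```

On recent A1111 versions this check is unnecessary, as noted above, but it is a quick sanity check for anyone still on an older install.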
@TazzSmk
@TazzSmk 8 months ago
The process is so complex that I'm no longer surprised people move from A1111 to ComfyUI. Great vid anyway! Cheers :)
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
I completely understand where you're coming from. The process can indeed get complex, which is why I alternate between ComfyUI and Automatic 1111 based on the specific needs of each project. For tasks like AnimateDiff, ComfyUI is my go-to. However, SDXL Lightning with ComfyUI can present some challenges for me when it comes to upscaling. I tried Ultimate SD Upscale, KSampler upscale and SUPIR, but the results are still mediocre. I'm always on the lookout for tips and tricks to streamline these workflows, so if you have any suggestions or need advice on a particular aspect, feel free to share! Cheers and thanks for the support :)
@TazzSmk
@TazzSmk 8 months ago
@AIKnowledge2Go I wonder what your opinion is on the recent StableSwarmUI? It's still in beta, but it basically combines A1111 and ComfyUI within one convenient web interface, with easy support for LAN use and multiple GPUs too :)
@MurphysPuppet
@MurphysPuppet 9 months ago
Great video. I love your dialect so much! 😅😂 I've already learned a lot from you, thank you very much!^^ Have you ever thought about uploading the videos in German as well?
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
Hi, thanks for your feedback! Glad you like my dialect; most Germans unfortunately see it differently... 😊 I have a German channel (KIwissen2go), but right now I don't have the time to translate the videos. Thanks for your support!
@MurphysPuppet
@MurphysPuppet 8 months ago
@AIKnowledge2Go I love it! :D I'll check out your other channel right away! Say, could you make a video explaining how to create two people independently of each other?
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
@MurphysPuppet Thanks for enjoying my videos and taking an interest in my other channel! Creating two independent people with Stable Diffusion can indeed be challenging. I'm afraid that's a bit short for a whole video, but I'll be experimenting with Shorts for quick tips like this soon, so maybe I'll actually make it the topic. The easiest way to achieve it is with inpainting. If you have a man and a woman in the image, you can try prompts like: image of 1 woman 1 man, he wears a black coat, she wears a red dress. With two people of the same sex, however, this tends to work poorly.
@SumNumber
@SumNumber 8 months ago
Top secret ? I wonder if I will get in trouble for watching this ? :O)
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
Your mission, should you choose to accept it, involves keeping these "top secret" techniques under wraps. As long as you're discreet, you'll navigate this covert operation without any trouble. Welcome to the inner circle! 😉
@justinnelson6794
@justinnelson6794 6 months ago
I wish there was a way to like a video more than once.
@AIKnowledge2Go
@AIKnowledge2Go 6 months ago
Your support means the world to me, and I wish I could like your comment more than once too!
@This_or_Thatx1
@This_or_Thatx1 4 months ago
I don't have DPM++ 2M Karras and can't find it anywhere. Please help, I'm angryyyy!
@AIKnowledge2Go
@AIKnowledge2Go 4 months ago
Please don't be angry. Since one of the recent updates, the sampling method and the schedule type have been separated; you'll find Karras in the schedule type dropdown next to the sampler. Actually, the Automatic setting should be fine.
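The sampler/scheduler split described above can be sketched as a tiny mapper: legacy combined labels like "DPM++ 2M Karras" become a sampler name plus a schedule type in recent A1111 builds. The schedule names below are illustrative examples, not an exhaustive list; check the dropdown in your install:

```python
# Schedule types that used to be baked into the sampler label (assumed list).
KNOWN_SCHEDULES = ("Karras", "Exponential", "SGM Uniform")

def split_sampler(combined):
    """Split a legacy combined sampler label into (sampler, schedule)."""
    for sched in KNOWN_SCHEDULES:
        if combined.endswith(" " + sched):
            return combined[: -len(sched) - 1], sched
    # No recognized suffix: the sampler keeps its name and the schedule
    # falls back to the UI's "Automatic" setting.
    return combined, "Automatic"

print(split_sampler("DPM++ 2M Karras"))  # → ('DPM++ 2M', 'Karras')
print(split_sampler("Euler a"))          # → ('Euler a', 'Automatic')
```

So anyone following an older tutorial that says "DPM++ 2M Karras" should pick "DPM++ 2M" as the sampler and "Karras" (or "Automatic") as the schedule type.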
@arti_cool4686
@arti_cool4686 9 months ago
Supportive comment. Thanks for the guide! ❤‍🔥
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
I am glad you liked it. Happy creating.
@zoltronborgman1006
@zoltronborgman1006 8 months ago
Upscale is at 7:06, if anyone wanted to know ♡
@DezorianGuy
@DezorianGuy 9 months ago
Can you change your audio setup? It is a pain to listen to your voice. Get a better mic, use filters, record audio in another room, turn down the volume, do something.
@MurphysPuppet
@MurphysPuppet 9 months ago
He started not long ago... equipment costs money. You'll have to live with it for now, until he can afford it.
@MyloSkeng
@MyloSkeng 8 months ago
It’s not even that bad at all
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
Thank you all for your feedback and support. I'm currently exploring better audio editing techniques. From here it only gets better :)
@DezorianGuy
@DezorianGuy 8 months ago
@AIKnowledge2Go Your videos are somehow unique; you go into detail. All the other videos out there are beginner guides that all say the same thing, and you hardly learn anything new. Keep making such in-depth guides, keep up the good work.
@Kimkam85
@Kimkam85 9 months ago
CTRL + ENTER... Thank you
@AIKnowledge2Go
@AIKnowledge2Go 9 months ago
You're welcome, I figured it out by accident 😊
@GS195
@GS195 9 months ago
I'm on Forge and when I do the Inpaint Upscale step, I get this error and the resulting image is completely different:

*** Error running process_before_every_sampling: D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "D:\AI\webui_forge_cu121_torch21\webui\modules\scripts.py", line 835, in process_before_every_sampling
        script.process_before_every_sampling(p, *script_args, **kwargs)
      File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 555, in process_before_every_sampling
        self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
      File "D:\AI\webui_forge_cu121_torch21\webui\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\sd_forge_controlnet\scripts\controlnet.py", line 497, in process_unit_before_every_sampling
        cond, mask = params.preprocessor.process_before_every_sampling(p, cond, mask, *args, **kwargs)
      File "D:\AI\webui_forge_cu121_torch21\webui\extensions-builtin\forge_preprocessor_inpaint\scripts\preprocessor_inpaint.py", line 27, in process_before_every_sampling
        mask = mask.round()
    AttributeError: 'NoneType' object has no attribute 'round'

Do you know what is going on?
@AIKnowledge2Go
@AIKnowledge2Go 8 months ago
I'm sorry to hear you're encountering this issue. While I'm not deeply familiar with Forge specifics, this error suggests there might be a problem with the input mask. Since we don't use one, it could be a bug in ControlNet for Forge. Can you confirm you used the right preprocessor? If you haven't already, consider seeking advice on forums or communities dedicated to Forge or similar AI tools; they might have encountered and resolved similar issues. Good luck, and I hope you find a solution soon!
@gabogavidia
@gabogavidia 6 months ago
Thanks for sharing, bro. Keep it clear and honest, you're doing an amazing job. 🖤
@AIKnowledge2Go
@AIKnowledge2Go 6 months ago
I appreciate that, thanks
@zivadia3398
@zivadia3398 4 months ago
Where are all the Germans saying HOHO HE'S GERMAN?
@AIKnowledge2Go
@AIKnowledge2Go 4 months ago
Maybe they're still on Mallorca? 😂
@zivadia3398
@zivadia3398 4 months ago
@AIKnowledge2Go thanks for the video, brother xD