Your UI is so beautiful and brilliant. Thank you and all the developers for your great work🙏🙏🙏.
@tomschuelke7955 1 year ago
You need more followers... it's the best interface for SD so far. I've got many ideas to improve it, but it's really already good.
@gabriel3888 1 year ago
Thanks for all the hard work you put into these resources
@AdrianLopez_vfxwolf 1 year ago
Really enjoying the series. Would love to see you use the canvas to create some photoreal work. Everything so far has been 'painterly' or illustrative, which I think gives you more leeway. I'd like to see how the canvas works in a hyperreal workflow.
@blizzy78 1 year ago
6:24 Where can I find the description of the infill algorithms? Thanks. Edit: Found it on the Discord. There's no real description with it; it's basically just the image shown in the video.
@ikeo1 1 year ago
Really enjoying these videos. Thanks for creating them
@SonnyNguyen 1 year ago
Thanks for the great video 🙏🙏. I hope the next videos will be 2K resolution/quality or above!
@brockoala2994 1 year ago
Thank you so much for this. Can't wait for the detailed tutorial on Workflow!
@Connor3G 1 year ago
Thanks for your work on this
@void2258 1 year ago
Looks great. Could we have an option to delete a choice while picking (before you accept one)? It's hard to narrow down if you can't remove rejects as you go.
@carlosbenitez6438 8 months ago
Love the videos. If I had one suggestion or wish-list item, it would be for you guys to create a course that takes a user through things on a week-by-week basis. The YouTube videos and tutorials are great, but it's still challenging, especially when you're trying to understand the details within, like LoRAs, ControlNet specifics, etc. Still amazing though, thank you!!
@eartho 1 year ago
Seems like the mask edge blur should be relative to the pixel dims of the mask? Or calculated after the res-up instead of before? And I've been wondering... how expensive would it be to run auto-segmentation after generation? Especially if those segs could be semantic. Having those masks already tagged and created would open up all sorts of possibilities.
@invokeai 1 year ago
Something we are keeping an eye on! :)
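As an illustration of the scaling eartho is describing, here is a minimal sketch (a hypothetical helper, not how InvokeAI computes its blur) that ties the feather radius to the mask's own bounding-box size instead of a fixed pixel value:

```python
from PIL import Image, ImageFilter

def relative_edge_blur(mask: Image.Image, fraction: float = 0.05) -> Image.Image:
    """Feather a binary mask by a radius proportional to its own size.

    `fraction` is the blur radius as a share of the shorter side of the
    mask's bounding box, so small masks get a small feather and large
    masks get a proportionally larger one. (Hypothetical helper.)
    """
    bbox = mask.getbbox()  # (left, top, right, bottom) of nonzero pixels
    if bbox is None:       # empty mask: nothing to feather
        return mask
    short_side = min(bbox[2] - bbox[0], bbox[3] - bbox[1])
    radius = max(1, int(short_side * fraction))
    return mask.filter(ImageFilter.GaussianBlur(radius))
```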
@metamon2704 1 year ago
Great explanation 👍
@MsReclusivity 1 year ago
I missed how you got the hair to just do the highlights in there because it went by so fast!
@invokeai 1 year ago
Just using a lower denoising strength and toying with the prompt!
@USBEN. 1 year ago
I would like you to make a video on ControlNet fixes in an existing image, mainly how to fix hands after the general character is created. An option to modify the ControlNet pose in the UI would be great for that.
I am having trouble with bounding box movement and control (1:15). The problem is I can't resize and move it freely. It sometimes works if I am lucky: one moment I can move the bounding box, the next moment it's gone. I also have a problem at 5:15: when I inpaint, it generates only noise. It won't generate the desired prompt no matter how low or high I set the denoising strength. A third problem is at 5:50: when I try to extend the picture, it won't match the color scheme and theme of the picture; instead it creates a completely new picture. I am using default settings and still getting these issues. These three issues are messing me up. Please help me out.
@lucvaligny5410 1 year ago
Hi, and thank you for this great tool and the nice videos, really inspiring. Are there specific settings for outpainting that avoid the visible line/separation between the original image and the newly created part? Is there a doc somewhere where all the controls are explained one by one, especially for the canvas tab? Thanks again.
@invokeai 1 year ago
Typically you want to use patchmatch infill and a higher denoising strength. You can always inpaint that area after expanding the image, as well.
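For anyone wondering what infill actually does: before the denoise pass, the empty outpainted region is pre-filled with plausible pixels so the model has something better than flat gray to work from, which is what causes hard seams. A rough stand-in sketch using OpenCV's inpainting (InvokeAI's patchmatch infill uses the PyPatchMatch library rather than OpenCV; this only illustrates the idea):

```python
import cv2

# img: the expanded canvas; mask: uint8 image, 255 where the new
# (empty) region is. Pre-filling the hole gives the denoiser plausible
# colors to start from instead of a flat fill.
img = cv2.imread("expanded_canvas.png")
mask = cv2.imread("new_region_mask.png", cv2.IMREAD_GRAYSCALE)
prefilled = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("prefilled.png", prefilled)
```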
@vladimmi 1 year ago
Probably a dumb question, but... why is the UI in the video a bit different from the UI I see in the local app? Like the invocation queue, multiple prompt fields... Are those work-in-progress features, or maybe a special version for some SaaS?
@invokeai 1 year ago
You'll see this UI when 3.2 is released broadly. Some of these features have been released to our SaaS customers a bit early; however, they're also available to OSS users in our 3.2 release candidate (RC meaning it's still being finalized as a "release" and may have some bugs/quirks).
@vladimmi 1 year ago
@invokeai Thanks!
@relaxandlearn7996 1 year ago
It would be nice to have an option that saves an individual mask + bounding box position and ratio together with a specific prompt, so we can make a prompt for specific mask + bounding box layers. After that, it would be nice if, when we select a layer to work on, the prompt we used there pops up automatically, or an empty prompt box for that layer. It sounds complicated, but it isn't.
@eartho 1 year ago
Yes, this is the dream of semantic segmentation. You're already describing the scene, so the AI should be able to auto-mask and tag all the areas from the prompt.
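That capability already exists in the open: CLIPSeg, for example, turns prompt phrases into soft masks. A minimal sketch via Hugging Face Transformers (one possible approach; not something InvokeAI ships):

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("generation.png").convert("RGB")
phrases = ["hair", "face", "background"]  # nouns pulled from the prompt

inputs = processor(text=phrases, images=[image] * len(phrases),
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits  # one low-res heatmap per phrase

masks = torch.sigmoid(logits)  # soft masks, ready to threshold and upscale
```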
@tomschuelke7955 1 year ago
And something more: on the left side, down below, the model icon... what's that, something new? I want to have it...
@ReidDesigns 1 year ago
Great tutorial, thanks. Question: can I, for example, do an initial drawing and/or overlay in another image editor instead of using the infill paintbrush native to the Invoke canvas? Just because if I could use a stylus or Apple Pencil, would that be better? Or nah? Thanks
@invokeai 1 year ago
Yep!
@mada_faka 1 year ago
Wow, thanks for the video. Please keep it more powerful than Midjourney!
@q07906 1 year ago
Really helpful video!
@kaipnors 1 year ago
Is it possible to get the 'QR Code Monster' ControlNet working with InvokeAI?
@invokeai 1 year ago
Yep. Just import the Hugging Face repo using the model manager.
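For reference, the repo usually meant here is monster-labs/control_v1p_sd15_qrcode_monster (name from memory; double-check it on the hub). Loading it directly with diffusers, outside InvokeAI, looks roughly like:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Repo names assumed; verify on the Hugging Face hub before relying on them.
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```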
@ZeroCool22 1 year ago
Does InvokeAI support inpainting with any SDXL model? 😮
@invokeai 1 year ago
Yep!
@aknaysan6666 1 year ago
Hello, I am an industrial designer. Can we use the same methods (with a suitable model), for example, in product design? Thanks
@systemfehler 1 year ago
Short question: are inpainting models required for better results? I can't find any for my favorite checkpoints.
@havemoney 1 year ago
Will there be a version for AMD on Windows?
@blisterfingers8169 1 year ago
Didn't turn ControlNet on when doing the initial wizard. ♥
@invokeai 1 year ago
Indeed. RIP
@ceyhunakar1450 1 year ago
Is there a free way to use InvokeAI in a Kaggle notebook? Free Colab isn't working.
@AlterEgo82 1 year ago
Were you not supposed to enable ControlNet when doing the desert mage?
@invokeai 1 year ago
💀 yes.
@tiagotiagot 1 year ago
Would crossfading the denoising strength outside the mask, from zero to the set value (outside-in, like using the blurred mask to modulate the denoising level), let you take care of the seams in a single pass, without needing a separate pass to blend the seams? Or is it not possible to perform generation with per-pixel denoising levels? Does the blending look wrong no matter which transition curve you use?
@invokeai 1 year ago
Unfortunately, we can't do pixel-level denoising.
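For context, what inpainting pipelines typically can do is blend latents with a soft (blurred) mask after every sampler step, which approximates the crossfade idea at latent rather than pixel resolution. A pseudocode-level sketch, not InvokeAI's actual pipeline:

```python
import torch

def blend_after_step(denoised: torch.Tensor,
                     source_at_t: torch.Tensor,
                     soft_mask: torch.Tensor) -> torch.Tensor:
    """Crossfade freshly denoised latents with the original image's
    latents re-noised to the current timestep.

    soft_mask is a float tensor in [0, 1] at latent resolution:
    1 = fully regenerate, 0 = keep the original, feathered in between.
    Applied once per sampler step, this yields a smooth transition band
    instead of a hard mask edge.
    """
    return soft_mask * denoised + (1.0 - soft_mask) * source_at_t
```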
@tomschuelke7955 1 year ago
Oh hoooo, first... what are these funky new buttons beside Invoke? Second... it seems you have an Invoke "render queue". Third, I want this "IP Adapter" in the ControlNets... isn't it an image prompt? Pleaseeeeeeee
@ffffennek 11 months ago
"Look at this beautiful visualization of infill techniques! Oh, didn't I tell you how to do even a basic infill? Well, just look at the pretty colors then!" - 'Fundamentals', my a**.
@Ubuntujukk 11 months ago
You use a lot of hotkeys. I'd like a video on that.
@drecksyutube 9 months ago
I don't know... the inpainting system in Automatic1111 (or rather Stable Diffusion) is way better, in my opinion. Still, thanks for the helpful video.
@Afrasayabful 1 year ago
"What is this"
@tomschuelke7955 1 year ago
This is great
@WifeWantsAWizard 1 year ago
Nice video. Here are my thoughts. (8:08) "...back into the original image with all of that extra detail." *takes deep breath* 256x256 is always 65,536 pixels. Period. There is no "extra detail"; it is 65,536 every time. By dumping from 589,824 back down, you've removed 88.9% of any new pixel data. His face might be different, but there's no reason you couldn't have just added noise (like "*#%^&$&*") to the prompt to perturb the results at 256x256 instead, saving time and electricity. Also, he's no longer smiling. (21:29) "We're going to have to come back to fix that..." So, ***more*** work? Even Genmo, which is a horrifying product, guarantees the integrity of its masking boundaries. (26:50) "What is this? Where did this guy come from?" If you step back to the 26:00 mark: of the other person on the stage that you wiped out with your "C" brush, you a) didn't get all of her where the backdrop meets the stage, and b) you forgot to remove her shadow (slight as it was), so the AI sees the shadow and assumes something should be there. (30:49) Alright, now THAT is really cool. (34:32) There's no shame in using two tools. You can export to Photopea, select by color (white), deselect the edges to leave just the areas around the kid, delete that, then re-import, and cut this proposed workflow time in half.
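The pixel arithmetic there checks out, for anyone curious (a quick sanity check, assuming the working upscale was 768x768, which is what 589,824 pixels implies):

```python
small = 256 * 256            # 65,536 pixels
large = 768 * 768            # 589,824 pixels, the upscaled working size
discarded = 1 - small / large
print(f"{discarded:.1%}")    # -> 88.9% of the upscaled pixel data is dropped
```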
@tomschuelke7955 1 year ago
When you rescale the small 256px box area to a bigger size, the AI is much, much better at creating realistic "details"; that's why you scale it up first, even if you shrink it back afterwards. The point is that what's left is much better and more correct than if you had just tried it at the small resolution, because the AI was trained to make better images at that size. So mathematically you are right, but nevertheless, the final result couldn't be achieved without scaling up first.
@tomschuelke7955 1 year ago
At 21:29, I think that is what happens if you paint your mask too near the box, because then there are no pixels left to blur first and afterwards use for the small img2img pass. If you avoid getting too close to the dotted box area, this doesn't happen. It could still be improved, but it was not the fault of the program.