NEXT-GEN MULTI-CONTROLNET INPAINTING! You’ve NEVER SEEN THIS BEFORE!

111,636 views

Aitrepreneur

A day ago

Comments: 231
@Aitrepreneur · 1 year ago
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE for more AI updates. Thx
@Leto2ndAtreides · 1 year ago
Have you seen Corridor Crew's video "Did We Just Change Animation Forever?" That seems super worth experimenting with.
@dimitrishow_D · 1 year ago
I have some cartoon characters (see my profile). I tried training Stable Diffusion with them, but it can't seem to replicate them... the training only seems to work with things it created itself. Am I doing something wrong, or is it that my character is unique and there isn't enough reference? When I did kinda get it to work, I think with Astria, it drew it like a 5-year-old. If it is possible to train with your own 2D characters, PLEASE make a video about that.
@long20014 · 1 year ago
Hi, could you share the Excel file about the colors?
@MissAIchan · 1 year ago
How did you enable the guess mode?
@richardgreaney · 1 year ago
This is getting so mind blowing now. There are just so many different possibilities for how to make a good image now. I almost feel like I need to take a step back for a bit, and think about what I would really like to create, then see what technologies are now available that could help me achieve this.
@kyoko703 · 1 year ago
With so much "tuning" and manual work, it's pretty safe to say that these "generative" works are more human than anything else, which demonstrates that these are just tools at the end of the day.
@dv6165 · 1 year ago
In this case, they're very cumbersome tools for a pretty simple operation.
@gara8142 · 1 year ago
@@dv6165 True, but only if we take it at face value. If you wanted to change an image from realistic to, I don't know, Helltaker style manually, it would take way longer. As with most tools, it's a matter of use cases.
@user-zi6rz4op5l · 1 year ago
With generative AI you never get the image you wanted; you end up with some image that looks pretty after a million random parameter tweaks. Can you get a deterministic result on the next attempt? No. It would also be better if these models churned out 2K+ resolution, since upscaling is again a million guesses on the parameter buttons 😔😔
@markusm108 · 1 year ago
@@user-zi6rz4op5l You essentially describe what a customer feels when they hire an artist or designer. What these tools do is put you into a different role - not the artist but the art director.
@vi6ddarkking · 1 year ago
We are getting so much control so fast. I honestly don't think even 6 months ago anyone thought we would have advanced this much this quickly.
@wakegary · 1 year ago
And it's growing exponentially. Video is a month or two away (or less). Then come the apps, the corporate control, the tightening laws on models, the underground trade, the bootleg versions, blah blah. It will be fun to watch - we're in the wild wild west phase right now.
@cdreid9999 · 1 year ago
This reminds me of when personal computers became available and a lot of us learned to program
@Mocorn · 1 year ago
I can see a quality difference in my output folder from just two weeks back! This is crazy.
@Dex_1M · 1 year ago
Dude, I need to watch this video 10 times with some personal experimentation; this new multi-ControlNet is a masterpiece.
@kinnectar820 · 1 year ago
Holy shit, so much progress in AI image gen in so little time. My dreams of doing my own graphic novels using ML are pretty much fully within grasp using these techniques. What a time to be alive!
@Gxbbzee · 1 year ago
I've been looking at separate tutorials for the past few days, and even those combined didn't give me as good and precise information as this single video did... amazing! Very much appreciated!
@Real28 · 1 year ago
I called it. 6 months ago I said that while it seems like this kind of tech should take a year, it would take half the time. There are just too many people grinding away at these tools simultaneously. Just crazy.
@johannvandebron986 · 1 year ago
Nice! You have the best Stable Diffusion / Automatic1111 content on YT. Thank you so much for letting us know all this!
@Mocorn · 1 year ago
That trick to crudely draw lines to generate edge lighting worked way better than I thought it would. How on earth does it bleed onto the nose like that? This is some wild shit!
@dreamzdziner8484 · 1 year ago
For someone who has been playing with ControlNet from day one just to blend a foreground character perfectly into a desired background, this video is a treasure. I tried many combinations but never ever thought of that inpainting trick you showed. You are the absolute best, dear friend. 👌💪👍👏🧡
@jurandfantom · 1 year ago
Have you managed to make it work? For me it changes the whole picture, and "masked only", on the other hand, makes things incorrect.
@DESIGNISTASTY · 1 year ago
This is why I love SD: you have total control over what you want to do and how your image will turn out.
@uk3dcom · 1 year ago
I have been trying for hours to do the things you have now solved. Thank you for your experimentation and sharing. The fact that you and a couple of other guys on YouTube are feeding off each other and pushing this forward to simplify our efforts is really appreciated. Please keep up the good work.☺
@설리-o2w · 1 year ago
Can't wait WHAT A TIME TO BE ALIVE AND FIRST!!!
@fnorgen · 1 year ago
Now this is the level of control I was badly missing back in January!
@oblivionronin · 1 year ago
15:41 That image is absolutely insane, I love it!
@winkletter · 1 year ago
This gives me an idea for an experiment: comics made with multiple ControlNets. One for the frame, add characters in with OpenPose, then segmentation for specific objects.
@f0kes32 · 1 year ago
How would you draw the same character in different poses?
@anastasiaklyuch2746 · 1 year ago
@@f0kes32 see previous video ;)
@M.I.F.. · 1 year ago
CharTurner (on Civitai).
@wakegary · 1 year ago
I just made a Steve Urkel in the style of EC Comics (Tales from the Crypt) and I'm very weirded out. AKA it's awesome.
@Strangepaper · 1 year ago
@@wakegary What model for EC Comics?
@schenier · 1 year ago
I find that all this shows this tool can be used by real artists to build up an image little by little, from the background to the pose, all the way to the lighting.
@coda514 · 1 year ago
Controlnet is amazing, and thanks to you I understand it more and more every day. Thank you from the bottom of my heart. Sincerely, your loyal subject
@Artishtic · 1 year ago
Thanks for the epic tutorial! If only there were a Photopea version of After Effects too.
@travelwell6049 · 1 year ago
Interesting. I have not seen anyone talking about controlnet. So it’s great to see. Thanks.
@bentp4891 · 1 year ago
Amazing. Probably the most impressive new feature since DreamBooth.
@human-error · 1 year ago
Best ControlNet video on the net. Thanks, NON-HUMAN!
@MrBlitzpunk · 1 year ago
I haven't tried multi-ControlNet, but with a single ControlNet, if you use the HED model it's already good for changing the style of a single image; it works kind of like a depth map but preserves the detail and lines of the image.
@TrancorWD · 1 year ago
Very cool seeing the multi-ControlNet stuff. The last few days I've been adding support for interactive visualization of ControlNet outputs as a pre-pre-processor, since it's annoying poking at values without knowing what the outcome will actually be. And I figure, if I'm doing that for fun, other people are probably doing it as well on the sd-webui-controlnet extension team, because the capability is all there for feedback from the ControlNet preprocessor in A1111; it's just a matter of connecting up the hooks for it.
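For anyone who wants that kind of preview without waiting for extension hooks, a minimal local sketch (assuming OpenCV and Matplotlib are installed; the file name is a placeholder and this is not the sd-webui-controlnet code) could look like this:
```python
# Quick local preview of a Canny pre-processor output, so thresholds can be tuned
# before the map is ever handed to ControlNet in A1111.
import cv2
import matplotlib.pyplot as plt

def canny_preview(path: str, low: int = 100, high: int = 200):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)  # same kind of edge map the "canny" pre-processor produces
    plt.imshow(edges, cmap="gray")
    plt.title(f"Canny preview (low={low}, high={high})")
    plt.axis("off")
    plt.show()
    return edges

canny_preview("input.png", low=80, high=180)
```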
@PhilippSeven · 1 year ago
You can inpaint a character onto the background much more easily: there is a tab for that - "Inpaint upload". Just use the depth pic as a mask. No need to draw anything, and the result is much cleaner.
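As a rough illustration of that workflow, here is a small sketch (Pillow/NumPy assumed; the threshold and file names are placeholders) that turns a depth pre-processor image into the black-and-white mask the "Inpaint upload" tab expects, with white marking the area to repaint:
```python
# Turn an 8-bit depth map into a binary mask for A1111's "Inpaint upload" tab.
# Assumes nearer objects are brighter in the depth map; tweak the threshold to taste.
import numpy as np
from PIL import Image

def depth_to_mask(depth_path: str, mask_path: str, threshold: int = 128, invert: bool = False):
    depth = np.array(Image.open(depth_path).convert("L"))
    mask = (depth > threshold).astype(np.uint8) * 255  # white = repaint this region
    if invert:
        mask = 255 - mask  # flip if you want to repaint the background instead of the subject
    Image.fromarray(mask).save(mask_path)

depth_to_mask("depth.png", "mask.png", threshold=120, invert=True)
```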
@krz9000 · 1 year ago
Dude... what a treasure chest of a video!!
@bolbolzaboon · 1 year ago
This video right here is worth so much that I'm gladly going to join the Patreon.
@Nickknows00 · 1 year ago
Awesome, man!!! So much new stuff I didn't realise you could do!
@evylrune · 1 year ago
Damn, the sketch lighting trick is pretty cool.
@autonomousreviews2521 · 1 year ago
You never waste my time :D Thank you for the depth and detail.
@garen591 · 1 year ago
That's really cool. Could you talk about how to change an object's perspective too? Say you want a front, top, side, or isometric view of an object.
@lioncrud9096 · 1 year ago
Damn, this was like 20 tutorials in one! Awesome content, Mr. Aitrepreneur.
@cruhstin · 1 year ago
Fantastic tips! It would be awesome if you add checkpoints/timestamps to this video so I can quickly go to a spot in the video if I want to review a specific trick you showed off. Keep up the great work 😀
@AscendantStoic · 1 year ago
Fantastic collection of tips & tricks, nice work ;)
@hexemeister · 1 year ago
Your channel is awesome, but I am overwhelmed with so much info. Is there a newbie playlist to start from the beginning and catch up?
@joshuajaydan · 1 year ago
You sir are insanely talented. Thanks for sharing.
@maddercat · 1 year ago
Wow, I had no idea this was possible, that's insane... I gotta wrap my head around it; it's difficult even seeing you do it. lol
@benjamininkorea7016 · 1 year ago
Brilliant. The segmentation index is completely crazy, and I never would have found it. I gave up on segmentation almost immediately as useless, but I was wrong! Do you have any workflow yet for altering the appearance of existing human faces not from prompts? Specifically, I want to use my face in a new scene, but not with the same lighting it was shot under in my living room - adding god rays etc. to it.
@muuuuuud · 1 year ago
Unless I'm mistaken, I think LoRA is your answer.
@Mocorn · 1 year ago
@@muuuuuud Or DreamBooth, textual inversion, or a hypernetwork.
@sunlightg · 1 year ago
I agree with the previous commenter. Just train your own LoRA model with your own face and then use it as you wish. There is a good video on this channel about training LoRA, so just follow the instructions. Just don't forget to read the comments under the video, because there are some important things to add :D
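For what it's worth, once such a LoRA is trained, the generation step can also be scripted with diffusers. A hedged sketch (the model ID, LoRA file name, and the "myface" trigger word are placeholders, and the exact loading call can vary between diffusers versions):
```python
# Sketch: render a LoRA-trained face into a new scene with new lighting (god rays, rim light).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras", weight_name="myface_lora.safetensors")  # LoRA trained on your own photos

image = pipe(
    prompt="photo of myface person standing in a forest, volumetric god rays, dramatic rim lighting",
    negative_prompt="flat lighting, blurry",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("relit_face.png")
```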
@ashokp9260 · 1 year ago
Hey, this is so incredible... tools for infinite creativity at zero cost. Also, I feel I am light years behind if I forget to follow developments even for a week.
@devnull_ · 1 year ago
Like you didn't already have the possibility of "infinite creativity" with pen and paper or Photoshop.
@ashokp9260 · 1 year ago
@@devnull_ Like everyone on earth is as artistic as da Vinci.
@thethiny · 1 year ago
Why did you turn off the preprocessor for OpenPose while making the man dance in the living room?
@aaronhhill · 1 year ago
"Because I'm a madman." Hahahahaha! Love you man, you're great!
@TheAiConqueror · 1 year ago
Troll king 👑 No, seriously, I love your videos and your workflow tricks. I could watch you for hours 😁👍
@HolidayAtHome · 1 year ago
Remember the time when txt2img with 1.4 was the only thing we could do? =D
@estrangeiroemtodaparte · 1 year ago
I remember discovering Disco Diffusion and thinking it was magical! lol Things are changing fast!
@IceMetalPunk · 1 year ago
Yeah, less than a year ago 😂
@luke.perkin.inventor · 1 year ago
Great video, so many useful tips!
@OriBengal · 1 year ago
whoa! Totally stands out from the other guys :) -- Great techniques!
@Apothis1 · 1 year ago
Thank you so much for all these helpful vids; makes this super easy to understand. Really appreciate it.
@ColoNihilism · 1 year ago
Awesome! Where do we get some lighting exemplars?
@vi6ddarkking · 1 year ago
Would it be possible to use multiple hypernetworks with multi-ControlNet? To, for example, compose an image with multiple different characters in specific outfits?
@moneyjuice · 1 year ago
It's insane how quickly the AI tech is evolving.
@6666daf · 1 year ago
Really good techniques.
@mariokotlar303 · 1 year ago
Thank you so much! This was very helpful!
@karl-heinzbiederbick87 · 1 year ago
Wow! Thank you for sharing.
@victorwijayakusuma · 1 year ago
Thank you for this! You are wonderful! By the way, how do you do the Excel thing, or is there any other program to see the segmentation list?
@Kryptonic83 · 1 year ago
Awesome collection of tips; I've been loving playing around with ControlNet lately. Keep up the great work!
@lucstep · 1 year ago
Really cool! But is it possible to combine photos as shown in the video, but not originating from SD (for example, a real photo of a background with a real photo of someone)?
@Tsero0v0 · 1 year ago
What if we use multiple ControlNets, all of them with guess mode or without it? Is there any difference?
@asciikat2571 · 1 year ago
Craaaaaazzzzzzy! Love it, You are the strongest
@rincondesalva · 1 year ago
Awesome video!... Could you please (in any ControlNet video) tell us what requirements (especially VRAM) we need in order to run this addon properly locally?... I guess my RTX 3070 mobile with 8 GB is almost incapable...
@haan3388 · 1 year ago
Do you have an in-depth vid on the inpainting trick you explained around 5:00?
@alexb6969 · 1 year ago
You're the best! This stuff is so cool! I can't stop admiring this new tool :)
@KonstantinRozumny · 1 year ago
Great video! How do you add two different characters to the same background?
@RyanBigNose · 1 year ago
Can you do a tutorial on "Inpaint batch mask directory (required for inpaint batch processing only)"? There are no tutorials for it.
@fredmcveigh9877 · 1 year ago
Very informative and inspiring. Thank you.
@zhizui · 1 year ago
OMG!! This is really useful! One viewing probably isn't enough to learn everything in it.
@gameswithoutfrontears416 · 1 year ago
Thanks. Wow. Pretty cool stuff. So many possibilities 👍
@Actual_CT · 1 year ago
Finally, 3D texture workflow assistance... with precision.
@Eagleshadow · 1 year ago
Super useful, thanks!
@leowei771 · 1 year ago
Good lord, this is moving so fast that I can barely keep up with all this new stuff.
@cienciaemutopia · 1 year ago
Nice tips and video, from Brazil.
@hatonafox5170 · 1 year ago
You sir are my hero!
@OndrejL · 1 year ago
Wow, bruh, great discoveries, and thanks for the share.
@jimdelsol1941 · 1 year ago
Absolutely amazing. Thank you.
@megumin4625 · 1 year ago
The multi-ControlNets are now tabbed. Niiice.
@GerwaldJensRadsma · 1 year ago
WooooOOOooooW Good as always!
@mistercapitale · 1 year ago
Aitrepreneur, is it possible to take a logo and place it on a shirt with any of the ControlNet models?
@muuuuuud · 1 year ago
Thanks K for the powerful knowledge, have a wonderful weekend! ^__^
@lntcmusik · 1 year ago
This is great! Thanks for sharing 🙂 Do you have an idea of how to create an animated sticker for, let's say, any messenger using Stable Diffusion? That would be interesting.
@danilo_88 · 1 year ago
This is pretty cool. I love your videos. You could use more models other than just anime and cartoon.
@velly027 · 1 year ago
I don't know if it's possible yet, because I've lost track of the newest developments. I have 2 trained models of 2 different characters. I want to place both characters in one image with a chosen background. Both characters should be arranged with OpenPose. Then the whole image should be created with diffusion in one step, so that the lighting and the shadows are correct over the whole image. Is this possible yet? Maybe I must wait another week :)
@goldenknowledge5914 · 1 year ago
It's crazy. So many things to learn just for one feature. It is indeed the age of AI.
@doobertoob4266 · 1 year ago
Lol, finally. Guess mode is pretty much what Midjourney does by default.
@jtmcdole · 1 year ago
Now we just need a node-based flow plugin to make the rotoscoping easier.
@Nickknows00 · 1 year ago
Now all I want is to have SD generate my character in new poses.
@FleischYT · 1 year ago
Great tip, appreciated.
@Dex_1M · 1 year ago
I think using 2 ControlNets - one with canny or depth and one with the pose - and playing around only with the pose while not changing the canny or depth will let you control the character's movement while keeping the details. I assume this is true, and if you can get this right, you can make a lot of images and thus have an animation.
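That stacking idea can also be expressed outside the UI. Here is a hedged diffusers sketch (model IDs, prompts, and conditioning images are placeholders) where the canny image stays fixed to preserve the character and only the OpenPose image changes per frame:
```python
# Two stacked ControlNets: canny keeps the character's details, OpenPose drives the pose.
# Swap pose_image per frame while keeping canny_image fixed to approximate an animation.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("character_canny.png")  # fixed conditioning: preserves details
pose_image = load_image("pose_frame_01.png")     # varies: one OpenPose skeleton per frame

frame = pipe(
    "a character dancing in a living room, best quality",
    image=[canny_image, pose_image],
    controlnet_conditioning_scale=[0.8, 1.0],  # per-ControlNet weights, like the UI sliders
    num_inference_steps=25,
).images[0]
frame.save("frame_01.png")
```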
@fernandoz6329 · 1 year ago
Awesome tricks
@in2thedark · 1 year ago
You are the man! 🤖 thanks for the tips
@mandapanda5252 · 1 year ago
I have been going through this one by one, not getting any of the same results, feeling a bit defeated. I was most excited about transferring a character onto a background, but no matter what I do it changes the character completely. I have adjusted the denoise and weight so many times. Perhaps it struggles with full body characters?
@cameron7814 · 1 year ago
If you figure this out, please let me know - I'm also having trouble transferring characters into a background. The depth and canny models seem to be working fine, but the character always shows up with no detail and almost transparent against the background, no matter which settings I change.
@thanksfernuthin · 1 year ago
Do we have anything to aim the eyes with yet? It's driving me nuts! All this control over poses and other things and my characters are always looking at the damned camera. I am super happy with everything they've added. So cool. Being able to say where the character is looking would be huge.
@EpochEmerge · 1 year ago
Get the final image, copy the HED preview, redraw the irises in the HED map, and then use it with ControlNet.
@EpochEmerge · 1 year ago
If you can't draw, get a photo of yourself from the same camera perspective, Photoshop your eyes onto the final image, and then again HED + img2img with the final image and denoise somewhere around 0.4-0.5.
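To make the last step of that suggestion concrete, a minimal low-denoise img2img pass in diffusers might look like this (model ID and file names are placeholders; the video itself works in the A1111 UI rather than a script):
```python
# Sketch: after pasting/redrawing the eyes in an editor, a low-strength img2img pass
# blends the manual edit back into the render without changing the composition.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

edited = load_image("character_eyes_edited.png")  # image with the manually repositioned eyes

result = pipe(
    prompt="portrait, looking to the left, detailed eyes",
    image=edited,
    strength=0.45,  # low denoise, as suggested above: keep the composition, just clean up the edit
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]
result.save("character_eyes_fixed.png")
```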
@sunlightg · 1 year ago
I would recommend using the inpaint sketch option to change the pupil location, with the "only masked" option and low denoise.
@thanksfernuthin · 1 year ago
@@EpochEmerge I don't know anything about HED. You got me all excited!!! I thought it might be like the pose one but for the head and eyes. Looks kind of like a sketch. To get the best stuff I pose a scene in Poser and render an image to bring into img2img complete with premade depth map. So the eyes are pointing exactly where I want them from the start. Or did I try the wrong thing?
@EpochEmerge · 1 year ago
@@thanksfernuthin I think with ControlNet there are sooo many workflows; we are only scratching the surface. From my experience: depth maps for precision, normal maps for more 'artistic' and free output, and likewise HED gives more "freedom" to Stable Diffusion than canny.
@justinyuvilla8944 · 1 year ago
How can you feed a photograph in as the reference with the exact lighting style you want, and have the AI match that very same lighting on your original image as well?
@ItsBrody · 1 year ago
Can you add chapters to your videos? Sometimes I want to come back to specific parts but I can't remember where they are
@philipZhang86 · 1 year ago
Great tutorial~~
@cesar4729 · 1 year ago
You missed the best sketch trick, where you use colors to add things or transform the picture. For that you need the same picture in ControlNet, and then play around. It's way more powerful than the segmentation trick.
@resonantone3284 · 1 year ago
Installed it yesterday, and boy... I'm still struggling to figure it out. If it's going to do what I think, wow... Why do all the cool toys have to come out when I'm on a deadline?!??!? It's a plot I tell you!
@jurandfantom · 1 year ago
Interesting thing: ControlNet messes up the upscale extension. Even when not active, it still breaks SD upscale and the Ultimate SD Upscaler. Luckily, just turning the extension off fixes everything :)
@earthpond8043 · 1 year ago
Bro you keep blowing my mind damn gj
@flonixcorn · 1 year ago
Very nice!
@Krougher · 1 year ago
That's insane!
@nganpoiis8961 · 1 year ago
Is there a ControlNet rig available for Blender?