OK, no joke: having a low-end graphics card (RTX 3050), I can't run Blender fast enough to pose. However, the method you provided using the pose website was a savior. Thank you so much for your help, and I gave the vid a like
@LevendeStreg 1 year ago
Thank you so much, Thai Do - I really appreciate it, and I'm glad I could help. Thanks for watching and commenting!
@eddiedixon1356 1 year ago
Awesome! Thank you!
@thaido1750 1 year ago
@Capitan Nerevar well yeah, when comparing it to the 3060 and 3070
@LevendeStreg 1 year ago
@Capitan Nerevar yes, you need at the very least an RTX 3090 Ti to run the extension and Blender. And that card is almost impossible to find.
@thaido1750 1 year ago
@Capitan Nerevar welp, I beg to differ. I just upgraded to an RX 6700 XT 12 GB (equivalent of a 3070) and the margin of difference is vast. And we're talking about rendering here: you can't render fast or in detail when you only have 4 GB of VRAM - you need at least 8 GB of VRAM to render something. Coming from a guy who used to own an RTX 3050 4 GB, it's not gonna do any good in the long run. Another short story is that where I'm from, the price of the RX 6700 XT is similar to the RTX 3050 8 GB version, so why shouldn't I pick performance similar to a 3070 for the price of a 3050? I'm on a budget here, so I only grab what I can, and now I can use Blender without worrying that my graphics card is going to combust. Besides, Blender does support AMD, so that's a bonus
@energeiai99 10 months ago
cool - thanks
@LevendeStreg 10 months ago
You're welcome. Thanks for watching.
@Aitrepreneur 1 year ago
Great Workflow!
@LevendeStreg 1 year ago
Thank you kindly. I'm awestruck 🙌
@traida111 1 year ago
God damn, you are made for tutorials, lady. A breath of fresh air to find someone with passion and knowledge
@LevendeStreg 1 year ago
Thank you kindly, @traida111, I really appreciate it! ControlNet doesn't seem to work on SDXL though, just if you were wondering. Thanks for watching and commenting!🙌
@traida111 1 year ago
@LevendeStreg no worries :) I am using Automatic1111; I find that for me the colour stick men don't translate difficult poses. Tonight I plan to try using a reference photo with the pose already in position, then I might try the canny line art. I will select the preprocessor, then the button there can output a preview, so I can check it and adjust the strength until the canny lines are suitable. If I can get it accurate and an easy process, I will be happy. I tried multi ControlNet, with OpenPose for the body and a depth map for hands and feet, but I couldn't get it to work well
@edsonjr-dev 10 months ago
💜💜💜
@slashkeyAI 1 year ago
Happy to have found your channel.
@LevendeStreg 1 year ago
Thank you kindly. I really appreciate it. And thank you for watching and commenting 🙌
@munyunu 1 year ago
You are such a nice person. Thank you for the video
@LevendeStreg 1 year ago
Thank you kindly, Min Yunu🙌 and thanks for watching. I really appreciate it!
@ujjvalw2684 1 year ago
That's something new. An AI artist.
@LevendeStreg 1 year ago
Hahaha. Yup, precisely! 🙌
@JohnVanderbeck 1 year ago
The real problem with OpenPose in ControlNet is that it doesn't project in 3D properly. There is no perspective in the "bones" it uses, so it has no way of knowing at what depth they are. It is a significant disadvantage, despite how amazing it is.
@LevendeStreg 1 year ago
Yes, that is so well put. And that's why I suggest the depth model instead. Thank you for commenting and being articulate and to the point. I should have pointed that out more clearly in the video🙏
@MultiHeheboy 1 year ago
Stable Diffusion itself doesn't understand this and isn't trained that way, so it's not really possible to input 3D information.
@nicolasgabriel3402 1 year ago
Have you found a way around it? I was thinking about the best workflow to pose with some perspective or depth, and I found this video and not much more...
@thechosenone729 1 year ago
This is how I imagine a smart artist... instead of constantly complaining about how it's going to take your job, taking it and using it to your own advantage to make it even better. Good video btw.
@LevendeStreg 1 year ago
Thank you kindly Peter. I really appreciate it. And thanks for watching. So glad you like the video🙌
@Smashachu 1 year ago
Huh.. I've never been much of an artist or really ever enjoyed drawing beyond trying it and being disappointed. However.. I love Stable Diffusion and I love everything about being able to take whatever idea I have in my head and transposing it onto paper with speech. Maybe I'm not a real artist, but this.. has become an obsession
@LevendeStreg 1 year ago
Hahaha that's wonderful. And yes, making images with AI is highly addictive😜... and what is a real artist anyway? It's so great to hear that you found your inner creative!
@erikdong 1 year ago
Brilliant video. Thank you for sharing!
@LevendeStreg 1 year ago
Thank you kindly. I really appreciate it. And thanks for watching.🙌
@peacefusion 1 year ago
You know she's crazy good when she's wearing a hand brace.
@LevendeStreg 1 year ago
Hahahahaha... It's a glove for my Wacom Cintiq Pro 32-inch. Otherwise my hand sticks to the screen and that ruins the flow when I'm drawing.
@HieuTran-rl2qw 1 year ago
Thanks a lot for the amazing tools!!!
@LevendeStreg 1 year ago
You're very welcome! Thanks for watching!
@n8wn8wn8w 1 year ago
You are a generous soul. Thank you❤
@LevendeStreg 1 year ago
Thank you kindly - and thank you for watching. I really appreciate it 🙏
@thomasmann4536 1 year ago
I love how excited people get about being able to pose inside Stable Diffusion and not having to use Blender. As a Technical Animator (the person who creates the rigs that allow you to move a 3D model around) I have no problem with using Blender, and I just find the PoseX UI disgusting :D
@LevendeStreg 1 year ago
Haha. Thanks for commenting and watching. I totally get your point of view. I am not a fan of OpenPose, and not of PoseX either. I prefer the canny and depth models😜 And totally cool with a job where you rig characters🙌
@ramiropiedrabuena3892 1 year ago
Design Doll works fine.
@LevendeStreg 1 year ago
Thanks for the recommendation for Design Doll. Good to know!🙌
@aaronhhill 1 year ago
Being a HUGE fan of Aitrepreneur I have also subscribed to your channel. Glad I ran across it! Thanks for your work.
@LevendeStreg 1 year ago
Thank you kindly - I really appreciate it. Thanks for subscribing and watching!🙌
@SaintMatthieuSimard 1 year ago
As the US Copyright Office decided that they would not register copyrights on unique material generated with AI, I decided I would not use it... But it's still interesting to learn about. It's not so different from applying a sepia filter to a photo on Instagram in terms of "it's still your own". They should review their position using common sense.
@LevendeStreg 1 year ago
I totally agree, Matthieu. And I think that it will change over time, when people understand the process of working with it.
@PADATWO 1 year ago
I have just discovered your channel, which seems very helpful. But I need to improve my English a bit! Thanks for sharing 😊
@LevendeStreg 1 year ago
Thank you so much for watching and commenting. Cheers🙌
@Neblina1985 1 year ago
Amazing video! I always love the way you explain things
@mariaprohazka2730 1 year ago
Thank you kindly, José. And thank you for commenting and watching🙌
@LevendeStreg 1 year ago
Thank you kindly, José. And sorry. I answered that one with my private account😬
@dodd15 1 year ago
Your workflow is very clean and polished, best one I have seen so far.
@LevendeStreg 1 year ago
Thank you kindly. I really appreciate it. And thanks for watching🙌
@dougmaisner 1 year ago
Cool stuff, I'm going to try these steps!
@LevendeStreg 1 year ago
Thank you kindly. Hope it works well for you. And thank you for watching 🙌
@charlesmartel3995 1 year ago
Great news!
@LevendeStreg 1 year ago
Thank you kindly. And yes. Great news! 🥳
@lsycxyj 1 year ago
Still hard to get the hands fixed. Are there any models with detailed hands?
@LevendeStreg 1 year ago
I'll try to look into it. And then I'll get back to you🙌
@AndysTV 1 year ago
Is it possible to generate a character but also its decomposed body parts? How do I set up the prompt? Or the base image?
@LevendeStreg 1 year ago
Yes, that would be possible. That's a great idea for a video! Thank you for bringing my attention to it. You could use OpenPose for that. And just zoom in on the different body parts.
@zimnelredoran9985 1 year ago
Hi, thanks for the great video, ControlNet is beyond fascinating :) Just a question: are the steps we see already done (like getting a pose done in the third tab of multi ControlNet) explained in another video? Another question: how do you send just the hands to the ControlNet tab? Thanks :)
@LevendeStreg 1 year ago
Hi there and thanks for your comment and question. I think I'll be doing a video where I explain it in a little more detail. But basically you download the png file to your computer and then upload it to the tab. 🙌
@zimnelredoran9985 1 year ago
@LevendeStreg Thanks for the fast response : )
@sirusazadi2349 1 year ago
Great video! Are these apps installable on a Mac?
@LevendeStreg 1 year ago
Thank you kindly, @Sirus. They're not really apps, but code bits. And yes, you can install Stable Diffusion locally on a Mac. But some of the code - like the Dreambooth extension - doesn't work on a Mac. That's why I use RunDiffusion for running the code on a cloud GPU. Remember to use the promo code (levendestreg15) for RunDiffusion if you sign up for Creator's Club. That will get you a 15% discount.
@sirusazadi2349 1 year ago
@LevendeStreg thank you, really learning a lot from you
@danwood4171 1 year ago
I came to find out how to use PoseX and saw no tutorial on it. I can create the OpenPose thing, tell it to send it to ControlNet and enable ControlNet, but I'm not getting my person to match the pose. I've used OpenPose directly, where the pose is created from an input image, and that works, but I can't get a PoseX-created "skeleton" to work. But I saw a lot of hints that great power is available with "other" things, which is nice.
@LevendeStreg 1 year ago
Thank you for your comment, Dan. You have to install the ControlNet extension, the Depth Library and maybe PoseX. Then download the ControlNet models and install them. And then in your Automatic1111 you go to Settings, and under ControlNet you switch on the multi-ControlNet option. This way you can add more ControlNet tabs to your images. It's a complex thing to get working. I'll see if I can do a new video on it.
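For anyone who prefers to script this instead of clicking through the UI, here is a minimal, hedged sketch of flipping that setting through Automatic1111's web API. It assumes the webui was launched with --api, and the setting key "control_net_max_models_num" is an assumption (the name has changed between ControlNet extension versions), so verify it against your own Settings page.

# Minimal sketch: raise the number of ControlNet units via the A1111 API.
# Assumptions: webui running locally with --api, ControlNet extension installed,
# and "control_net_max_models_num" being the right setting key for your version.
import requests

A1111_URL = "http://127.0.0.1:7860"  # hypothetical local address

resp = requests.post(
    f"{A1111_URL}/sdapi/v1/options",
    json={"control_net_max_models_num": 3},  # three ControlNet tabs, as in the video
)
resp.raise_for_status()
print("Setting saved - reload the UI to see the extra ControlNet tabs.")

After a UI reload, the extra units show up as the "Control Model - 0/1/2" tabs mentioned further down this thread.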
@danwood4171 1 year ago
@LevendeStreg Thanks for the reply. I just needed to realize the preprocessor needs to be set to None. The way I now explain it to others is:
OLD WAY (two steps): 1. ControlNet image preprocessing -> depth map, stick figure, etc. 2. Preprocessed result -> image.
POSEX WAY: 1. You pose the stick figure and then generate the image without preprocessing.
Of course, a demo is better than words. FYI, the overnight set of A1111 changes has broken PoseX and perhaps other things. The bugs have already been reported on GitHub.
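To make the "POSEX WAY" concrete, here is a hedged sketch of the same idea through the A1111 API: the PoseX render is already an OpenPose skeleton, so the preprocessor ("module") is set to "none" and only the openpose ControlNet model is applied. The field names ("input_image", "module", "model") and the model name are assumptions that vary between ControlNet extension versions, so check the /docs page of your own install.

# Hedged sketch of the "POSEX WAY": skip preprocessing, because the PoseX image
# is already the control map. File name and model name are placeholders.
import base64
import requests

A1111_URL = "http://127.0.0.1:7860"    # hypothetical local address
SKELETON_PNG = "posex_skeleton.png"    # hypothetical export from PoseX

with open(SKELETON_PNG, "rb") as f:
    skeleton_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "comic book character, dynamic pose, clean line art",
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": skeleton_b64,
                "module": "none",                  # the skeleton is used as-is
                "model": "control_sd15_openpose",  # placeholder model name
                "weight": 1.0,
            }]
        }
    },
}
requests.post(f"{A1111_URL}/sdapi/v1/txt2img", json=payload).raise_for_status()

The "OLD WAY" would be the same call with a normal photo as the input image and "module" set to an actual preprocessor such as "openpose".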
@LevendeStreg 1 year ago
@danwood4171 Thank you for that, Dan. Really appreciate you breaking it down for others to follow - I probably didn't make that clear. And yes, A1111 often breaks😬 - I'm using RunDiffusion though, and they have it fixed.
@geoatherton5214 1 year ago
Love your videos Maria, thank you for all the hard work you put into making them. I agree with all your conclusions, like 'I only really want to use AI tools if they are faster than just drawing it.' I have a graphic novel project as a consistent hobby, so I appreciate hearing your perspective as you look at AI art tools through the lens of a comic book artist.
If you've clocked the hours learning to draw figurative anatomy, that is faster and generally more fun/loose/enjoyable than painstakingly moving joints around in Blender or OpenPose, which is way more trial and error and technical troubleshooting than the flow state of just drawing. What I love about AI art right now is drawing a loose sketch with a basic color palette and maybe 2-tone shading, then feeding that to SD or Midjourney and just having it layer on painterly detail in the blink of an eye (I've got some demos of this on my own channel). Huge time saver.
This latest raft of ControlNet AI tools is amazing in that it empowers people who haven't had all those drawing practice hours to express themselves and tell their own stories, but I'm just as excited for all the possibilities it unlocks for practiced illustrators to supercharge their existing powers and output volume while staying 'in the flow.' Keep 'em coming, salutations from Seattle and looking forward to your next one. :)
@LevendeStreg 1 year ago
Oh wow. Thank you for watching, commenting and sharing your knowledge. I love the workflow you show in your videos. That is precisely what I'm talking about. Great use of AI. Can I connect with you on Discord? Would love to share thoughts on some of the processes…
@geoatherton5214 1 year ago
@LevendeStreg Definitely! Just sent a DM to your Instagram. Thanks again 🤘🖖🎨
@LevendeStreg 1 year ago
@geoatherton5214 Thank you. Hooked up with you on Discord🙏
@pladselsker8340 1 year ago
I love the energy you give. Great video!
@LevendeStreg 1 year ago
Thank you so much. I really appreciate it 🙌
@pritamsarkar2385 10 months ago
How can I get consistent results? I.e. the same character wearing the same outfit, but in different poses of my choice, without any crazy/ugly deformation?... Getting different results every time is frustrating me
@LevendeStreg 10 months ago
Yeah, that's the tough part. The technology is not quite there yet. It's difficult to get consistent outfits - without the deformation. Try doing long negative prompts. That sometimes works. Good luck.🙌
@chariots8x230 1 year ago
It's really impressive how things are progressing with AI art technology. I like that you were able to pose your own custom character. Although the results are pretty decent, the problem with 3D a lot of the time is that it produces poses that look stiff. If the AI copies these poses exactly as is, then it will also produce stiff-looking characters. It would be cool if there was a setting to maintain the overall pose, but modify it very slightly to get rid of the stiffness to create a more natural-looking pose. I'm excited to see how this technology will evolve in order to create whole scenes. For comics, I'm waiting for features like:
1 - Natural-looking poses (no more unintended stiffness)
2 - Posing multiple of my custom characters together in difficult poses (e.g., hugging each other, holding hands)
3 - Changing the facial expressions of each of my custom characters in the scene
4 - Using some nonhuman custom characters such as animals, creatures, and anthropomorphic characters
5 - Using custom backgrounds that can be shown from different angles, lighting conditions, time of day, weather conditions, etc.
6 - Combining my custom characters with a custom background in order to create a full scene
Thank you for sharing this video 😊
@LevendeStreg 1 year ago
Thank you so much for your comment @chariots8x - I really appreciate the time you took to put together your comment. And I totally agree with you. There is a lot of stiffness to the 3D poses. I found that Stable Diffusion's "translation" of the pose is not as stiff. At least not if you turn down the weight of the ControlNet pose. And as you say, the development of this is impressive. And in a couple of months it will be so much better. This is all so exciting to follow and learn. 🙌
@chariots8x230 1 year ago
@LevendeStreg Yes, it is exciting to see how this technology develops and the new things it can do. Maybe someday it will become suitable for complicated projects, like comics. ☺️ Hopefully, soon we'll be able to create full scenes with multiple consistent characters & backgrounds. The ability to copy a pose and apply it to a single character is a great development. Now if we can apply interactive poses to multiple characters in a scene, and also give each character a different facial expression, that'd be great.
@LevendeStreg 1 year ago
@chariots8x230 I think that is already doable with inpainting. But you're right. Right now it's still easier / faster to do the drawing instead. But in a couple of months... I think this will move fast.🙌
@bumstudios8817 1 year ago
I think they can already do that.. it's just working out the bugs for now. They are able to replicate an image but keep the same style.. I have seen a video where they just used an image sequence for the animation and the AI did the work for the style look.. then put it together afterwards in a video editor to make it a movie again..
@bumstudios8817 1 year ago
P.S. Which means you could use an actual video or movie of real people moving, eliminating the CGI animation stiffness
@obsolete.camper 1 year ago
Thanks a lot, I learn a lot from your videos. Your explanation is clear and easy to follow. Please keep it up; I would like to learn more about the techniques you use to make SD more effective to use. Regarding this tutorial, can you try to explain again STEP 5, where you mention the "different" tabs in the ControlNet section? Are you trying to say turn on the multiple ControlNet tabs feature?
@LevendeStreg 1 year ago
Thank you so much. And yes, I should have explained that more clearly. Yes, multi ControlNet switches on in the settings. But I actually meant that you also need to enable the different ControlNet tabs.
@emilianovb 1 year ago
Awesome video! The algorithm recommended this to me at the perfect time! I spent the last two days learning ControlNet and improving my txt-to-image prompts. Quick question: would this method work on multiple subjects? (Two or three 'people' in an image?) Thanks!
@LevendeStreg 1 year ago
Haven't tried that out yet. I think I would add them in one image though. So combine three images into one. But yeah, I guess you could do it the other way. In my experience, though, 3 ControlNet tabs is tops. More than that messes up the code - it becomes wonky and unstable - and the output becomes a bit weird sometimes.
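A hedged sketch of that "combine three images into one" idea: paste three separate pose renders side by side onto one canvas and feed the single result to one ControlNet tab. The file names are placeholders.

# Minimal sketch: stitch three pose renders into one control image (Pillow).
from PIL import Image

pose_files = ["pose_left.png", "pose_middle.png", "pose_right.png"]  # hypothetical files
poses = [Image.open(p).convert("RGB") for p in pose_files]

canvas_w = sum(p.width for p in poses)
canvas_h = max(p.height for p in poses)
canvas = Image.new("RGB", (canvas_w, canvas_h), "black")  # black background, like an OpenPose map

x = 0
for p in poses:
    canvas.paste(p, (x, 0))
    x += p.width

canvas.save("combined_pose.png")  # upload this to a single ControlNet tab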
@LevendeStreg 1 year ago
And I’m glad you like the video. Thank you🙌
@alejandromarello6907 1 year ago
I want to make a 1-minute trailer for my film script. What program or software do you recommend?!
@LevendeStreg 1 year ago
Well, that sounds like a great idea. But a trailer can be quite a lot of work. You need to create a lot of footage. And even though AI exists - there is no easy solution. I would probably look into Next-Gen2. But there's a long waiting list there. And I don't know which style you want to do. You can do a lot of cool stuff with Stable Diffusion - but you need to render it frame by frame - with a setup so you render many frames at a time.
@alejandromarello6907 1 year ago
@LevendeStreg Thanks. I'll keep looking into it then
@djanevski 1 year ago
This is awesome! Thank you. Great quality video as well! Keep it going! I will try it and let you know
@LevendeStreg 1 year ago
Thank you kindly, Danny. I really appreciate it 🙌
@alisoncristinazertuchelope7332 1 year ago
Hello, good video, but why don't I get the 3 ControlNet windows? The ones that are Control Model - 0, etc. I have ControlNet version e1885108 (Wed Apr 12 03:24:32 2023) and the PoseX one (Sat Apr 1 12:28:38 2023), which is the new one.
@LevendeStreg 1 year ago
Hi there and thank you kindly🙌 - to get multi ControlNet, go to Automatic1111 >> Settings and look for ControlNet. Then slide the slider up to 3 ControlNet tabs. Hope it works out for you.
@dongivafoc4493 1 year ago
Hi, very insightful video. I am struggling to get consistent results on the poses. I have trained a model on a real person (me) and I'm generating a somewhat similar appearance to the real person, but you can still see that it's 2 very different people. It's very difficult to get the face and the poses consistent. Either the face will be acceptable or the pose, but it's very difficult to get both of them. So I will be trying your solution, but I was wondering, since it's been 4 months now, if there were other tools I could use to get consistent results. I'm using Stable Diffusion and the installation of depth lib is making it crash. Is this something that doesn't work anymore or just an issue on my side?
@LevendeStreg 1 year ago
The Depth Library is difficult and crashes a lot. That's why I use RunDiffusion for it. You need multi ControlNet - and then the Roop extension. I did a video on that too. And maybe use the new upscalers. They're cool and you can sort of upscale like a puzzle. I'm gonna do a video on it soon. Hope it works out for you! 🙌
@Grapegum 1 year ago
Who messed up the timeline to the point Sarah Connor is an AI artist? Jokes aside, this is one of my favourite AI channels ever, keep up the good work!
@LevendeStreg 1 year ago
Hahaha. Thank you so much. I really appreciate it 🙏 thanks for watching!
@kinlih289 1 year ago
Helpful but chaotic
@LevendeStreg 1 year ago
Glad you liked it and thanks for watching. Sorry for it being a bit chaotic. My brain is a bit chaotic!
@kaiandree 1 year ago
Cool! Thanks! Being an illustrator, I wonder whether it wouldn't be easiest for me to just scribble the character.. can you please make a video about that way to produce photo-like images?
@LevendeStreg 1 year ago
Thank you kindly. I really appreciate it! And yes, I will try to make a video on that. Thank you for your request!
@mitamoto3399 1 year ago
Love this, thank you for so many amazing tips! However, I did find myself getting a little sidetracked at times as you jumped from one topic to another pretty quickly. It would be really helpful if you could provide a more step-by-step tutorial on how to insert images into posemy.art instead of just showing screenshots of how to do it.
@LevendeStreg 1 year ago
Thank you kindly for your comment and thank you for watching. I have an upcoming video on what you request. An in-depth video on posemy.art 🙌
@mauricioc.almeida2482 1 year ago
I would like to express my sincere gratitude for this amazing video tutorial. Your help was invaluable and allowed me to make significant progress in my hobby as AI creator design. I was very impressed with the clarity and quality of your tutorial, which made it easy for me to understand concepts that were previously confusing. Thank you again for sharing your knowledge and skill with us. Your generosity is greatly appreciated, and I look forward to continue learning from you in the future.
@LevendeStreg 1 year ago
Thank you kindly, Mauricio. I really appreciate it 🙌 thank you for watching and taking the time to comment.
@user-qr4jf4tv2x 1 year ago
I can't believe it's not Blender *slogan
@LevendeStreg 1 year ago
Hahaha.... Right! It's pretty cool!
@ellenripley4837 1 year ago
I have another inquiry. Does this method work with images you generate elsewhere, like Midjourney? I don't like the results Stable Diffusion gives me when it comes to style, and I have nailed a gouache cartoon style on Midjourney that looks like something I would do, but I was wondering how good this is with a more stylized illustration.
@LevendeStreg 1 year ago
Thank you for your question. To my knowledge you cannot achieve the same results with Midjourney, no. Only Stable Diffusion can be controlled like that.
@hogyodoshin 1 year ago
Hi, I am on Windows. I have installed PoseX on 2 different PCs, but the 3D pane and controls are not there: any clue what's happening?
@LevendeStreg 1 year ago
Don't know, sadly. I'm on a Mac - so I haven't had the best results with local installs. Have you checked the settings and updated?
@4538304544 1 year ago
Same problem here, any solutions?
@LevendeStreg 1 year ago
No, sorry to say so, @4538304544. It's probably that the code is broken yet again. Or that your graphics card isn't big enough. You need an Nvidia (a pretty big card) to get 3D to work. I'd recommend checking for updates. Delete the old install and do another install. Sometimes the code is updated, but that ends up causing bugs in many of the extensions. It could also be an A1111 bug (that happens very frequently). Maybe try out RunDiffusion.com (they have all the newest installs on that - even ComfyUI). Remember to use the promo code levendestreg15 to get 15% off on Creator's Club (you need that to run SD with multi ControlNet). Hope it works out for you!🙌
@darth_sidious_sheev_palpatine 1 year ago
Can this method work with two actors? If I wanted to always have two people fist bumping on a single prompt, or have two actors crossing lightsabers in a fight image?
@LevendeStreg 1 year ago
Yes, but you have to create an input photo for ControlNet using an image with two people.
@Alemmo-x5l 7 months ago
Hi, I installed everything but the "depth library" tab doesn't appear. How should I proceed?
@djfremen 1 year ago
Looks like you're running on OS X. So if it's Apple silicon (M1, M2), then you are forced to use RunDiffusion unless you have the earlier Intel version / eGPU, correct? Somehow I think these heavy extensions will not work leveraging CPU and RAM only...
@LevendeStreg 1 year ago
Correct. I'm running on a Mac M1. And you're right - on RunDiffusion I'm using the Large server. The Medium cannot run them - or at least they're sometimes buggy. I am looking into getting a PC with an Nvidia card (maybe an RTX 4070 Ti, but not sure yet). This is all moving so fast. Who knows what the requirements will be in a couple of months… so right now RunDiffusion works well for that.
@djfremen 1 year ago
@LevendeStreg good idea. I've seen people repurpose old crypto mining rigs from an Alienware Aurora R12 with the RTX 3090. Seems like a cheaper way to go than building a whole new PC.
@krz9000 1 year ago
Get a 4090... if you use this to make money there is really no reason not to want the best available. Especially with something that is at the core of your process.
@djfremen 1 year ago
@krz9000 I still think it's worth putting everything in a spreadsheet and calculating the cost vs buying hours on RunDiffusion. R12 with a 3090 plus CPU/RAM: 1k; a new 4090: 1.8k for the card alone!
@LevendeStreg 1 year ago
@krz9000 hahahaha… yeah, you're right. But they're so expensive 🤯
@CadyCadwell 1 year ago
The max res I got while using ControlNet on 6 GB VRAM is just 256; I can't even do 512. Any workaround?
@LevendeStreg 1 year ago
Nope, sorry, it's crazy heavy on your computer and GPU to do ControlNet.
@neilrhodes3879 1 year ago
The screen jiggling up and down is a nightmare to follow, and I'm not sure if you are trying to draw attention to something or if you are just wiggling your mouse. I'm a noob with this, so I could not understand what I was looking at, as your SD screen is moving up and down all the time
@LevendeStreg 1 year ago
Sorry for the inconvenience for you. I'm learning all the time too, both with AI and with making learning videos. Thanks for watching.
@MrFedemoral 1 year ago
Hi Levende! I have to ask again: I can't get preview frames and the progress bar working in the GUI. I tried everything reasonable. I checked all possible settings, lmao. Obviously applying and reloading the GUI sooo many times. The image finally appears when the render is complete, but I can't see anything during the process. Also, I'M DESPERATE, as I tried everything. Any idea why it doesn't work?
@LevendeStreg 1 year ago
Hi Morality. I'm not sure I can help. Maybe it's a settings thing - or maybe your install is broken. Working with ControlNet takes up a lot of GPU - and if your graphics card is not big enough it will produce black or weird frames.
@hagardeviking9314 1 year ago
Can I export to SL?
@LevendeStreg 1 year ago
What do you mean with SL?
@hogyodoshin 1 year ago
Does not work anymore because of the last Stable Diffusion update..
@LevendeStreg 1 year ago
Well, as far as I know it works on RunDiffusion. There's always a bit of trouble with the updates😅
@SPYBGToolkit 1 year ago
I'll make sure he sees the video ;)
@LevendeStreg 1 year ago
Hehehe - thanks. I'd be honored! I'm a hard-core fan 🙏
@SPYBGToolkit 1 year ago
@LevendeStreg by the way, here is something that you may be interested in, since you do art as well :) Give it a try and let me know if it works for you kzbin.info/www/bejne/fmWUY41sg9tsrKM
@LevendeStreg 1 year ago
Whoops - answered you with my personal YT. Sorry about that. The video you shared is great - and a great idea too. Thank you so much for sharing this with me!🙌 By the way - I love this video you did: kzbin.info/www/bejne/n6TQiKGfnrlrm9k - I'm gonna try to show some of the same. But done in a different manner. Love your videos! And now subscribed to you!
@nvadito7579 1 year ago
When I open and close the PoseX window (a few times), the interface freezes and no button responds to clicks. I have to restart the web UI to unfreeze it. Any solution?
@LevendeStreg 1 year ago
Sorry to hear it. Sometimes the multi ControlNet just freezes up for me too. Normally that can happen when I don't have enough GPU.
@roberthintz4017 1 year ago
This should be extended to animals.
@LevendeStreg 1 year ago
You’re right. That might work too.
@crashdummyglory 8 months ago
The video should start from the 9:18 mark, tbh.
@沙漏下的足跡 1 year ago
I'll find time to learn this feature.
@LevendeStreg 1 year ago
I'm glad you learned from this. Thank you for watching 🙌
@josevieira3367 1 year ago
:)
@LevendeStreg 1 year ago
Thanks José! And thanks for watching!
@ellenripley4837 1 year ago
Isn't it better to get one of those really good dummy dolls with ball-jointed articulations to create poses, then take a picture, then edit it in Photoshop, and voilà! I find the rigged 3D doll so annoying to use.
@LevendeStreg 1 year ago
Yeah, I agree. I actually have some small dolls too… that you can use. And I've wanted to do an episode on that for some time now. So thanks for this question! 🙌
@johnsmith4402 1 year ago
If it's as complicated as this, what is the point of using AI?
@LevendeStreg 1 year ago
That is a really great question and good point. I think AI has been harnessed by huge companies like Disney (and thereby Pixar too) already. And with great skill they can produce animated videos with it already. But for small users like me and you I’m afraid the gain of using AI for image creation is not that big. The computers and graphic cards need to be huge. And we cannot - as small users - save time using it. It’s complicated to use, the code breaks all the time and it’s still way easier to do by hand most of the time…
@genin69 1 year ago
At around 8:07 - is it possible to have a small floating head instead of that massive 16:9 video overlay on the actual workspace? I feel it covers a lot of what needs to be seen and is also distracting..
@LevendeStreg 1 year ago
Thank you for your comment. I’ll see what I can do in future videos! 🙌
@eod9910 1 year ago
Do not do this on your computer locally, because you will fuck up your computer and have to reinstall Stable Diffusion all over again; you will lose all of the extensions that you installed
@LevendeStreg 1 year ago
Yup, that's why I use RunDiffusion for it. This way all extensions are valid and stable! 🙌
@TeleMA50c 1 year ago
This video contains a lot of good information but I found it hard to follow. You repeat yourself a lot and don't walk through the process in a linear way but rather a meandering, out of order sort of way. Not to sound ungrateful but if you were able to edit the video to be much clearer and more linear it would improve it tremendously.
@LevendeStreg 1 year ago
Hi there - first of all thanks for watching and commenting. I really appreciate it 🙌 secondly, you are quite right. I’m a creative spirit and my brain jumps around a lot and it’s a complex subject about ControlNet. I’m trying all the time to improve, so I am already trying to follow a more linear approach in my videos. 👍
@St.MichaelsXMother 1 year ago
Are there any, like, FREE legit line-art converters for images on Google you can link me to?
@LevendeStreg 1 year ago
What a great question. The only one I was able to find with a quick search was this one: tech-lagoon.com/imagechef/en/image-to-edge.html - but I'll look into it and get it into one of my next videos. Cheers 🙌
@LevendeStreg 1 year ago
But Adobe Illustrator is really the best for that. I use that feature all the time!
@thevoid6756 1 year ago
I am surprised an illustrator like yourself has to use clunky tools like OpenPose or learn basic 3D tools to get AI to pose a character correctly. I would imagine it shouldn't be too difficult to basically let artists freehand a sketch with img2img and add text to refine the prompt.
@LevendeStreg 1 year ago
That's a great comment. And you're right. But - I'm testing out what gives the best results. And I have to admit the 3D poses actually give me better results right now. I still haven't gotten my process down to what I want it to be. In the beginning I thought that I would get the best results with what you describe: img2img and ControlNet. But to my great surprise that wasn't the case. I need to have at least the colors in place too on my sketch for that to work. And also I find that I get the best outlines on my comic book images if I use 3D poses instead of sketches right now. But again - I need to play around some more with ControlNet, because I'm actually really keen on using the Scribble model. Still, I'm getting better results with Canny and a 3D model image right now.
@OnionCurry 1 year ago
Filtering stolen art by similar poses! So cool!
@LevendeStreg 1 year ago
I'm not sure I understand what you mean. How is it filtering stolen art? I have a great conversation and collab with posemy.art (if that's what you're alluding to). Otherwise I can't see how it's stolen. I'm an artist myself, I do most of my work by hand, but I also use AI where I can. I'm trying to tweak my work process and to help other artists do the same.
@kurwa_bmw 1 year ago
11 minutes out of 12 are just about promos and the rest is full crap
@LevendeStreg 1 year ago
Thank you for leaving a comment and thanks for watching.
@speeddemonau6802 1 year ago
I couldn't watch to the end as I found it tedious, especially the excessive banter. I also found it lazy that instead of filming and editing the complete process, you tacked something on at the end. Given you announced the 'additional content at the end' at all stages of the video, it seemed to be a design choice to force watching to the end of the video. Provide content worth watching and I will watch every second...
@LevendeStreg 1 year ago
Thank you for your feedback. It takes forever for me to learn all the new stuff AND do these videos - even though it doesn't seem so for you. My brain jumps around a lot - and I cannot remember a script. So that's why the excessive talk is there. And I cannot cut it out - because part of the background video (with Auto1111 UI) will be missing. A video like this one I probably put at least 15 hours into with research and filming and editing. It's very hard to do as I run a small visual agency full time. And this is just something I do for interest and for teaching others - I don't really make money on the videos as such (not even to cover costs). But thank you for watching.
@kiv1198 1 year ago
AI 'Creators, Designers, Engineers': what a bunch of entitled words just to avoid saying "AI User". Just accept what you are and stop using real-effort titles.
@LevendeStreg 1 year ago
Well, thank you for your comment. Being an AI user is only a very small part of what I do. I'm an animator and illustrator - and I run a small visual agency. So that's actually what I spend most of my time on - not AI. It's a very small part of my workflow.
@ChrisVirgilio 10 months ago
Over 3 and a half minutes in before you finally introduce yourself and tell us why we should even bother listening to you? Gosh.
@Ia960-s4l 1 year ago
You're no artist; you are just crawling others' art and printing it randomly. OK, theft.
@LevendeStreg 1 year ago
Thank you for your comment. And nope, I'm NOT - that's not how AI works. I use AI to speed up my creative process at times. And at times it's easier and quicker to just draw everything myself. You would know that if you'd spent the time learning about AI diffusion models and if you'd seen more of my videos. I started out as an illustrator and animator. I still do lots of live graphic recordings and company visualizations, hand-drawn comics and animation. And I continue to learn about new and evolving stuff that comes to the creative scene and helps visual artists. Hope that you have a great Sunday 🙌