Corridor Crew Workflow For Consistent Stable Diffusion Videos

62,872 views

enigmatic_e

1 day ago

Comments: 123
@dreamzdziner8484 · 1 year ago
I am happy to see someone who is still exploring all the possibilities for getting the perfect consistent animation. Thank you for explaining everything clearly. Hopefully we will soon have an extension for getting consistent animations in SD.
@FortniteJama · 11 months ago
Love the way you always cover the multiple variables and the frustration or patience they require to master. Almost requesting that video, just a vid of total frustration, cause I know it happens.
@yonderboygames · 1 year ago
Thanks for creating this breakdown. I've been wanting to do a deep dive into a way to stylize some of my 3D animations using stable diffusion and didn't know where to even start. I'm subbing!
@elimaravich722 · 1 year ago
A great tip when using LoRA for the captioning sequence: add "Dudley" to the "Prefix to add to BLIP caption" field and it will apply it to every text file, so you don't have to go in and add it to all of them.
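For anyone handling the captions outside the kohya GUI, the same prefix can be batch-added with a short script. This is only a minimal sketch, assuming the BLIP captions are plain .txt files sitting next to the training images; the folder name and the "Dudley" trigger word are placeholders taken from the comment above, not from the video.

```python
# Minimal sketch (not from the video): prepend a trigger word to every BLIP
# caption .txt file, mirroring the "Prefix to add to BLIP caption" option.
# The folder name and trigger word below are placeholder assumptions.
from pathlib import Path

caption_dir = Path("training_images")  # assumed folder holding the .txt captions
trigger = "Dudley"

for txt_file in caption_dir.glob("*.txt"):
    caption = txt_file.read_text(encoding="utf-8").strip()
    if not caption.startswith(trigger):
        # Write the trigger word in front of the existing caption.
        txt_file.write_text(f"{trigger}, {caption}", encoding="utf-8")
```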
@THAELITEVR · 1 year ago
love your analytical approach to getting shit done, great content
@clenzen9930 · 1 year ago
Great video, no complaints. For some people, extra steps might help: use SD to make the 3D renders look less like 3D renders. Different outfits would increase LoRA flexibility IF that was important. So many variables. Again, thanks for sharing all of this.
@lucianamueck · 1 year ago
I love, love, love your channel. Congratulations on your work!
@nibblesd.biscuits4270 · 1 year ago
Great tip on the fusion render speeding up the overall render. I’m new to resolve and it really did make a world of difference.
@IntiArtDesigns · 1 year ago
This is such a wicked tutorial! Thanks bro!
@LoneRanger.801 · 1 year ago
Excellent. Thanks for all the description and details.
@enigmatic_e · 1 year ago
No problem!
@JuliousNiloy · 1 year ago
Man! This one was packed with information
@Daxviews · 1 year ago
Hey, just wanted to say thank you for this helpful guide! I followed it step by step (I had some problems with ControlNet, it only gave me one tab instead of multiple tabs) and so far my outcome looks pretty decent! Even without the deflicker effect in DaVinci Resolve Studio!
@enigmatic_e · 1 year ago
Hey! To get multiple tabs, go to Settings, then ControlNet, and there should be an option to add multiple ControlNets. Change the amount, then Apply and restart your UI. You should have more after that.
@Daxviews · 1 year ago
@enigmatic_e Wow, thanks a lot! Just found it and changed it. I hope you will continue making such great videos :D
@COAgallery · 1 year ago
Bro. You rock. What a great video. Thank you for taking your time to create this, your work is clean. Subscribed!
@UndoubtablySo · 1 year ago
great guide, the possibilities are really exciting
@iamYork_ · 1 year ago
I'm working on my top AI channels video to pass on to my subscribers, as I have retired from the field for now... and you have definitely made the list... Just skimmed this video but you definitely go over all sides of it... from Blender to Mixamo to I don't even know some of the sites you're using... you are going deep on it... Great job my friend... Keep up the good work... I definitely recommend your channel to anyone who wants to get into generative AI work...
@enigmatic_e · 1 year ago
Thank you good sir. 🙏
@judgeworks3687 · 1 year ago
I’m one of common sense subscribers (&enigmatic). Have learned sooooo much from both of you, thank you.
@iamYork_ · 1 year ago
@judgeworks3687 Thank you... I still have a lot of knowledge to pass on but am currently just held back by too many professional time constraints to upload weekly, especially on the tutorial side of it all, as a typical tutorial for me can take between 20 and 40 hours to create... Enigmatic has the crown right now in my opinion... For both beginners and more experienced users that dabble in other software... He blends them all together... Great person and talented as well... When anyone asks me about other channels to check out when it comes to generative AI for creative purposes... Enigmatic is the first channel I always recommend...
@Brespree23 · 1 year ago
For the level of quality that Corridor Crew had in their edit, is it necessary to make the model the same way, or can I get the same quality following your workflow? Because I'm hoping to get very unique models for each person I put into it.
@Corruptinator · 11 months ago
I think what could work is that you could draw pupil-less/iris-less eyes, as in all "white" eyes so then in post-edit you can animate the pupil/iris in the eyesocket for more consistency.
@firasfadhl · 1 year ago
The Flicker Free After Effects plugin is giving me good results. I use the Slow Motion 2 preset and activate the motion compensation. I don't like the idea of installing a whole piece of software just for a deflickering effect. If anyone does a comparison between the two, to see if there is a big difference, then I might consider it if it's really better 🤣.
@enigmatic_e · 1 year ago
I’ll try it and see!
@FunwithBlender · 1 year ago
Great vid, you added a lot extra beyond the Corridor video, well done.
@drinkinslim · 1 year ago
I'm amazed at the number of people, such as enigmatic_e, saying "anyways" instead of anyway. I don't know if I'll ever get used to it. (Random comment of the day.)
@enigmatic_e · 1 year ago
lol, habit I guess. I've never been a good speaker or writer. My years spent going back and forth between Mexico and the US probably didn't help.
@cafefresh123 · 1 year ago
I love your videos! And thanks for the work in helping us understand how to easily create with hella tools :) Cheers from San Francisco!
@enigmatic_e · 1 year ago
No problem! Bay Area, nice! I’m from San Jose but now living in Germany. Cheers
@jasoncow2307 · 1 year ago
The 3D-tracked background is awesome, I wish to see a lesson on it.
@yobkulcha · 1 year ago
DaVinci Resolve's Magic Mask feature allows you to easily separate objects from their backgrounds.
@Finofinissimo · 1 year ago
Amazing flow, man. Pretty neat tricks.
@klaustrussel · 1 year ago
Absolutely great video!! I was thinking about using ebsynth but this method seems really fun!! Cheers
@831digital · 1 year ago
+1 for the 3d tracking tutorial
@plamen2110 · 1 year ago
Omg bro! It’s you! 🤯🤩
@enigmatic_e · 1 year ago
Yeah bro! 😂
@adrienberthalon6013 · 1 year ago
Awesome workflow, thank you so much for making this kind of video! I'm having some trouble with reverse stabilization. Everything is working just fine until I press "Unstretch" on CC PowerPin. Then my footage (the face of my character) loses its scale and becomes too small; it also "cuts" its own frame while moving (looks like a precomposed object that is cropped because it goes out of frame)... Any ideas of what might be going wrong here? Thanks a lot! 🙏
@NarimanGafurov · 1 year ago
Thank you bro!
@legacylee · 1 year ago
Runway ML has a handy dandy AI background removal, I personally haven't tried it yet but having used roto brush 2, I think AI just made roto-ing not a pain in the 4$$ lol
@annashpitz8415 · 1 year ago
Thanks man! You're the best! I really need this for a school project I have coming up, and you've been a lifesaver! Did you ever end up making that video about 3D tracking? I need to add my 3D-designed objects to the video. Could you point me to some info on that please? Or even better, a link to your video if you made it... keeping my fingers crossed! Thank you!
@syno3608 · 1 year ago
Thank you Thank you Thank you Thank you Thank you Thank you Thank you
@bigvince4672 · 8 months ago
Using Mocha Pro is a cool idea.
@trappist95 · 1 year ago
About the LoRA training: it doesn't matter if the artwork of the character you're trying to train on is in different styles or not, it will all translate over.
@enigmatic_e · 1 year ago
Thanks! Good to know.
@Warzak77 · 1 year ago
I had the same bug with DaVinci Studio: it was either 5 seconds or 2 hours of rendering, and I was going mad. Thank you for the tips.
@enigmatic_e · 1 year ago
No problem, it was driving me crazy too!
@danaetcg · 1 year ago
thank you!
@judgeworks3687 · 1 year ago
This is so helpful, thanks. Two questions. One: could you pull stills from the cosplay video, alter the stills, and then use those for training? Two: is the training only for humans, or could I take charcoal drawings I've made and train the LoRA on the drawing style? No figures, just technique and 'look'.
@funnyknowledge7251 · 1 year ago
Great video, super helpful. I have a question: whenever I batch using ControlNet, it only produces one frame from the directory I set, despite having 200 images. Any thoughts on how to fix this?
@yassiraykhlf5981 · 1 year ago
very useful thanks
@moon47usaco · 1 year ago
Yes please. Vids on 3D tracking. Thanks man. +]
@andreyzmey · 1 year ago
Amazing! Is there any chance you can do a video about the same flow in DaVinci Resolve instead of After Effects?
@SageGoatKing · 1 year ago
3d Tracking video would be cool!
@SevenDirty · 1 year ago
When I go to the GitHub page I can't seem to find the commands you copied to use in PowerShell when installing kohya (time 16:15). Has it changed or am I confused?
@Ghost-wn9cf · 1 year ago
Wouldn't running optical flow tracking on the original video, then applying that as a transform backwards and forwards with blending on the generated video smooth things out? I have no idea if something like that was attempted, or how to actually implement it, but I have a feeling it would be nice :D
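Nothing like this appears in the video, but a rough sketch of the idea with OpenCV could look like the following: estimate optical flow between consecutive original frames, warp the previous stylized frame onto the current one, and blend the two to damp flicker. The frame paths, frame count, and 50/50 blend weight are assumptions for illustration, not part of the workflow shown.

```python
# Rough sketch of the commenter's idea (not from the video): use optical flow
# from the ORIGINAL footage to warp the previous stylized frame forward, then
# blend it with the current stylized frame to damp frame-to-frame flicker.
import os
import cv2
import numpy as np

def warp_with_flow(prev_frame, flow):
    """Pull pixels from prev_frame at the positions the flow points to."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_frame, map_x, map_y, cv2.INTER_LINEAR)

num_frames = 200  # placeholder
orig = [cv2.imread(f"orig/{i:04d}.png") for i in range(num_frames)]      # source frames
styl = [cv2.imread(f"stylized/{i:04d}.png") for i in range(num_frames)]  # SD outputs

os.makedirs("smoothed", exist_ok=True)
out = [styl[0]]
cv2.imwrite("smoothed/0000.png", out[0])
for i in range(1, num_frames):
    g_prev = cv2.cvtColor(orig[i - 1], cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(orig[i], cv2.COLOR_BGR2GRAY)
    # Flow mapping each pixel of the current original frame to the previous one.
    flow = cv2.calcOpticalFlowFarneback(g_cur, g_prev, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    warped_prev = warp_with_flow(out[-1], flow)
    blended = cv2.addWeighted(styl[i], 0.5, warped_prev, 0.5, 0)
    out.append(blended)
    cv2.imwrite(f"smoothed/{i:04d}.png", blended)
```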
@themightyflog · 1 year ago
Yes, please talk about 3D tracking.
@FleischYT · 1 year ago
What would you change in your config if using an RTX 4090 (24GB VRAM) & 16-core CPU?
@enigmatic_e · 1 year ago
Not sure what would change. Maybe you could make the resolution bigger in Stable Diffusion.
@federicogonzalezgalicia3041 · 1 year ago
Hi, I get this every time I try to generate an image from text: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. Any solutions? (I have an iMac with 64GB RAM btw.) Thank you very much.
@VouskProd · 9 months ago
Great video man! 👏 Full of super useful info. 👍 Excellent tip about stabilizing the face. 👌 Thanks a million. 🙏🙏🙏 Still, I'm facing a process issue. How do you get the images out of img2img or Deforum with an easily "keyable" background? Even when I put a sequence with a flat green background as input, Stable Diffusion (img2img or Deforum) draws elements on the background and applies a dull color, so I can't key it afterwards in AE. I tried many prompts but with no luck. At 1:44, we can see that your output image has a plain green background; what smart sorcery did you use to achieve this?
@enigmatic_e · 9 months ago
This video is quite outdated unfortunately. A lot of the techniques are not necessary anymore with AnimateDiff. I have two videos about it on my channel.
@VouskProd · 9 months ago
Thanks, I will check that. Comfy seems great but it's a whole new world to install and learn (looks too time-consuming for me right now during my current project 😅).
@enigmatic_e · 9 months ago
@VouskProd Yeah, totally get that. I wouldn't switch if you're in the middle of something.
@VouskProd · 9 months ago
@enigmatic_e Yup, but still, the more I look at ComfyUI, the more it draws me in, even in the middle of a project. 😅 Anyway, you've already saved my life with the face zoom trick, which worked perfectly in my case 🙏 And for my not-so-green background on the SD output, well, Roto Brush 2 is my friend 🙂
@EditArtDesign · 1 year ago
Where should these LoRA settings be located, and how do you use them? It's not very clear. Thank you in advance!
@Disorbs · 1 year ago
When I added the green screen video and did the tracking in AE, and then exported it as JPEG, mine still shows the green screen. How did you remove that so it comes out with a black background instead?
@enigmatic_e · 1 year ago
You would have to use a color key to remove the green.
@RaziqBrown · 1 year ago
please make the AE+Blender 3d tracking video
@enigmatic_e · 1 year ago
Working on it! 😉
@bot2.078 · 1 year ago
I have the model "sd15_hed.pth", but for the preprocessor, when using it I don't see the "hed.yaml". Any suggestions, anyone?
@Jagaan7972 · 1 year ago
SD makes my background different, and because of this I can't remove the background in After Effects. What could it be?
@Statvar · 1 year ago
Is there a way to save your stable diffusion settings? Like the Noise multiplier for img2img? Also thanks for this in depth tutorial :D
@bigdaveproduction168 · 1 year ago
Okay, and just to know: it's not possible to use the original method from their original tutorial anymore?
@razvanvita6548 · 1 year ago
For a better result you should have used the main prompt to describe what you want from MHA.
@enigmatic_e · 1 year ago
Thank you for the advice. This video however is quite outdated now. There’s different methods that give way more consistent results now.
@joeighdotcom · 1 year ago
would love to see how you use blender :D
@soyguikai · 1 year ago
Remember to redirect newcomers to the introductory videos you already have, for example the most recent one on how to install SD.
@SoniCexe-xq1uy · 1 year ago
Could you tell me your PC specs?
@enigmatic_e · 1 year ago
I have a 3080 with 10GB.
@klimpaparazzi · 1 year ago
Nowhere in the description do you mention how to download the JSON files.
@enigmatic_e · 1 year ago
Under "LoraBasicSettings.json" there is a link to download it.
@pmlstk · 1 year ago
put a pastebin or something for the prompts man
@Dreamy_Downtempo · 1 year ago
I can't get LoRA to work; the installation guide on GitHub is completely different now.
@GoodguyGastly · 1 year ago
Same here.
@AlinkBee · 1 year ago
@GoodguyGastly x3
@RHYTE · 1 year ago
why don't you use deforum for this?
@enigmatic_e · 1 year ago
Does it give different results?
@RHYTE · 1 year ago
@enigmatic_e It should give more consistency because the last frame is fed in to generate the next. However, for me it doesn't seem to work as well with ControlNet at the moment.
@BKLYNXGAMING · 1 year ago
Can this work for Mac?
@tanyasubaBg · 1 year ago
Amazing stuff. Unfortunately, I don't have Nvidia. So I can't try anything that you share. Do you have some suggestions for people who use an AMD card? Thank you in advance.
@enigmatic_e · 1 year ago
Might have to go with Google Colab and use it through there. I want to get into that and try to make a video for people in your situation.
@tanyasubaBg · 1 year ago
@enigmatic_e Thanks, it would be great.
@judgeworks3687 · 1 year ago
This woman's tutorials are great too, and she covers using RunPod and how to run SD when you have an old computer (she doesn't run SD on her computer). I don't know if the LoRA and training works, but it seems like it would… kzbin.info/www/bejne/Y169YWatl6mjldU
@BrunodeSouzaLino · 1 year ago
SD should work with AMD cards with ROCm support and PCIe 3 atomics. Don't expect much in the way of support, as most people think CUDA is the only framework that exists.
@musyc1009 · 1 year ago
How did you get multiple tabs for controlnet ??
@enigmatic_e · 1 year ago
Go to Settings, then ControlNet, and I think there's a setting to add more ControlNets.
@musyc1009 · 1 year ago
@enigmatic_e Got it! Thanks for the instructions, and keep up the good work with the vids, you helped A LOT.
@AI数字人 · 1 year ago
Is there any real-time software that can implement AI technology like this?
@enigmatic_e · 1 year ago
Not at the moment. Runway is getting close
@AI数字人 · 1 year ago
@enigmatic_e Thank you very much, I'm looking forward to a real-time tool. I think when it can be used live, it should be very interesting.
@vigamortezadventures7972 · 1 year ago
I subscribed to Corridor Crew and it didn't go into the depth of what is being said here... Not to discourage anyone, but you may not find the answers you seek in the subscription...
@enigmatic_e · 1 year ago
Do you mean you did the paid subscription with them?
@bigdaveproduction168 · 1 year ago
Yes, I know what you mean; with the evolution of Stable Diffusion, Corridor's tutorial seems obsolete now.
@BrunodeSouzaLino · 1 year ago
This whole workflow is beyond most budgets. I don't think most small studios or individuals have the know-how and funds to create their own AI algo specific to a curated dataset of expected results, then have enough computing power to train said dataset to satisfaction in a timely manner, record video with the correct settings and repeat a series of complicated conversion steps and cleaning on a frame by frame basis using several pieces of software until the whole process is done. It's important to note that the vast majority of artists are not technical people and know very little, if any programming, even if said programming is related to their craft. Couple that with the fact that SD is in constant development and has non-existent documentation and you have a workflow which would be slower than doing the whole thing yourself to the same level of quality (keeping in mind most of the cleaning you have to do in the outputs will be already integrated in the result by the animator).
@aminebelahbib · 1 year ago
It looks bad
@enigmatic_e · 1 year ago
You look amazing ❤️
@aminebelahbib · 1 year ago
@enigmatic_e I know that, but thanks ♥️
@Immortal_BP · 1 year ago
I can't help but feel bad for all the animators in Japan who make less than minimum wage. I think they will be replaced by AI in the next 10 years.
@Trivia2023 · 1 year ago
Good job
@zhexiang8952 · 1 year ago
so complex😅😅
@enigmatic_e · 1 year ago
Sorry about that 😅
@dinah6956 · 1 year ago
Is there a way you can create a realistic image from your own background and turn it into a 3D image? I'm new to all of this.
@enigmatic_e · 1 year ago
Mmm I don’t think that’s possible at the moment.
@msampson3d · 1 year ago
Always happy to find another person to subscribe to that is making high quality, easy to follow, technical videos on Stable Diffusion!
@NewMateo · 1 year ago
Can you do an updated video on Warpfusion? The new version is much better and way smoother!
@enigmatic_e · 1 year ago
I know, I was hired to help with it 😁
@NewMateo · 1 year ago
@enigmatic_e Ahh sorry! 😅 Well, you did an incredible job! That Warpfusion tech is crazy good!
@enigmatic_e · 1 year ago
@NewMateo 😂 All good. Will probably do an updated tut soon.