
Deforum + Controlnet IMG2IMG (TemporalNet)

  26,431 views

enigmatic_e

1 day ago

Comments: 133
@enigmatic_e • 7 months ago
NOTE: Make sure you're using an SD 1.5 model with this settings file, and turn off any unused ControlNets.
@ysy69 • 6 months ago
What happens when you use an SDXL model?
@enigmatic_e • 6 months ago
@ysy69 I think it's possible, you would just need XL ControlNets, and there aren't as many for XL.
@bonsai-effect • 1 year ago
Very easy-to-follow tutorial... so happy that, as usual, you don't jump all over the place like some other people. Always a pleasure to watch and learn from your tuts! (Mega thanks for the settings file too!!)
@enigmatic_e • 1 year ago
Glad I could help!
@HopsinThaGoat • 1 year ago
that Mario clip is amazing
@InTheCity3D • 8 months ago
It's rare for things to work out this quickly in this field. Hats off to you, your explanation was solid.
@kenrock2 • 11 months ago
Love you man... It took me a lot of attempts to troubleshoot errors where ControlNet wasn't working properly due to a conflicting extension. If you have trouble understanding what's going on in the terminal, it's best for beginners to do a clean install of A1111 with just the Deforum + ControlNet extensions. By the way, A1111 doesn't really work well on the old 1.4 version, which causes a lot of UI bugs; I switched to version 1.5.2 and it works better. I got amazing results following this tutorial... thanks a lot.
@carsoncarr-busyframes619 • 11 months ago
Yeah, I've been troubleshooting for a few hours; some conflict is causing Deforum not to load even though it's installed. Thanks, I'll try 1.5.2 (currently using 1.6).
@kenrock2 • 11 months ago
@carsoncarr-busyframes619 Also note that the recent 1.6 update doesn't work well with this tutorial; even with the latest Deforum update it somehow doesn't use ControlNet properly (clean install). So stick to version 1.5.2 - I've had no issues since downgrading.
@sergiogonzalez2611 • 4 months ago
Wonderful work man
@Ai_Vs_Original • 1 year ago
How do I fix this error: 'Video file C:\Automatic1111\stable-diffusion-webui has format 'c:\automatic1111\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']'. Before reporting, please check your schedules/init values. Full error message is in your terminal/CLI.
@TheRainbowPilot • 1 year ago
It was a bug in the latest build. It should be patched now; please update Deforum.
@eyevenear • 1 year ago
Instant like! I think the best solution for now is to separate the character from the background, so you can process foreground and background with more freedom and consistency, and only then put them back together in AE after a good deflickering pass.
@enigmatic_e • 1 year ago
True
@tamiltrivia • 1 year ago
How do you separate the character from the background?
@eyevenear • 1 year ago
@tamiltrivia Rotoscoping, or shooting the original video in a green-screen room, or any solution in between.
@xShxdowTV • 1 year ago
@tamiltrivia With a mask.
@TheKuzmann • 1 year ago
@eyevenear Or you can use one of the many background-removal extensions available for SD, like the Depthmap script, for example...
@bobwinberry • 8 months ago
Great video - thanks. FYI: my settings kept crashing and I tried a lot of different things to stop it, but it seems the only thing that worked was limiting the height/width settings to Horizontal: 1024 x 576 and Vertical: 576 x 1024. Thanks again for the great video and info.
@theunderdowners • 11 months ago
Doumo doumo, this is the most coherent/consistent run I've done. Thank you very much.
@GuyTheAnimated • 11 months ago
Thank you for this! Stable Diffusion, with all its possibilities and things yet to be discovered, really is a driving force for me :)
@Injaznito1 • 1 year ago
Thanks for the file and tutorial, E! I've been dragging my feet on using TemporalNet in my workflow. I'm gonna give this a try on my current project.
@enigmatic_e • 1 year ago
👍🏽
@blockchaindomain • 10 months ago
THANK YOU! THIS REALLY HELPED ME LEARN A LOT!!!!!
@dmitrym.6578 • 10 months ago
Thank you very much. Very informative video.
@LifeSwapped • 11 months ago
I love you!
@GoodArt • 11 months ago
you rule, thanks.
@SnapAir • 1 year ago
Thanks for the tutorial legend!
@enigmatic_e • 1 year ago
👍🏽 no problem
@judgeworks3687 • 1 year ago
Love your videos. Also, nice call-out to you from Corridor Crew in a recent video of theirs.
@enigmatic_e • 1 year ago
🙏🏽🙏🏽
@blender_wiki • 11 months ago
To achieve more consistent results with your videos, try using the MagicMask and Depth nodes in your DaVinci Resolve software, then change the background by blurring it or replacing it with a flat one. Avoid using MP4 files, as they can introduce temporal compression artifacts that lead to unwanted noise and loss of coherence. Instead, opt for image sequences or MP4 files with zero compression for better outcomes.
@keYserSOze2008 • 11 months ago
Real digital artists need to get on this, they absolutely destroy these pretenders... "Looks smooth to me" 🤣
@reallybigname • 1 year ago
Right on.
@aiximagination • 1 year ago
Awesome video!
@SatriaTheFlash • 1 year ago
This is what I was waiting for, because I've been struggling with AI animation - especially Warpfusion, since I can't buy Colab Pro.
@enigmatic_e • 1 year ago
This is exactly why I made this 👍🏽
@Herman_HMS • 1 year ago
Great tutorial, and thanks for the settings file!
@enigmatic_e • 1 year ago
👍🏽no problem
@ronnykhalil • 11 months ago
w0w!
@georgekolbaia2033 • 1 year ago
Hey! Thanks for yet another great tutorial! I was wondering, what are the advantages and disadvantages of Deforum + TemporalNet vs. Colab + Warpfusion? When would you use one over the other? Which one gives you better results? I get that Deforum is local and free as opposed to Colab + Warpfusion, but are there any other important differences that affect the quality of the output?
@enigmatic_e • 1 year ago
I would say Warp gives more temporal coherence and consistency, but Deforum is a great alternative if you can't afford Warp. I've seen some Deforum stuff that looks very close to Warp.
@aarvndh5419 • 1 year ago
Thanks so much for the video and the settings file
@enigmatic_e • 1 year ago
No problem 👍
@artyfly • 1 year ago
cool! thanks!
@bardaiart • 1 year ago
Thanks a lot! :)
@graphicsseion790 • 11 months ago
Hi, thanks for your videos. I have tried several times, with several videos in a row, to get this style of animation with video in Deforum + ControlNet. The problem is that even following all your instructions, the output frames are random and have nothing to do with the video init. The video path in the video init and in the ControlNets is correct, and I have played with the strength and CFG values, even with the comp alpha that I read about in a comment on another video. I would appreciate some light, thanks again.
@enigmatic_e • 11 months ago
I would suggest you join my Discord; there are people there who have solved many issues. It's also easier because you can share screenshots. The link to the Discord is in the description.
@MrKrealfedorenko • 10 months ago
I think I have the same problem. The paths for the video (with the dancer) are right, the settings are the same... but after generation the character is not moving... :-/
@marcobelletz4734 • 1 year ago
Really cool, like all of your content, but like many other people I get a weird error: load_img() got multiple values for argument 'shape'. Check your schedules/init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \. I changed the slashes as suggested but nothing changed. I checked whether the input frames were correctly generated, and yes, I have all the input frames in separate folders, one per enabled ControlNet module. Any ideas about how to fix this?
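On the backslash part of that message, a small sketch (the path is a made-up example) of normalizing a Windows path to forward slashes before pasting it into the video init and ControlNet fields:

    from pathlib import PureWindowsPath

    def to_forward_slashes(path_str: str) -> str:
        # C:\Automatic1111\videos\dance.mp4 -> C:/Automatic1111/videos/dance.mp4
        return PureWindowsPath(path_str).as_posix()

    print(to_forward_slashes(r"C:\Automatic1111\videos\dance.mp4"))

If the load_img() 'shape' error persists with clean forward-slash paths, updating the Deforum extension is worth trying as well, since other errors in this thread from around the same time turned out to be extension bugs.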
@NotThatOlivia • 1 year ago
nice!!!
@imtaha964 • 1 year ago
i love u bro😍😍😍
@imtaha964 • 1 year ago
You help so much, thank you.
@NguyenNhatHuyDGM • 11 months ago
I got this message after the first frame generated. Can someone help me fix this? Thanks. Error: 'OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op' '. Before reporting, please check your schedules/init values. Full error message is in your terminal/CLI.
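That cv::arithm_op "sizes do not match" error generally means OpenCV was asked to combine two images of different dimensions - in this workflow, typically the previously generated frame versus an incoming init/ControlNet frame whose resolution disagrees with it. A rough standalone illustration in Python (file names are placeholders, not Deforum internals):

    import cv2

    prev_frame = cv2.imread("frame_000.png")  # assumes both placeholder files exist
    curr_frame = cv2.imread("frame_001.png")

    # cv2.addWeighted raises the same -209 size error when the two images
    # differ in width/height, so resize one to match before blending.
    if prev_frame.shape[:2] != curr_frame.shape[:2]:
        curr_frame = cv2.resize(curr_frame, (prev_frame.shape[1], prev_frame.shape[0]))

    blended = cv2.addWeighted(prev_frame, 0.5, curr_frame, 0.5, 0)
    cv2.imwrite("blended.png", blended)

In practice that usually means checking that the width/height set in Deforum is consistent with the init video and every enabled ControlNet input.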
@MrPlasmo • 1 year ago
Everything was working fine until I got this: User friendly error message: Error: Video file C:\Users\k\stable-diffusion-webui has format 'c:\users\k\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']. Please, check your schedules/init values. Anyone know why? Deforum worked for two days prior... :(
@MrPlasmo • 1 year ago
Found the answer - it's a bug in the new version. For people who get the error with video ControlNet: to downgrade, go to the Deforum folder under extensions in Automatic1111 and run the command git checkout 0949bf428d5ef9ce554e9cdcf5fc4190e2c1ba12 - it will downgrade to the Aug 13 version. I guess once the bug is fixed you may need to reinstall Deforum or run git checkout master.
@Switch620 • 1 year ago
@MrPlasmo Thanks man!
@NoName-yd5cp • 1 year ago
Great and quick dive into Deforum. Ever tried to auto-mask people with the EbSynth extension for A1111 -> PNG extraction, and feed the mask sequence back into Deforum? My PC isn't beefy enough to try :/
@MajomHus • 11 months ago
You will have a lot fewer extra things appear if you stick close to the model's original resolutions, i.e. 512 or 768.
@fedoraq2d3dcreative61 • 9 months ago
Hi, thanks for the great training video. I have a question: where can I find the source of the video with the dancer? Thank you :)
@tomibeg • 1 year ago
Hey! Nice video, thanks. Btw, have you tested whether it's possible to run a similar process with TemporalNet v2 and an init image?
1 year ago
Hello, your video tutorial is very good. I almost got the same result, but in my case the first image is generated based on the first frame of my video, while the others no longer follow the video and start generating random images of Mario. I already checked all the settings and couldn't solve it. Any ideas? Thanks.
1 year ago
@fryvfx I will review the type of movement. Thank you very much!
@gonefull5036 • 11 months ago
Hi bro, I'm happy to watch your tutorial, it's very amazing. One question about Deforum's "init image": does it work with an image sequence?
@enigmatic_e • 11 months ago
Mmm, not sure, I've never done it that way, but I think it has to be a video file.
@ParvathyKapoor • 1 year ago
Any idea how to make a non-flickering video?
@xShxdowTV • 1 year ago
Tile + TemporalNet, then deflicker in DaVinci.
@anyosaurus8545 • 11 months ago
Hi, why isn't my resulting video the same as my video init? My result follows the prompt but doesn't consistently look like my video init :(
@aminshallwani9369 • 11 months ago
Thanks for sharing this video. I need to know: if we have our own prompt and generated an image from img2img, and then paste that prompt into the prompt area, how will that work? I did that and got the error TypeError: 'NoneType' object is not iterable *END OF TRACEBACK* User friendly error message: Error: 'NoneType' object is not iterable. Please, check your schedules/init values. I need assistance please, thanks.
@jamminmandmband • 11 months ago
In the past I have gotten this to work, but this time around I do not know what is happening. I have followed your instructions, but keep getting this error: User friendly error message: Error: images do not match. Please, check your schedules/init values. I have been using ChatGPT to work out what is going on, but nothing seems to resolve it. Any thoughts?
@dagovegas • 6 months ago
I have the same issue, did you manage to fix it?
@jamminmandmband • 6 months ago
@dagovegas I have not solved it yet. But honestly, I haven't messed with it much recently.
@dagovegas • 6 months ago
@jamminmandmband I figured out an alternative solution: use each frame of the video as input for img2img with ControlNet (pose, HED and soft edge).
@AIWarper • 1 year ago
Does this work with SDXL models and LoRAs? Or is TemporalNet still limited to 1.5? Great video by the way. I look forward to every notification I get when you post! I have a recommendation if you are accepting - do one of these without a humanoid. Everyone is using humans... but I'd love to see if you could apply this to, say, a rendered output of a creature from Blender or some non-humanoid kind of thing. I suspect it wouldn't be as consistent?
@enigmatic_e • 1 year ago
Great suggestion! I will definitely consider that! And when it comes to SDXL, there still aren't SDXL ControlNets integrated into Automatic1111 yet. Hopefully soon!!
@carsoncarr-busyframes619 • 11 months ago
Anyone else getting "Error: 'NoneType' object is not iterable. Please, check your schedules/init values."? I've been trying to get this to work for almost a week and narrowed it down to an issue with ControlNet. When I disable the ControlNets it works, but it's obviously not temporally consistent. I've tried it with Automatic1111 1.6 and 1.5.2... I've tried using enigmatic's settings file and also from scratch. ControlNet IS working with still images, so maybe something broke with the latest version of Deforum?
@ramemi1752 • 10 months ago
FIX: I need to have the strength schedule at at least 0:(0.5); anything below that and the results show no relation to the input video at all. Also, 'Video Input' has to be selected.
@enigmatic_e • 10 months ago
Video Input doesn't have to be selected. If it's not working, something in the settings isn't right.
@Panchocr888 • 1 year ago
Hey enigmatic_e, thanks, this video was very helpful. By any chance do you have a video where you explain some of the prompts you use? I don't quite get, for example, why some of the prompts have (:0.8) next to the words. Thanks in advance!
@enigmatic_e • 1 year ago
No I don't, but I should make one.
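In the meantime, a short note on that syntax: (something:0.8) is A1111's attention weighting. The number scales how strongly the words inside the parentheses influence the image - values below 1 de-emphasize them and values above 1 emphasize them. An illustrative prompt fragment, not one from the video:

    a man in a (red plumber outfit:1.2), (photorealistic:0.8)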
@siriotrading • 1 year ago
I followed all the steps, but I get this error after the first frame: Error: OpenCV(4.8.0) (-209: Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'. Check your schedules/init values please. Also make sure you don't have a backslash in any of your PATHs - use / instead of \. What could it be caused by? Has anyone had this problem?
@inpsydout • 1 year ago
I'm getting this same error...
@ValiCas • 11 months ago
Thanks for the tutorial! :) I am having an issue: I followed the steps, loaded the settings file and copied/pasted the path correctly everywhere, but the final result won't follow the video init and does a random animation based only on the prompts. What could it be?
@kenrock2 • 11 months ago
I also faced the same problem. There is an issue if you are using A1111 version 1.6: ControlNet doesn't really register properly in that version, so use version 1.5.2... Also check the terminal for any errors occurring in ControlNet; that is where you can start troubleshooting.
@Venkatesh_006 • 1 year ago
Sir, I am getting this error: ValueError: 1 is not in list. What should I do to solve this?
@YaBuoyCJ • 1 year ago
same
@FirdausHayate • 5 months ago
I got this error: 'OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op' '. Before reporting, please check your schedules/init values. Full error message is in your terminal/CLI. Can anyone help or solve it?
@yoktavyakhanna6967 • 1 year ago
Hey, it's running and generating pretty well, but for some reason it isn't actually following the video and is creating something of its own. Is there any way to control how similar or different the output is from the original video?
@bonsai-effect • 1 year ago
Try disabling the ControlNet with SoftEdge.
@enigmatic_e • 1 year ago
I would play with the Tile strength, CFG, or comp alpha schedule. Also make sure you're adding the video path to all of the ControlNets and the main init video.
@yoktavyakhanna6967 • 1 year ago
@enigmatic_e Thank you, it worked after raising the comp alpha. Love your tutorials and your work with Corridor Crew - please keep it up.
@user-xu8zy7ge1x • 1 year ago
Good video. I have a question: can you use a CLOTHES LoRA in the prompt? It would help with outfit consistency, and might give a better result if it's possible to use one!
@enigmatic_e • 1 year ago
I don't see why you couldn't use a LoRA to change clothes. I technically gave this guy a Mario outfit when he wasn't wearing one, but if, for example, you have someone dressed as the character, you can probably get some amazing results.
@Ray-01-01 • 1 year ago
Bro, I wanted to ask you something, could you tell me please? Have you seen the AI videos that show the 'evolution of something', i.e. how something changed over time? (For example, there is an AI video showing the 'evolution of fashion': at the beginning the animation shows fashion styles from the start of the last century, then the 50s-60s-70s and so on up to our time.) Please help, bro - I've tried to do it a thousand times in Deforum, but I can't get that kind of animation at all. (I know the question doesn't apply to this video, but I hope for your answer nevertheless.)
@elijahdavis-xh2zt • 11 months ago
How would you compare Stable Warpfusion with Deforum Stable Diffusion?
@epicddgt • 1 year ago
Hi enigmatic, I've been watching your videos for some time. I was wondering, do you know of or recommend a tutorial for installing it on a Mac with an M1 chip? Hope you have a great week!
@enigmatic_e • 1 year ago
I don’t know unfortunately, but maybe this helps? github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon
@yanning5116 • 11 months ago
Hello, thank you very much for your video. There is one thing: I can't open your link for the settings file. Is there another way to solve this problem? Thank you very much again.
@zeeshistargamer • 11 months ago
Great, wonderful video. But please can you help me with this error? I watch your videos daily, but I hit this error when I enable ControlNet in Deforum to generate the video: "Error: 'NoneType' object is not iterable'. Before reporting, please check your schedules/init values. Full error message is in your terminal/cli." If I disable ControlNet there is no error, but the video doesn't match the reference video. I've been trying to solve this for a month but haven't found a solution. Please can you help me with this... Thanks ♥♥♥
@Moise_s. • 10 months ago
Just one question: copying and pasting the settings file isn't working for me.
@dagovegas • 6 months ago
I've tried to replicate it, but this error always pops up: Error: images do not match. Please, check your schedules/init values. Does anyone know how to fix it?
@enigmatic_e • 6 months ago
Hm, not sure why. What kind of checkpoint are you using?
@m3dia_offline • 1 year ago
How would you compare this to Warpfusion in terms of being flicker-free and consistent?
@HopsinThaGoat • 1 year ago
Even the one with the comp set to 1 was fire.
@enigmatic_e • 1 year ago
👍🏽
@Fabzter1 • 1 year ago
Great video! Would this work in Colab?
@enigmatic_e • 1 year ago
I haven't tried this in Colab, so I'm not sure, sorry.
@user-ld3si3zs9o • 2 months ago
Does anybody know how to start from an original animated or comic character and make it human?
@AIWarper • 1 year ago
When I select the ControlNet tab I see CN1-5 and the enable checkbox, but I do not see any settings available - any thoughts on why this would be? Edit: reloading the terminal and UI let me enable CN1, but the other tabs are still blank. Edit 2: It happens when I import your settings. I suspect I have to input them manually, as the ControlNet tabs get stuck loading forever.
@AIWarper • 1 year ago
Edit 3: Manually inputting all the settings worked. Importing from a settings file causes my WebUI to freeze on loading forever. I am also encountering this error any time I change the resolution from 512 x 512 to anything else (was trying 540 x 760): "error: images do not match. check your schedules/init values please. also make sure you don't have a backwards slash in any of your paths - use / instead of \." I set the inputs to all defaults on a fresh run and slowly changed the settings until I could recreate the error... and it happens from the resolution change.
@enigmatic_e • 1 year ago
Don't manually type in the resolution, just use the slider; Deforum has a strange issue with typed-in exact values.
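A plausible reason the typed-in 540 x 760 above fails: Stable Diffusion's latent grid needs width and height divisible by 8, and the Deforum sliders step in larger increments, so a value like 540 produces frames that no longer line up with the rest of the pipeline. A tiny Python sketch of snapping a requested dimension the way the sliders effectively do - the 64-pixel step is an assumption based on the slider behavior, not Deforum's code:

    def snap_dimension(value: int, step: int = 64) -> int:
        # Round a requested width/height to the nearest multiple of `step`.
        return max(step, round(value / step) * step)

    print(snap_dimension(540), snap_dimension(760))  # -> 512 768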
@TheMaxvin • 11 months ago
Which type of ControlNet did you use for this animation?
@enigmatic_e • 11 months ago
It’s in the settings file I provided in the description
@TheMaxvin • 11 months ago
@enigmatic_e Thanks. One more question after all: does the sequence in which the ControlNet models are applied matter?
@falialvarez • 10 months ago
I used the parameters from this guy: kzbin.info/www/bejne/qKrXoH6KqJJgj5Y, but with your ControlNet configuration, changing only the order and the weights: 1st Tile, weight 1.5; 2nd OpenPose Full, weight 1; 3rd HED/SoftEdge, weight 1; and 4th TemporalNet. The coherence is amazing. Did you see that the TemporalNet model has a version 2? I tried to use it, but in Deforum I can't. Congratulations on your videos, I'm a fan.
@TheMaxvin • 11 months ago
SD tells me that TemporalNet is an unofficial model and advises me not to use it.
@enigmatic_e • 11 months ago
It is unofficial, but it should be safe. It's up to you, though. It's the same developer who created TemporalKit; she's on Twitter sharing updates.
@TheMaxvin • 11 months ago
@enigmatic_e No problem as far as I'm concerned - A1111 is just being nervous :)
@eblake4250 • 1 year ago
Promo-SM 💃
@MalikKayaalp • 1 year ago
Amazing. Hello, I really like the tutorial videos you make, and I am grateful to you for them. I only ask you for one thing: how can we make more abstract works? Can you make a lesson on this? For example, I tried to make a smoke animation with different colors, more abstract, but I was not successful. I think I need to look more into TemporalNet. Thank you.
@cyberdogs_ • 1 year ago
How do I solve this error? (Error: 'A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.'. Before reporting, please check your schedules/init values. Full error message is in your terminal/CLI.)...🥲🥲
@sebastiendaniel5794 • 11 months ago
I had this issue; I changed the checkpoint to one compatible with SD 1.5 and the error was gone.
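If swapping in an SD 1.5-compatible checkpoint doesn't clear it, the error text itself lists the other two standard A1111 remedies: enable "Upcast cross attention layer to float32" under Settings > Stable Diffusion, or add --no-half to the COMMANDLINE_ARGS line in webui-user.bat. Which one helps depends on your GPU.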
@TheKuzmann • 1 year ago
@enigmatic_e Where did you find the yaml file? I'm looking on Hugging Face, but there is no diff_control_sd15_temporalnet_fp16.yaml.
@TheKuzmann • 1 year ago
Oh right, thanks.
@FortniteJama • 10 months ago
Really happy with the results I'm getting after your tutorial; still a way to go, but way less frustration. I think you showing the frustration aspect helped me push through. Thank you, I finally feel like I'm making progress. kzbin.info/www/bejne/m5bdY4OQnNifn6c
@enigmatic_e • 10 months ago
So happy to hear this!
@eyeless98 • 1 year ago
Great video!!! Have you noticed how much VRAM 3 ControlNets use? I want to upgrade from a 3060 Ti to a 4070 for the extra 4GB of VRAM, because right now I can't use 3 ControlNets without a generation taking 8 hours.
@enigmatic_e • 1 year ago
I used to run 3 ControlNets when I had a 3080 10GB, but I couldn't push the resolution too high.
@joonienyc • 1 year ago
@enigmatic_e Same here - a 3060 can't do more than 3, it's just too long of a wait.