ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

30,883 views

Abe aTech

1 day ago

Comments: 147
@MindsMystery24 · 1 month ago
At first I didn't understand why you made the 06:35 "Supercharge the Workflow" part, but after getting a MemoryError I now know what to do. We need more thinkers like you.
@Hebrideanphotography · 5 months ago
People like you are so important. Too many gatekeepers out there. ❤
@ZergRadio · 27 days ago
I really thought this was going to be junk like so many other video/animation tutorials I've already tried, but I'm very impressed by it, simply because it worked. And my video came out really nice. Subscribed!
@AI.Studios.4U · 1 month ago
Thanks to you I have created my first video using ComfyUI! Your video is priceless!
@gorkemtekdal · 8 months ago
Great video! I want to ask: can we use an init image for this workflow like we do in Deforum? I need the video to start with a specific image on the first frame, then change through the prompts. Do you know how that's possible in ComfyUI / AnimateDiff? Thank you!
@abeatech · 8 months ago
I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
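To see why 4 guide images can steer 96 frames, it helps to picture the fade-mask schedule. The snippet below is only an illustration of that idea (evenly spaced keyframes with piecewise-linear crossfade weights), not the actual code of the workflow's CreateFadeMaskAdvanced node:

```python
def keyframes(n_frames, n_images):
    # Evenly space the guide images across the animation.
    step = n_frames // n_images
    return [i * step for i in range(n_images)]

def weights_at(t, kfs):
    # Piecewise-linear crossfade between consecutive keyframes.
    w = [0.0] * len(kfs)
    if t >= kfs[-1]:
        w[-1] = 1.0  # hold the final image through the last segment
        return w
    for i in range(len(kfs) - 1):
        if kfs[i] <= t < kfs[i + 1]:
            a = (t - kfs[i]) / (kfs[i + 1] - kfs[i])
            w[i], w[i + 1] = 1.0 - a, a
            return w
    return w

kfs = keyframes(96, 4)      # [0, 24, 48, 72]
print(weights_at(12, kfs))  # [0.5, 0.5, 0.0, 0.0] — halfway between images 1 and 2
```

With this picture, stretching the animation means scaling both the total frame count and the keyframe positions together, which is why changing only one number often has no visible effect.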
@jdsguam · 5 months ago
I've been having fun with this workflow for a few days already. It is amazing what can be done on a laptop in 2024.
@ted328 · 8 months ago
Literally the answer to my prayers, have been looking for exactly this for MONTHS
@1010mrsB · 1 month ago
You're amazing!! I was lost for so long and when I found this video I was found
@CoqueTornado · 8 months ago
Great tutorial. I'm wondering... how much VRAM does this setup need?
@abeatech · 8 months ago
I've heard of people running this successfully on as little as 8GB VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@CoqueTornado · 8 months ago
@@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!
@alessandrogiusti1949 · 7 months ago
After following many tutorials, you're the only one who got me to the results in a very clear way. Thank you so much!
@RokSlana · 3 months ago
This looks awesome. I gotta give it a try asap. Thanks for sharing.
@paluruba · 7 months ago
Thank you for this video! Any idea what to do when the videos are blurry?
@jesseybijl2104 · 7 months ago
Same here, any answer?
@EternalAI-v9b · 1 month ago
Hello, how did you make that effect with your eyes at 0:20, please?
@stinoway · 3 months ago
Great video!! Hope you'll drop more knowledge in the future!
@retrotiker · 4 months ago
Great tutorial! Your content is super helpful. Just wondering, where are you these days? We'd love to see more Comfy UI tutorials from you!
@andrruta868 · 5 months ago
The transitions between images are too fast for me, and I couldn't find where to adjust the transition time. I'd be grateful for advice.
@SAMEGAMAN · 1 month ago
Thank you for this video❤❤
@AlderoshActual-z3k · 5 months ago
Awesome tutorial! I've been getting used to the ComfyUI workflow... love the batch image generation!! However, do you have any tips on how to make LONGER text-to-video animations? I've seen several YT channels with very long-format morphing videos, well over an hour. I'd like to create videos averaging around 1 minute, but can't sort out how to do it!
@TechWithHabbz · 8 months ago
You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁
@abeatech · 8 months ago
Thanks for the sub!
@SylvainSangla · 7 months ago
Thanks a lot for sharing this, a very precise and complete guide! 🥰 Cheers from France!
@GNOM_ · 3 months ago
Hello! Big thanks to you, bro. I learned how to make different animations from your video. I watched many other tutorials, but they didn't work for me. You explained everything very clearly. Tell me, can I insert motion masks myself, or do I have to insert link addresses only? Are there any other websites with different masks? Greetings from UKRAINE!!!
@tadaizm · 3 months ago
Did you figure it out?
@GNOM_ · 2 months ago
@@tadaizm Yes, I figured it out. You just copy your mask's path and paste it. Unfortunately there are few masks, and downloading other masks is also a problem; they're hard to find.
@user-yo8pw8wd3z · 7 months ago
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
@hoptoad · 6 months ago
This is great! Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and does a new animation with different source images multiple times?
@Ai_mayyit · 7 months ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture' (your video timestamp: 04:20)
@SF8008 · 7 months ago
Amazing! Thanks a lot for this!!! Btw, which nodes do I need to disable to get back to the original flow (the one based only on input images and not on prompts)?
@EmoteNation · 4 months ago
Bro, you're doing a really good job. I have only one question: in this video you did image-to-video morphing, so can you do video-to-video morphing? Or can you make a morphing video using only text / a prompt?
@mcqx4 · 8 months ago
Nice tutorial, thanks!
@abeatech · 8 months ago
Glad it was helpful!
@juliensylvestreeee · 3 months ago
Nice tutorial, even if it was very hard for me to set up. Which SD 1.5 model do you recommend installing? I just want to morph input images with a very realistic render. If someone could help :3
@Injaznito1 · 7 months ago
NICE! I tried it and it works great. Thanks for the tut! One question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!
@lucagenovese7207 · 5 months ago
Insane!!!!! Ty so much!
@yannickweineck4302 · 1 month ago
In my case it doesn't really use the images I feed it. I've tried to find the settings that result in almost no morphing, with all 4 original images basically standing still, but I can't seem to find them.
@pedrobrandao7664 · 5 months ago
Great tutorial
@petertucker455 · 6 months ago
Hi Abe, I found the final animation output is wildly different in style and aesthetic from the initial input images. Any tips for retaining the overall style? Also, have you got this workflow to work with SDXL?
@人海-h5b · 8 months ago
Help! I encountered this error while running it: Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech · 8 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one; the AnimateDiff model in the workflow only works with SD1.5. Or b) an issue with your IPAdapter node: try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
@ComfyCott · 7 months ago
Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!
@chinyewcomics · 6 months ago
Hi, does anybody know how to add more images to create a longer video?
@Caret-ws1wo · 6 months ago
Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown, lol. Is there a reason for this?
@DanielMatotek · 1 day ago
Same. Did you ever figure it out?
@Caret-ws1wo · 16 hours ago
@@DanielMatotek This was a while ago, but I believe I changed models.
@goran-mp-kamenovic6293 · 5 months ago
At 5:30, what do you do to see the duration? :)
@evgenika2013 · 6 months ago
Everything is great, but I get a blurry result on my horizontal artwork. Any suggestion on what to check?
@aslgg8114 · 8 months ago
What should I do to make the reference image persistent?
@Danaeprojectful · 2 months ago
Hi, I would like the first and last frames to exactly match the images I uploaded, without being reinterpreted. Is this possible? If so, how should I do it? Thanks.
@MariusBLid · 8 months ago
Great stuff, man! Thank you 😀 What are your specs, btw? I only have 8GB VRAM.
@produccionesvoid · 6 months ago
When I click "Install Missing Nodes" in the Manager, it doesn't work and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." What can I do about that?
@MSigh · 7 months ago
Excellent! 👍👍👍
@Murdalizer_studios · 5 months ago
Nice, bro. Thank you 🖖
@frankiematassa1689 · 7 months ago
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
@juginnnn · 3 months ago
How can I fix "Motion module 'AnimateLCM_sd15_t2v.ckpt' is intended for SD1.5 models, but the provided model is type SD3."???
@damird9635 · 6 months ago
Working, but when I select "plus high strength", I get a CLIP Vision error. What am I missing? I downloaded everything... is ViT-G the problem for some reason?
@Halfgawd_Halfdevil · 7 months ago
Managed to get this running. It does okay, but I'm not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a Shutterstock overlay near the bottom of the clip; it's translucent but noticeable and kind of ruins everything. Any way to eliminate that artifact?
@tetianaf5172 · 7 months ago
Hi! I get this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use a 1.5 checkpoint. Please help.
@Cats_Lo_Ve · 6 months ago
How can I get a progress bar at the top of the screen like yours? I had to reinstall ComfyUI completely for this workflow. I installed crystools but the progress bar doesn't appear at the top :/ Thank you for your video, you are a god!
@GiancarloBombardieri · 6 months ago
It worked fine, but now it throws an error at the Load Video Path node. Is there an update?
@randomprocess7876 · 2 months ago
Does anybody know how to scale this to more than 4 images? I've tried, but the masks from the cloned nodes are messing up the animation.
@randomprocess7876 · 2 months ago
I want to make longer videos.
@cabb_ · 8 months ago
ipiv did an incredible job with this workflow! Thanks for the tutorial.
@SapiensVirtus · 6 months ago
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and works I generate are free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance.
@velvetjones8634 · 8 months ago
Very helpful, thanks!
@abeatech · 8 months ago
Glad it was helpful!
@kwondiddy · 7 months ago
I'm getting errors when trying to run: a few items that say "value not in list: ckpt_name:", "value not in list: lora_name", and "value not in list: vae_name:". I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
@axxslr8862 · 7 months ago
In my ComfyUI there is no Manager option... help, please.
@ESLCSDivyasagar · 7 months ago
Search on YouTube for how to install it.
@ollyevans636 · 5 months ago
I don't have an ipadapter folder in my models folder; should I just make one?
@AlexDisciple · 6 months ago
Thanks for this. Do you know what could be causing this error? Error occurred when executing KSampler: Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead
@AlexDisciple · 6 months ago
I figured out the problem: I was using the wrong ControlNet. I'm having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be the same aspect ratio as the output?
@AlexDisciple · 6 months ago
OK, found the solution here too: I was using a photorealistic model, which the workflow somehow doesn't seem to like. Switching to Juggernaut fixed it.
@ywueeee · 7 months ago
Could one add some kind of IPAdapter to put your own face into the transformation?
@saundersnp · 7 months ago
I've encountered this error: Error occurred when executing RIFE VFI: Tensor type unknown to einops
@yomi0ne · 1 month ago
Copying the video address of the animation doesn't work; it copies a .webm link. Please help :(
@MichaelL-mq4uw · 8 months ago
Why do you need ControlNet at all? Can it be skipped to morph without any mask?
@devoiddesign · 7 months ago
Hi! Any suggestions for a missing IPAdapter? I'm confused because I didn't get an error to install or update, and I have all of the IPAdapter nodes installed... the process stopped on the "IPAdapter Unified Loader" node:
!!! Exception during processing !!! IPAdapter model not found.
Traceback (most recent call last):
File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
@tilkitilkitam · 7 months ago
Same problem.
@tilkitilkitam · 7 months ago
ip-adapter_sd15_vit-G.safetensors — install this from the Manager.
@devoiddesign · 7 months ago
@@tilkitilkitam Thank you for responding. I already had the model installed, but it was not being seen. I ended up restarting ComfyUI completely after updating everything from the Manager, instead of only doing a hard refresh, and that fixed it.
@efastcruex · 7 months ago
Why is my generated animation very different from the reference images?
@cohlsendk · 7 months ago
Is there a way to increase the frames/batch size for the fade mask? Everything over 96 is messing up the fade mask -.-
@cohlsendk · 7 months ago
Got it :D
@ellopropello · 4 months ago
How awesome is that! But what needs to be done to get rid of these errors? When loading the graph, the following node types were not found: ADE_ApplyAnimateDiffModelSimple, VHS_SplitImages, SimpleMath+, ControlNetLoaderAdvanced, ADE_MultivalDynamic, VHS_VideoCombine, BatchCount+, ADE_UseEvolvedSampling, FILM VFI, RIFE VFI, Color Correct (mtb), VHS_LoadVideoPath, IPAdapterUnifiedLoader, ACN_AdvancedControlNetApply, ADE_LoadAnimateDiffModel, ADE_LoopedUniformContextOptions, IPAdapterAdvanced, CreateFadeMaskAdvanced
@yakiryyy · 8 months ago
Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images. The results I get are pretty different from the reference images. Am I wrong in my assumption?
@abeatech · 8 months ago
You're right: it uses the reference images (4 frames out of 96 total frames) as a starting point and generates the additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@efastcruex · 7 months ago
@@abeatech Is there any way to make the result more like the reference images?
@ImTheMan725 · 7 months ago
Why can't you morph 20-50 pictures?
@CarCrashesBeamngDrive · 7 months ago
Cool, how long did it take you?
@TinyLLMDemos · 7 months ago
Where do I get your input images?
@rowanwhile · 8 months ago
Brilliant video. thanks so much for sharing your knowledge.
@rayzerfantasy · 3 months ago
How much GPU VRAM is needed?
@balibike9024 · 4 months ago
I've got an error message:
Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
raise Exception("IPAdapter model not found.")
What should I do?
@balibike9024 · 4 months ago
Success now! I reinstalled ip-adapter_sd15_vit-G.safetensors from the Manager.
@zarone9270 · 7 months ago
thx Abe!
@DanielMatotek · 1 day ago
Tried for ages but couldn't make it work; every image is very pixelated and crazy. Can't work it out.
@TinyLLMDemos · 7 months ago
How do I kick it off?
@MACH_SDQ · 7 months ago
Goooooood
@CS.-ph2fr · 5 months ago
How do I add more than 4 images?
@0x0abb · 18 hours ago
I may be missing something, but the workflow is different, so it's not working.
@Adrianvideoedits · 7 months ago
You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea, though.
@7xIkm · 6 months ago
Idk, maybe a seed? Efficiency nodes?
@rudyNok · 3 months ago
Hey man, not sure, but it looks like there's a node in the workflow called Seed (rgthree), and clicking the bottom button on that node, called "Use last queued seed", does the trick. Try it.
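The reason reusing the seed works: with the same seed, the sampler starts from identical noise, so the upscale pass reproduces the same animation the preview pass generated. A minimal illustration of that property in plain Python (not ComfyUI code; `make_noise` is a stand-in for the sampler's noise generator):

```python
import random

def make_noise(seed, n):
    # Deterministic pseudo-noise: the same seed always yields the same values.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

preview_noise = make_noise(42, 4)
upscale_noise = make_noise(42, 4)          # reusing the last queued seed
assert preview_noise == upscale_noise      # identical starting point -> same animation
assert make_noise(43, 4) != preview_noise  # a fresh seed means a new batch
```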
@Blaqk_Frozste · 3 months ago
I copied pretty much everything you did, and my animation outputs look super low quality?
@rooqueen6259 · 7 months ago
Has anyone else run into the "loading 2 new models" step stopping at 0%? I also had a case where "loading 3 new models" reached 9% and went no further. What is the problem? :c
@creed4788 · 7 months ago
Vram required?
@Adrianvideoedits · 7 months ago
16GB for upscaled.
@creed4788 · 7 months ago
@@Adrianvideoedits Could you make the videos first and then close and load the upscaler to improve the quality, or does it have to be all together? Can it be done in 2 different workflows?
@Adrianvideoedits · 7 months ago
@@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.
@WalkerW2O · 7 months ago
Hi Abe aTech, very informative; I like your work very much.
@artificiallyinspired · 5 months ago
"It's nothing too intimidating," he says, then shows a workflow that takes up the entire screen. Lol! Thanks for this tutorial; I've been looking for something like this for days now. I'm switching from A1111 to ComfyUI, and the changes are a little more intimidating to get a handle on than I originally expected. Thanks for this.
@artificiallyinspired · 5 months ago
I get this weird error when it gets to the ControlNet; not sure if you know what's wrong? 'ControlNet' object has no attribute 'latent_format'. I have the QR Code ControlNet loaded.
@eyoo369 · 5 months ago
@@artificiallyinspired Make sure it's the same name. A good habit I have when loading new workflows is to go through all the nodes where you select a model or LoRA and make sure the one I have locally is selected. Not everyone follows the same naming conventions: sometimes you download a workflow where someone's IPAdapter is named "ip-adapter_plus.safetensors" while yours is "ip-adapter-plus.safetensors". Always good to re-select.
@pro_rock1910 · 7 months ago
❤‍🔥❤‍🔥❤‍🔥
@ErysonRodriguez · 8 months ago
Noob question: why is my result so different from my input?
@ErysonRodriguez · 8 months ago
I mean, the images I loaded produce a different output instead of transitioning between them.
@abeatech · 8 months ago
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.
@人海-h5b · 8 months ago
Help! I encountered this error while running it:
@人海-h5b · 8 months ago
Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech · 8 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one; the AnimateDiff model in the workflow only works with SD1.5. Or b) an issue with your IPAdapter node: try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
@Halfgawd_Halfdevil · 7 months ago
@@abeatech The note says to install it in the clip_vision folder, but that's not it: none of the preloaded models are there, and the one newly installed there doesn't appear in the dropdown selector. So if it's not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just have the IPAdapter Plus node?
@vivektyagi6848 · 2 months ago
Awesome, but could you slow it down, please?
@nonprofit7163 · 5 months ago
Did anyone else run into errors while following this video?
@3djramiclone · 7 months ago
This is not for beginners; put that in the description, mate.
@kaikaikikit · 7 months ago
What are you crying about... go find a beginner class if it's too hard to understand...
@suetologPlay · 5 months ago
It's not clear at all what you were doing there! You just clicked through everything quickly and said "look what I got". You didn't show where, what, or how.
@anthonydelange4128 · 6 months ago
It's morphing time...
@goran-mp-kamenovic6293 · 5 months ago
Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 516, in load_checkpoint
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
@financialjourney4u · 13 days ago
Thanks for this. I've followed the steps shown but I'm seeing this error message; what am I doing wrong?
Failed to validate prompt for output 53:
* CheckpointLoaderSimple 564: Value not in list: ckpt_name: 'SD1.5\juggernaut_reborn.safetensors' not in ['dreamshaper_8.safetensors', 'flux1-schnell-bnb-nf4.safetensors', 'juggernaut_reborn.safetensors', 'realvisxlV50_v50LightningBakedvae.safetensors', 'revAnimated_v2Rebirth.safetensors']
* LoraLoaderModelOnly 563: Value not in list: lora_name: 'SD1.5\Hyper-SD15-8steps-lora.safetensors' not in ['AnimateLCM_sd15_t2v_lora.safetensors', 'Hyper-SD15-8steps-lora.safetensors', 'flux1-redux-dev.safetensors', 'v3_sd15_adapter.ckpt', 'vae-ft-mse-840000-ema-pruned.ckpt']
Output will be ignored
@zems_bongo · 6 months ago
I don't understand why it doesn't work for me; I get this type of message:
Error occurred when executing CheckpointLoaderSimple: 'NoneType' object has no attribute 'lower'
File "/home/ubuntu/ComfyUI/execution.py", line 151, in recursive_execute
File "/home/ubuntu/ComfyUI/execution.py", line 81, in get_output_data
File "/home/ubuntu/ComfyUI/execution.py", line 74, in map_node_over_list
File "/home/ubuntu/ComfyUI/nodes.py", line 516, in load_checkpoint
File "/home/ubuntu/ComfyUI/comfy/sd.py", line 446, in load_checkpoint_guess_config
File "/home/ubuntu/ComfyUI/comfy/utils.py", line 13, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
@miukatou · 6 months ago
I'm sorry, I need help. I'm a complete beginner and can't find any SD 1.5 model. Where do I download one? Also, I cannot find an ipadapter folder in my models path; do I need to create a folder named "ipadapter" myself? 🥲
@amunlevy2721 · 7 months ago
Getting errors that nodes are missing even after installing IPAdapter Plus... missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader.
@white_friend · 6 months ago
Try "Update All" in the Manager menu.
@xionnine · 3 months ago
I'm having the same issue.