At first I didn't understand why you made the 06:35 "Supercharge the Workflow" section, but after getting a MemoryError I now know what to do. We need more thinkers like you.
@Hebrideanphotography 5 months ago
People like you are so important. Too many gatekeepers out there. ❤
@ZergRadio 27 days ago
I really thought this was going to be junk like so many other video/animation tutorials I'd already tried, but I am very impressed by it, simply because it worked. And my video came out really nice. Subscribed!
@AI.Studios.4U a month ago
Thanks to you I have created my first video using ComfyUI! Your video is priceless!
@gorkemtekdal 8 months ago
Great video! I want to ask: can we use an init image with this workflow like we do in Deforum? I need the video to start with a specific image on the first frame and then change through the prompts. Do you know how that's possible in ComfyUI / AnimateDiff? Thank you!
@abeatech 8 months ago
I haven't personally used Deforum, but it sounds like the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
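For illustration, here is a minimal Python sketch of that scheduling idea. The exact keyframe values and fade length used by the workflow's fade-mask node aren't given in this thread, so the numbers below are assumptions, not the workflow's actual settings:

```python
# Sketch: how 4 init images might be scheduled across a 96-frame batch.
# Each image gets a mask weight that is 1.0 during "its" segment and
# cross-fades linearly into the neighboring segments.

TOTAL_FRAMES = 96
NUM_IMAGES = 4
FADE = 8  # assumed cross-fade length, in frames

def mask_weight(frame: int, image_index: int) -> float:
    """Return the mask weight (0.0-1.0) of one init image at one frame."""
    segment = TOTAL_FRAMES // NUM_IMAGES            # 24 frames per image
    start, end = image_index * segment, (image_index + 1) * segment
    if start <= frame < end:
        return 1.0
    if start - FADE <= frame < start:               # fading in
        return (frame - (start - FADE)) / FADE
    if end <= frame < end + FADE:                   # fading out
        return 1.0 - (frame - end) / FADE
    return 0.0

for frame in (0, 20, 24, 47, 48, 95):
    print(frame, [round(mask_weight(frame, i), 2) for i in range(NUM_IMAGES)])
```

The takeaway is that only a handful of the 96 frames are pinned directly to the reference images; everything in between is generated, which is why intermediate frames can drift from the inputs.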
@jdsguam 5 months ago
I've been having fun with this workflow for a few days already. It is amazing what can be done on a laptop in 2024.
@ted328 8 months ago
Literally the answer to my prayers; I have been looking for exactly this for MONTHS
@1010mrsB a month ago
You're amazing!! I was lost for so long, and when I found this video I was found
@CoqueTornado 8 months ago
Great tutorial! I'm wondering... how much VRAM does this setup need?
@abeatech 8 months ago
I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi
@CoqueTornado 8 months ago
@@abeatech Thank you!! I will try both suggestions! Congrats on the channel!
@alessandrogiusti1949 7 months ago
After following many tutorials, you are the only one who got me to the results in a very clear way. Thank you so much!
@RokSlana 3 months ago
This looks awesome. I gotta give it a try asap. Thanks for sharing.
@paluruba 7 months ago
Thank you for this video! Any idea what to do when the videos are blurry?
@jesseybijl2104 7 months ago
Same here, any answer?
@EternalAI-v9b a month ago
Hello, how did you make that effect with your eyes at 0:20 please?
@stinoway 3 months ago
Great video!! Hope you'll drop more knowledge in the future!
@retrotiker 4 months ago
Great tutorial! Your content is super helpful. Just wondering, where are you these days? We'd love to see more Comfy UI tutorials from you!
@andrruta868 5 months ago
The transitions between images are too fast for me, and I couldn't find where to adjust the transition time. I would be grateful for advice.
@SAMEGAMAN a month ago
Thank you for this video❤❤
@AlderoshActual-z3k 5 months ago
Awesome tutorial! I've been getting used to the ComfyUI workflow... love the batch image generation!! However, do you have any tips on how to make LONGER text-to-video animations? I've seen several YT channels with very long-format morphing videos, well over an hour. I'd like to create videos that average around 1 minute, but I can't sort out how to do it!
@TechWithHabbz 8 months ago
You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁
@abeatech 8 months ago
Thanks for the sub!
@SylvainSangla 7 months ago
Thanks a lot for sharing this, a very precise and complete guide! 🥰 Cheers from France!
@GNOM_ 3 months ago
Hello! Big thanks to you, bro. I learned how to make different animations from your video. I watched many other tutorials, but they didn't work for me. You explained everything very clearly. Tell me, can I insert motion masks myself, or do I have to insert link addresses only? Are there any other websites with different masks? Greetings from UKRAINE!!!
@tadaizm 3 months ago
Did you figure it out?
@GNOM_ 2 months ago
@@tadaizm Yes, I figured it out. You just copy your mask as a path and paste it. Unfortunately there are few masks, and downloading other masks is also a problem; they're hard to find.
@user-yo8pw8wd3z 7 months ago
Good video. Where can I find the link to the additional video masks? I don't see it in the description.
@hoptoad 6 months ago
This is great! Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder and it runs through and does a new animation with different source images multiple times?
@Ai_mayyit 7 months ago
Error occurred when executing VHS_LoadVideoPath: module 'cv2' has no attribute 'VideoCapture' (your video timestamp: 04:20)
@SF8008 7 months ago
Amazing! Thanks a lot for this!!! Btw, which nodes do I need to disable in order to get back to the original flow (the one that is based only on input images and not on prompts)?
@EmoteNation 4 months ago
Bro, you're doing a really good job. I have only one question: in this video you did image-to-video morphing, but can you do video-to-video morphing? Or can you make a morphing video using only text / a prompt?
@mcqx4 8 months ago
Nice tutorial, thanks!
@abeatech 8 months ago
Glad it was helpful!
@juliensylvestreeee 3 months ago
Nice tutorial, even if it was very hard for me to set up. Which SD 1.5 model do you recommend installing? I just want to morph input images with a very realistic render. If someone could help :3
@Injaznito1 7 months ago
NICE! I tried it and it works great. Thanks for the tut! Question though: I tried changing the 96 to a larger number so the change between pictures takes a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!
@lucagenovese7207 5 months ago
Insane!!!!! Ty so much!
@yannickweineck4302 a month ago
In my case it doesn't really use the images I feed it. I've tried to find the settings that would result in almost no morphing, with all 4 original images basically standing still, but I can't seem to find them.
@pedrobrandao7664 5 months ago
Great tutorial
@petertucker455 6 months ago
Hi Abe, I found the final animation output is wildly different in style and aesthetic from the initial input images. Any tips for retaining the overall style? Also, have you gotten this workflow to work with SDXL?
@人海-h5b 8 months ago
Help! I encountered this error while running it: Error occurred when executing IPAdapterUnifiedLoader: Module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech 8 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one instead (the AnimateDiff model in the workflow only works with SD1.5), or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
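For case b), a quick way to sanity-check the files is a few lines of Python. This is a minimal sketch that assumes the default portable ComfyUI folder layout and commonly used filenames; your install location and exact model names may differ:

```python
# Sketch: check that the IPAdapter-related files are where the loader
# usually expects them. Paths/filenames are assumptions; adjust as needed.
from pathlib import Path

ROOT = Path("ComfyUI")  # change to your ComfyUI install directory

expected = {
    "IPAdapter model":       ROOT / "models" / "ipadapter" / "ip-adapter-plus_sd15.safetensors",
    "CLIP Vision model":     ROOT / "models" / "clip_vision" / "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "IPAdapter custom node": ROOT / "custom_nodes" / "ComfyUI_IPAdapter_plus",
}

for label, path in expected.items():
    print(("OK     " if path.exists() else "MISSING"), label, "->", path)
```

If anything prints MISSING, download the file into that folder (or reinstall the custom node) and restart ComfyUI before retrying.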
@ComfyCott 7 months ago
Dude, I loved this video! You explain things very well, and I love how you explain in detail as you build out strings of nodes! Subbed!
@chinyewcomics 6 months ago
Hi, does anybody know how to add more images to create a longer video?
@Caret-ws1wo 6 months ago
Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown lol. Is there a reason for this?
@DanielMatotek a day ago
Same. Did you ever figure it out?
@Caret-ws1wo 16 hours ago
@@DanielMatotek This was a while ago, but I believe I changed models
@goran-mp-kamenovic6293 5 months ago
5:30 What do you do to see the duration? :)
@evgenika2013 6 months ago
Everything is great, but I get a blurry result on my horizontal artwork. Any suggestion on what to check?
@aslgg8114 8 months ago
What should I do to make the reference image persistent?
@Danaeprojectful 2 months ago
Hi, I would like the first and last frames to exactly match the images I uploaded, without being reinterpreted. Is this possible? If so, how should I do it? Thanks
@MariusBLid 8 months ago
Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB of VRAM
@produccionesvoid 6 months ago
When I click "Install Missing Nodes" in the Manager, it doesn't work and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser..." What can I do about that?
@MSigh 7 months ago
Excellent! 👍👍👍
@Murdalizer_studios 5 months ago
Nice, bro. Thank you 🖖
@frankiematassa1689 7 months ago
Error occurred when executing IPAdapterBatch: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]). I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.
@juginnnn 3 months ago
How can I fix "Motion module 'AnimateLCM_sd15_t2v.ckpt' is intended for SD1.5 models, but the provided model is type SD3"???
@damird9635 6 months ago
It's working, but when I select "plus high strength", I get a CLIP Vision error. What am I missing? I downloaded everything... Is ViT-G the problem for some reason?
@Halfgawd_Halfdevil 7 months ago
Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I've also noticed a Shutterstock overlay near the bottom of the clip; it's translucent but noticeable, and it kind of ruins everything. Any way to eliminate that artifact?
@tetianaf5172 7 months ago
Hi! I get this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Even though I use a 1.5 checkpoint. Please help
@Cats_Lo_Ve 6 months ago
How can I get a progress bar at the top of the screen like yours? I had to reinstall ComfyUI completely for this workflow. I installed crystools, but the progress bar doesn't appear at the top :/ Thank you for your video, you are a god!
@GiancarloBombardieri 6 months ago
It worked fine, but now it throws an error at the Load Video Path node. Is there an update?
@randomprocess7876 2 months ago
Does anybody know how to scale this to more than 4 images? I've tried, but the masks from the cloned nodes are messing up the animation.
@randomprocess7876 2 months ago
I want to make longer videos.
@cabb_ 8 months ago
ipiv did an incredible job with this workflow! Thanks for the tutorial.
@SapiensVirtus 6 months ago
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance
@velvetjones8634 8 months ago
Very helpful, thanks!
@abeatech 8 months ago
Glad it was helpful!
@kwondiddy 7 months ago
I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name", and "value not in list: vae_name:". I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
@axxslr8862 7 months ago
In my ComfyUI there is no Manager option...... help please
@ESLCSDivyasagar 7 months ago
Search on YouTube for how to install it
@ollyevans636 5 months ago
I don't have an ipadapter folder in my models folder. Should I just make one?
@AlexDisciple 6 months ago
Thanks for this. Do you know what could be causing this error: Error occurred when executing KSampler: Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead
@AlexDisciple 6 months ago
I figured out the problem: I was using the wrong ControlNet. I'm having a different issue though, where my initial output is very "noisy", as if there were latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?
@AlexDisciple 6 months ago
Found the solution here too: I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to Juggernaut fixed it.
@ywueeee 7 months ago
Could one add some kind of IPAdapter to put your own face into the transformation?
@saundersnp 7 months ago
I've encountered this error: Error occurred when executing RIFE VFI: Tensor type unknown to einops
@yomi0ne a month ago
Copying the video address of the animation doesn't work; it copies a .webm link. Please help :(
@MichaelL-mq4uw 8 months ago
Why do you need ControlNet at all? Can it be skipped, morphing without any mask?
@devoiddesign 7 months ago
Hi! Any suggestions for the missing IPAdapter? I'm confused, because I didn't get an error telling me to install or update anything, and I have all of the IPAdapter nodes installed... The process stopped on the "IPAdapter Unified Loader" node.
!!! Exception during processing!!! IPAdapter model not found.
Traceback (most recent call last):
  File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
@tilkitilkitam 7 months ago
Same problem
@tilkitilkitam 7 months ago
ip-adapter_sd15_vit-G.safetensors - install this from the Manager
@devoiddesign 7 months ago
@@tilkitilkitam Thank you for responding. I already had the model installed, but it was not being seen. I ended up restarting Comfy completely after I updated everything from the Manager, instead of only doing a hard refresh, and that fixed it.
@efastcruex 7 months ago
Why is my generated animation very different from the reference images?
@cohlsendk 7 months ago
Is there a way to increase the frames/batch size for the fade mask? Everything over 96 is messing up the fade mask -.-''
@cohlsendk 7 months ago
Got it :D
@ellopropello 4 months ago
How awesome is that! But what needs to be done to get rid of these errors? When loading the graph, the following node types were not found: ADE_ApplyAnimateDiffModelSimple, VHS_SplitImages, SimpleMath+, ControlNetLoaderAdvanced, ADE_MultivalDynamic, VHS_VideoCombine, BatchCount+, ADE_UseEvolvedSampling, FILM VFI, RIFE VFI, Color Correct (mtb), VHS_LoadVideoPath, IPAdapterUnifiedLoader, ACN_AdvancedControlNetApply, ADE_LoadAnimateDiffModel, ADE_LoopedUniformContextOptions, IPAdapterAdvanced, CreateFadeMaskAdvanced
@yakiryyy 8 months ago
Hey! I've managed to get this working, but I was under the impression this workflow would animate between the given reference images. The results I get are pretty different from the reference images. Am I wrong in my assumption?
@abeatech 8 months ago
You're right: it uses the reference images (4 frames out of 96 total) as a starting point and generates the additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.
@efastcruex 7 months ago
@@abeatech Is there any way to make the result more like the reference images?
@ImTheMan725 7 months ago
Why can't you morph 20/50 pictures?
@CarCrashesBeamngDrive 7 months ago
Cool, how long did it take you?
@TinyLLMDemos 7 months ago
Where do I get your input images?
@rowanwhile 8 months ago
Brilliant video. Thanks so much for sharing your knowledge.
@rayzerfantasy 3 months ago
How much GPU VRAM is needed?
@balibike9024 4 months ago
I've got an error message:
Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.
  File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
    raise Exception("IPAdapter model not found.")
What should I do?
@balibike9024 4 months ago
Success now! I reinstalled ip-adapter_sd15_vit-G.safetensors from the Manager
@zarone9270 7 months ago
thx Abe!
@DanielMatotek a day ago
Tried for ages and couldn't make it work; every image is very pixelated and crazy. I cannot work it out.
@TinyLLMDemos 7 months ago
How do I kick it off?
@MACH_SDQ 7 months ago
Goooooood
@CS.-ph2fr 5 months ago
How do I add more than 4 images?
@0x0abb 18 hours ago
I may be missing something, but the workflow is different, so it's not working
@Adrianvideoedits 7 months ago
You didn't explain the most important part, which is how to run the same batch with and without upscaling. It generates a new batch every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.
@7xIkm 6 months ago
Idk, maybe a seed? Efficiency nodes?
@rudyNok 3 months ago
Hey man, not sure, but it looks like there's a node in the workflow called Seed (rgthree), and clicking the bottom button on that node, called "Use last queued seed", does the trick. Try it.
@Blaqk_Frozste 3 months ago
I copied pretty much everything you did, and my animation outputs look super low quality?
@rooqueen6259 7 months ago
Has anyone else run into the "loading 2 new models" step stopping at 0%? In another case, "loading 3 new models" reached 9% and went no further. What is the problem? :c
@creed4788 7 months ago
VRAM required?
@Adrianvideoedits 7 months ago
16GB for upscaled
@creed4788 7 months ago
@@Adrianvideoedits Could you make the videos first, then close and load the upscaler to improve the quality? Or does it all have to run together, so it can't be done in 2 different workflows?
@Adrianvideoedits 7 months ago
@@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.
@WalkerW2O 7 months ago
Hi Abe aTech, very informative, and I like your work very much.
@artificiallyinspired 5 months ago
"It's nothing too intimidating," he says, then shows a workflow that takes up the entire screen. Lol! Thanks for this tutorial; I've been looking for something like this for days now. I'm switching from A1111 to ComfyUI, and the changes are a little more intimidating to get a handle on than I originally expected. Thanks for this.
@artificiallyinspired 5 months ago
I get this weird error when it gets to the ControlNet; not sure if you know what's wrong? 'ControlNet' object has no attribute 'latent_format'. I have the QR Code ControlNet loaded.
@eyoo369 5 months ago
@@artificiallyinspired Make sure it's the same name. A good habit whenever I load a new workflow is to go through all the nodes where a model or LoRA is selected and make sure the one I have locally is actually checked. Not everyone follows the same naming conventions: you might download a workflow where the IPAdapter is named "ip-adapter_plus.safetensors" while yours is "ip-adapter-plus.safetensors". Always good to re-select.
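That habit can also be scripted. Below is a rough sketch that lists every model-like filename referenced in a workflow JSON and checks whether a file with that exact name exists under your models folder; the workflow filename and paths here are hypothetical, and it assumes the standard export format where filenames sit in each node's widgets_values:

```python
# Sketch: cross-check model filenames in a workflow JSON against local files.
import json
from pathlib import Path

WORKFLOW = Path("morph_workflow.json")  # hypothetical filename
MODELS_DIR = Path("ComfyUI/models")     # adjust to your install

# Collect the names of every file present anywhere under models/.
local_names = {p.name for p in MODELS_DIR.rglob("*") if p.is_file()}

graph = json.loads(WORKFLOW.read_text())
for node in graph.get("nodes", []):
    for value in node.get("widgets_values") or []:
        if isinstance(value, str) and value.endswith((".safetensors", ".ckpt", ".pt")):
            status = "found" if Path(value).name in local_names else "NOT FOUND"
            print(f"{status:9s} {value}  (node: {node.get('type')})")
```

Anything reported NOT FOUND is a candidate for the "value not in list" errors mentioned elsewhere in this thread.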
@pro_rock1910 7 months ago
❤🔥❤🔥❤🔥
@ErysonRodriguez 8 months ago
Noob question: why are my results so different?
@ErysonRodriguez 8 months ago
I mean, the images I loaded produce a different output instead of transitioning
@abeatech 8 months ago
The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. It's also worth double-checking that you have the VAE and the LCM LoRA selected in the settings module.
@人海-h5b 8 months ago
Help! I encountered this error while running it
@人海-h5b 8 months ago
Error occurred when executing IPAdapterUnifiedLoader: module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'
@abeatech 8 months ago
Sounds like it could be a couple of things: a) you might be trying to use an SDXL checkpoint, in which case try an SD1.5 one instead (the AnimateDiff model in the workflow only works with SD1.5), or b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub).
@Halfgawd_Halfdevil 7 months ago
@@abeatech The note says to install it in the clip_vision folder, but that's not it: none of the preloaded models are there, and the newly installed one does not appear in the dropdown selector. So if it's not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just use the IPAdapter Plus node?
@vivektyagi6848 2 months ago
Awesome, but could you slow it down please.
@nonprofit7163 5 months ago
Did anyone else run into errors while following this video?
@3djramiclone 7 months ago
This is not for beginners; put that in the description, mate
@kaikaikikit 7 months ago
What are you crying about... go find a beginner class if it's too hard to understand...
@suetologPlay 5 months ago
It's completely unclear what you were doing there! You quickly clicked through everything and then said "look what I got". You didn't show where, what, or how.
@anthonydelange4128 6 months ago
It's morbin' time...
@goran-mp-kamenovic6293 5 months ago
Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
    model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
    unet_config = detect_unet_config(state_dict, unet_key_prefix)
  File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
    model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
:P
@financialjourney4u 13 days ago
Thanks for this. I've followed the steps shown, but I'm seeing this error message. What am I doing wrong here?
Failed to validate prompt for output 53:
* CheckpointLoaderSimple 564:
  - Value not in list: ckpt_name: 'SD1.5\juggernaut_reborn.safetensors' not in ['dreamshaper_8.safetensors', 'flux1-schnell-bnb-nf4.safetensors', 'juggernaut_reborn.safetensors', 'realvisxlV50_v50LightningBakedvae.safetensors', 'revAnimated_v2Rebirth.safetensors']
* LoraLoaderModelOnly 563:
  - Value not in list: lora_name: 'SD1.5\Hyper-SD15-8steps-lora.safetensors' not in ['AnimateLCM_sd15_t2v_lora.safetensors', 'Hyper-SD15-8steps-lora.safetensors', 'flux1-redux-dev.safetensors', 'v3_sd15_adapter.ckpt', 'vae-ft-mse-840000-ema-pruned.ckpt']
Output will be ignored
@zems_bongo 6 months ago
I don't understand why it doesn't work for me; I get this type of message:
Error occurred when executing CheckpointLoaderSimple: 'NoneType' object has no attribute 'lower'
  File "/home/ubuntu/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/ubuntu/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/ubuntu/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/ubuntu/ComfyUI/nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "/home/ubuntu/ComfyUI/comfy/sd.py", line 446, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
  File "/home/ubuntu/ComfyUI/comfy/utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
@miukatou 6 months ago
I'm sorry, I need help; I'm a complete beginner. I can't find any SD 1.5 model. Where do I download one? Also, I cannot find the ipadapter folder in my models path. Do I need to create a folder named ipadapter myself? 🥲🥲
@amunlevy2721 7 months ago
Getting errors that nodes are missing even though IP Adapter Plus is installed... missing nodes: IPAdapterBatch and IPAdapterUnifiedLoader