Nice, so many workflows, so little time. You're killing it on Thursdays! Much thanks.
@dadekennedy9712 · 2 months ago
Aye!! Thanks for the great work!
@nicolasmarnic399 · 24 days ago
Hello. Is it strictly necessary to use a mask in this workflow?
@dmsa2 · a month ago
I'm just starting out with this process. Let me ask: is there somewhere I can download a folder with all the modules, LoRAs, and ControlNets already pre-installed, like a portable version? I tried for 5 hours to get the installations working and nothing worked. Thanks.
@glillemon · a month ago
This workflow is sick! Does anybody have a suggestion on how to keep the background unchanged, but with some options for mask blurriness so the mask blends the Stable Diffusion/AnimateDiff output into the live action? I think that's what QR Code Monster is for, but I'd appreciate any suggestions!
@Cybernaut7777 · a month ago
Thx for your great work! I'm having an issue: the background doesn't seem to take much inspiration from the "IPAdapter (Background)" section and instead (mostly) copies the original video, despite the SAM mask. What am I doing wrong? Thx
@civitai · a month ago
You're going to want to mess around with the controlnet combos. The most creative controlnet combo is depth + openpose + controlgif. I'd keep controlgif at 0.4 and not go above that though. Depending on what your original input video is, you will also want to try to find IP images that have at least some of the same context for the BG. At the end of the day, it's going to trace what is already in the source video.
@PeteStueve · a month ago
Super good match, Civitai and you. I actually like Civitai a lot more now. I'll have to check out a Twitch stream one day.
@civitai · a month ago
Wow, what a compliment! Thanks so much! Come join the Twitch streams, we always have lots of fun on there and the chat is super friendly and knowledgeable!
@ceaselessvibing5997 · 2 months ago
@Civitai, would this work for Pony-based models? (I tried and suffered a bit.) I had some funky issues trying to generate just one image/frame, and it wasn't producing the images I was expecting from the image I provided. Without going into too many workflow shenanigans... what are the current model limitations for doing this kind of thing? i.e., is it only SD1.5, etc.? Without blowing up my brain too much xD
@civitai · a month ago
This is only compatible with 1.5 and 1.5 LCM models. I’d recommend using the LCM 1.5 LoRA at a strength of 1.0 for non LCM models and a strength of 0.18 for LCM models 🙏🏽🫶🏽
@AshChoudhary · a month ago
Hi, I am facing a strange error: the image input on the VideoCombine node is not taking a connection from any image node. What could be the issue? Great workflow btw
@zenmiku · a month ago
I'm facing the same issue after updating ComfyUI
@AshChoudhary · a month ago
@zenmiku I got no permanent fix, but cloning the VideoCombine node worked for me. I just had to reconnect the image and filename_prefix inputs.
@jittthooce · 22 days ago
@AshChoudhary How did you fix the filename prefix issue? I'm getting: "Failed to validate prompt for output 252: * (prompt): - Required input is missing: filename_prefix". The only input options I see are: images, audio, meta_batch, and vae.
@jittthooce · 22 days ago
nvm, fixed it
@musyc1009 · a month ago
Great job as always, bro. Quick question: is there a reason why the output video is always 1 second shorter than the input video? I didn't skip any frames or put a frame cap on it.
@civitai · a month ago
Hmmm, I never have that problem so I'm not entirely sure. I'm sorry :/
@musyc1009 · a month ago
@civitai Yea, I googled and couldn't find anything about that. It's just weird and still does it for some reason 🤔 It's always 1 second shorter.
@Zany-g3h · a month ago
I can't load your workflow inside Comfy. It gives me an error about ReActorFaceSwap; I've tried running a fix but get this message: "ReActor Node for ComfyUI fix failed: This action is not allowed with this security level configuration".
@LuckyGuyAE · a month ago
I have the same issue, bro. I don't know how to fix it either.
@gardentv7833 · a month ago
I got this error message: "Apply ControlNet Stack: 'NoneType' object has no attribute 'copy'". Any clue, sir?
@civitai · a month ago
Are the ControlNets in the proper folders? Hard to tell without seeing what you've got going on.
@jittthooce · a month ago
I can get 4-5 sec outputs without any issues on a 4090 and 30 GB of RAM on RunPod. However, when I tried to do a 15 sec video, it straight up killed the process right after the controlnet processing. Any tips to get it to work on slightly longer videos? I mean, isn't that enough computing power to generate 10-15 second animations? Thanks for the updated workflow btw. You are doing a great service by putting these things together for people who aren't able to sit and figure them out on their own.
@civitai · a month ago
I am able to do up to 1000 frames at a time with my 4090, but I'm doing it locally. I'm not entirely sure, but that sounds like it could be on RunPod's side. It sounds like the key difference is local vs. cloud.
@theflowbeta1604 · 2 months ago
Hello, for a few days now I have been claiming the daily Buzz, but when I claim it, the Buzz never adds up. Before all this everything was perfect, but now it isn't anymore. I have seen that the same thing happens to other people; can you fix it?
@civitai · a month ago
Feel free to reach out to us via our support email or in Discord 🙏🏽
@Caret-ws1wo · a month ago
Is there a way to diffuse only certain parts of the mask? I.e., only generate on the white areas and leave the background black?
@civitai · a month ago
After you cut out your character, try using a solid black frame in your background IPAdapter and prompting for a black background :)
@Caret-ws1wo · a month ago
@civitai Perfect, thanks!
@lucifer9814 · a month ago
I always get an error with the ReActor node. It basically says "Error loading ReActor node" regardless of reinstalling it or even running the fix.
@civitai · a month ago
In that case, I'd just delete it. I can't remember the last time I used it; I just have it there as a "just in case". But tbh, it's probably not worth the wrestling.
@lucifer9814 · a month ago
@civitai So this whole workflow would work even without the ReActor node, is that right?
@pookienumnums · 2 months ago
Yuh! (first too! on my bday as well! holla)
@Lucy-z5d · a month ago
Great workflow. However, it looks like without a 4080 or 4090 it will take forever just to get a 5-second video output.
@civitai · a month ago
Unfortunately it is not low-VRAM friendly. This workflow will take at least 12-15 GB to run because of the mask and the IP adapters.
@Lucy-z5d · a month ago
@civitai Thank you for reminding me. Is there another version for 8 GB?
@zuzuumaibam · a month ago
The Bypass button is not there. I even updated ComfyUI. Is there anything I am missing?
@civitai · a month ago
There is a little button in the top right corner of each group. Just click it :)
@TheTruthIsGonnaHurt · a month ago
Right-click anywhere on screen to bring up the menu, then go to RGThree Comfy and click Settings. Scroll down to (Groups) Show Fast Toggles in Group Headers, select Toggle: Bypass and Show: Always. Once you do that, the Bypass button will appear in the top right corner of each group.
@Scherzify · 14 days ago
Please review your workflow; it seems to be broken. The node links are broken.
@amersenlu875 · a month ago
Has anybody gotten "sampler (efficient): 'NoneType' object has no attribute 'shape'"? I downloaded the controlnets, checkpoints, and LoRAs like in the video and I get this error. Help.
@amersenlu875 · a month ago
OK, I figured out the problem is with the linear controlnet.
@amersenlu875 · a month ago
Made it work, but it doesn't seem to be taking the background I chose.
@alexhowe4775 · a month ago
Far too much yapping, but good information nonetheless.
@OptimBro · a month ago
"most important part of what we do... CREATING!" 🥹🥹
@fr0zen1isshadowbanned99 · a month ago
Comfy is bad as always. Videos don't show the selected parts, frames, framerates... Nodes are not properly connected. FaceSwap, as always, is broken. Videos can't be created with the node that's used. Videos can't be played in any format other than webm. How long have I been trying to use this dumpster fire of a UI now? 2 years? 2 years and still very similar problems to when I first had the displeasure of trying it. And btw, I switched PCs in the meantime, so that is not the problem! You tried, and I thank you for that :) One time it even worked very well, until the "updates" arrived ^^ And it seemed to work today too, just after doing lots of disconnecting and swapping to normal nodes, and without the video settings working. One day they will release Sora, and maybe at that point there will be a good UI ^^ Maybe... but likely not xD
@nirdeshshrestha9056 · 29 days ago
Can you send your fixed workflow please? I am having a headache.
@fr0zen1isshadowbanned99 · 29 days ago
@nirdeshshrestha9056 Have you received the link? I don't know how YT will handle putting links in the comments.
@fr0zen1isshadowbanned99 · 28 days ago
@nirdeshshrestha9056 But don't expect too much :) I had to do a quick fix on the workflow, and about half an hour was spent on linking it to you somehow ^^ If you've got questions, ask away... and don't forget that you need all the ControlNet models (btw: play around with them; try switching one to OpenPose, that's mostly better for me) and the IP Adapter + PyTorch model. jboogx has a setup guide somewhere, but I don't know where.
@fr0zen1isshadowbanned99 · 28 days ago
@nirdeshshrestha9056 OK... I tried linking it here but NO CHANCE! It wouldn't even let me give you my email. AND THAT MESSAGE WAS DELETED TOO!!!! So you have to go to the link in the video description and look there for my comment under his workflow. My name there is FrozenGT. I hope that worked now xD... I have spent over an hour on this now D:
@Otchengazoom · a month ago
Yes, we like waifus 🤗😍🤘
@civitai · a month ago
This we do, my friend. This we do. Go make a cool one and share it!
@lofigamervibes · a month ago
Oh my god, how did you know I like waifus?? That's crazy. You're so right, though. 😇
@civitai · a month ago
Lucky guess :P
@jonrich9675 · a month ago
Still 100% lost. I use Flux and this 100% doesn't help me out at all. Like, I just started ComfyUI like 10 days ago and already know that you MUST use only certain models with certain IPAdapters, certain UNets, etc. I just need someone to show me ALL FLUX. This is SD1.5, but nobody in the world has done this with just Flux yet. Kinda annoying that I now have to swap everything to lame SD models, ugh.
@civitai · a month ago
Flux does not have a working motion model yet, so there is no way to do clean vid2vid style transfers with it just yet. We are sure there will be one, but it has not been released yet. This workflow is only for SD1.5. We also have a tutorial from a few weeks back with Inner Reflections showing how to use his SDXL workflow.
@jonrich9675 · a month ago
@civitai Can... can I just give u Buzz so you can train one?
@Zerod-rn3ye · a month ago
@jonrich9675 Unfortunately, it does not work that way. The issue is that Flux is too new (a few months old), while SD has been well established for several years and thus has far more tools and knowledge built up among the community, researchers, and businesses. Civitai does not make any of this tech; they merely act as a host for models. Models like Flux cost hundreds of millions to produce.

As for merging/training checkpoints, as seen with SD, no one has figured out how to do this with Flux yet. Currently, checkpoint merges with Flux are something else entirely, in that they're highly prone to degraded visuals, poor prompt adherence, major issues with LoRAs, and frequent crashes for most users... thus no major checkpoint has gained popularity over the base model yet. Rather, the focus is on LoRAs with Flux, at the moment at least. It isn't even known, and is heavily doubted, that we'll ever see proper checkpoint releases beyond base Flux, considering how it works, unlike SD.

The companies that make these models spend hundreds of millions to develop them, while tools like ControlNet, etc. are developed by researchers and sometimes very capable members of the community (usually extending researchers' shared open-source results to implement them in ComfyUI, etc.). In short, it will take more time for Flux support to grow. However, you can still generate images in Flux and then transfer them to SD for certain processes, to refine or generate additional content from them, just like you can do with real-world photos via img2img, etc.