I'm confused. Was it because you used too general a trigger word that it didn't generate the woman you trained on initially? It generated a redhead when you trained on a blonde-haired woman. To actually get it to activate the model you trained, I noticed you had to bring up the blue clothing first.
@TheFutureThinker 3 hours ago
A generic word as the trigger keyword only pulls loosely similar results from the dataset, so some images will have the freedom to take elements from the base model. The other training style focuses on characters only and forces it to take only the style from the dataset: kzbin.info/www/bejne/Z3Xcl4OeftCLbZosi=5dxGilp7iUIx4Vfu
@derrickpang4304 10 hours ago
This is great
@incrediblekullu7932 22 hours ago
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\ishwr\\Music\\fluxgym\\outputs\\lora1\\sample'. I've tried a lot but keep getting the same error.
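A minimal sketch of one likely fix, assuming the trainer crashes because it tries to write sample previews into a folder that was never created (the relative path below is illustrative; point it at your own outputs folder):

```python
from pathlib import Path

# WinError 3 on a ".../sample" path usually means the trainer writes sample
# images into a folder that does not exist yet. Creating it (and any missing
# parents) before training avoids the crash.
sample_dir = Path("outputs") / "lora1" / "sample"
sample_dir.mkdir(parents=True, exist_ok=True)
```

If your FluxGym build exposes a sample-prompts option, leaving sample generation disabled may sidestep the missing folder entirely.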
@aivideos322 1 day ago
Nicely done
@arthritic 1 day ago
Great stuff right here! My go-to for AI content. TY Benji. ❤
@michaellong5871 1 day ago
The model's face changed and so did the background, due to the sampling in latent space. But I thought that if you provide a mask, the resampling only happens in the masked region of the image, so the rest of the original picture shouldn't change?
@arimiarts9233 1 day ago
It's all working fine, but when I set the frame count above 49 I get some sort of mosaic-tile output. Is there any way to make this work? I really need this to run longer.
@pawelthe1606 1 day ago
FileNotFoundError: [WinError 3] The system cannot find the path specified. I got this error; can anyone help, please?
@rawnclark 1 day ago
I get the error "sizes of tensors must match except in dimension 2".
@BoomBillion 1 day ago
😅 It was great, then she detached her fingers.
@Vedavan. 2 days ago
I didn't understand anything, but I'm giving it a like ))
@TheFutureThinker 1 day ago
@@Vedavan. No problem, just enjoy the AI animation process and have fun 😉
@aivideos322 2 days ago
Have you tried this lately? They have added image batch list processing now. I am trying to train on 2 video sources and haven't had much luck. Have you tried it?
@TheFutureThinker 1 day ago
No, I am busy with the newer AI models.
@Martin-bx1et 2 days ago
Definitely refreshing compared to all the dancing girls but still seems like witchcraft!
@petEdit-h9l 2 days ago
Hi Benji, please can you make a tutorial on this? If you have one, can you link me to it? I can't seem to find it on your channel.
@megamayo2500 2 days ago
You're the first Patreon I've joined. You have my undivided attention.
@TheFutureThinker 2 days ago
Have fun. And remember to check out the Discord. Also, the public Discord is going wild.
@megamayo2500 2 days ago
This was only tested with real people. I'm wondering if it's possible to use this with animated characters as well.
@TheFutureThinker 2 days ago
Do you want the anime style, 3D, or illustration?
@hindi_baba 2 days ago
love your content bro
@TheFutureThinker 2 days ago
Thank you
@insurancecasino5790 2 days ago
Just be careful, bro. Your work would be really valuable to Hollywood. Your stuff would literally save production costs. That's value.
@TheFutureThinker 2 days ago
Haha 😆😂 Actually yes, a guy from that area contacted me before. But since I have my own business operating, their offer didn't match my earnings, and AI content here is just a hobby for me. So I turned it down.
@insurancecasino5790 2 days ago
@@TheFutureThinker Word, it is your choice. Depending on the timing, some could be worth millions, as the studios are in heavy competition. Like with scripts, they will be focused on AI workflows and improvements that they can lease out for a while. I notice a lot of the same stuff from them all at once. Same with scripts. Thanks for the vids.
@kalakala4803 2 days ago
🤫 The truth is film directors don't have money to talk, so they have no bargaining power to show an attractive offer until after they receive funding from an investor. Otherwise, nothing they say before that stage is true.
@insurancecasino5790 2 days ago
@@kalakala4803 Many directors are also producers. Just like actors, that's how they last and get the good projects.
@TheFutureThinker 2 days ago
@@insurancecasino5790 Yes my friend, I understand. There have been 2 cases like this that contacted me; I can't give details in public. Basically, one was an individual director. He had been in some movies before, with no studio and no resources, expecting to generate everything from his raw draft script. The other was a big corp (the one with a mouse), but the offer wasn't attractive to me.
@maikelkat1726 2 days ago
A little more content? What was this created with, and how? There seems to be a hiccup with three hands :) With ComfyUI, but CogVideo? More details?
@TheFutureThinker 2 days ago
Yes, that hand-picking-up-the-bottle transition was a struggle. I was using the start and end frames in Kling AI to create it, and a few times there was no good transition result. Still, it was a good idea test for an ad prototype.
@forest42821 2 days ago
This is just a faceswap, not really creating a new animation. Very disappointed.
@TheFutureThinker 2 days ago
Then you are naive and don't know how the Video2Video technique works.
@JeremyAikinss 3 days ago
1:33 plus once you've got the model, you can make them a Fanvue for extra cash
@paulcrist1762 3 days ago
Thanks for making this; it was easy to follow even for a noob with an old machine like me :) I did have a question though, and I'm sorry if it's already been asked. My first attempt at MimicMotion was bad, a dancing blob basically, so I started looking for tutorials on how to use it, since I'm sure it was something I did wrong. I came across a video on installing MimicMotion, and when I checked the folders I saw that the Models/MimicMotion folder contains only 1 file: MimicMotionMergedUnet_1-0-fp16. In that other video, which was for plain MimicMotion with no merge, he was downloading several models for MimicMotion. Do we also need to download and install those models, are they not needed, or are they already in a different folder? Thanks again for the great video.
@paulcrist1762 3 days ago
I'm continuing to experiment and this went over my head. I took a screen capture of the first frame of the video, resized it to have very close to the same number of pixels, and used it as my reference picture to test things. I still get a dancing blob, and I also got this error message: "Warning: torch.load doesn't support weights_only on this pytorch version, loading unsafely. mimicmotion". I can't find a node to turn off "weights only". I went searching on Google and found this well-written explanation that went over my head:

"This warning typically appears when you're using PyTorch 2.4 or newer and trying to load a model with torch.load() while specifying weights_only=True. What's happening: PyTorch 2.4 introduced a security feature; in versions 2.4 and later, torch.load() defaults to a safer loading mechanism that avoids potential vulnerabilities associated with pickle deserialization. weights_only=True is deprecated: the weights_only parameter was intended to load only the model weights and not the entire model state dict, but this feature has been deprecated in favor of a more secure approach. How to fix: update your code by removing the weights_only=True argument from your torch.load() call, load the entire state dict with torch.load(), and if you only need the weights, extract them from the loaded state dict. If you still need to load weights only (less common), consider the risks, since loading weights only may bypass important checks, and you might need a custom loading function that extracts the weights from the state dict manually. Important: always load models from trusted sources, since loading models from untrusted sources can pose security risks, and stay updated with PyTorch releases, which often include important security improvements."

I should mention that I'm using ComfyUI Portable and your freebie workflow. I'm mostly a noob with this stuff and with Python and everything related, but I do have a high IQ and I learn pretty fast. I changed some of the parameters in the nodes while experimenting but changed them all back again. So does this mean I have to downgrade my PyTorch? I can't seem to find where in ComfyUI Portable to check the version I'm using.
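On checking the version: ComfyUI Portable bundles its own interpreter (python_embeded), so the system Python won't tell you which PyTorch the workflow actually uses. A small sketch you can run with that embedded interpreter (the function name is mine; the warning itself is non-fatal, it just means this PyTorch build falls back to full pickle loading):

```python
import importlib.util

def torch_version():
    """Return the torch version string for this interpreter, or None if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return torch.__version__

print(torch_version())
```

Save it as a script and run it from the portable folder with the bundled interpreter, e.g. `python_embeded\python.exe check_torch.py`, to see the version the workflow runs on.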
@唐钰明-n5z 3 days ago
with torch.enable_grad(), device_autocast_ctx, torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs):  # type: ignore[attr-defined]
What's the problem, please?
@TheFutureThinker 3 days ago
Please try with a GPU.
@唐钰明-n5z 3 days ago
@@TheFutureThinker Thank you. What should I do to use the GPU, please?
@llirikk85 3 days ago
Why does the product look unnatural? Shadows are needed.
@TheFutureThinker 3 days ago
This can be added with Flux; I just forgot it during the recording. I will try it in a later post.
@MAM98-zt9ew 3 days ago
How do I do something like this?
@DJTripleRRR 3 days ago
Hmmm... why not add a baseline noise to the entire product layer? So basically it diffuses the bag at 20% denoise. That way it should still get basically the same product, but more diffused into the final image. Or simply don't make your mask entirely black, but rather 80% black. Curious whether it would work or whether it would change too many details. It should also help with items that need to appear transparent and show some of the background image, such as clear glass bottles.
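The "80% black" idea can be sketched numerically. In an inpaint mask, 0 means keep and 255 means fully regenerate, so raising the product region from 0 to roughly 20% gray asks the sampler to re-diffuse it lightly. A toy, pure-Python illustration on a tiny grayscale mask (function and constant names are mine; a real workflow would apply this to the mask image itself):

```python
KEEP, REDIFFUSE = 0, 255

def soften_mask(mask, strength=0.2):
    """Raise fully-black (keep) pixels to strength*255 so they receive light denoise."""
    floor = int(REDIFFUSE * strength)
    return [[max(px, floor) if px == KEEP else px for px in row] for px_row in [mask] for row in px_row]

mask = [[KEEP, REDIFFUSE], [KEEP, KEEP]]
print(soften_mask(mask))  # [[51, 255], [51, 51]]
```

Whether 20% is enough to re-light the product without losing label details is exactly the open question in the comment; it is a knob to experiment with, not a guaranteed setting.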
@faithful_otaku7339 4 days ago
Hello, I'm new to LoRA training and I ran into two small problems after following your video. The first problem is at 12:37: after I run 'python app.py' nothing happens and no browser window is launched, so I have to manually type in the local URL. The second problem is at 16:30: on my PC, the output creates another folder in the output folder that contains 6 files: dataset.toml, test-000004.safetensors, test-000008.safetensors, test-000012.safetensors, sample_prompts.txt, and train.bat (.gitkeep is in the outer 'output' folder). I do not have a 'test.safetensors' file with no numbers like you do; is this normal? I also wanted to get your opinion on whether I should use Fluxgym or AI-Toolkit for LoRA training. I have a basic RTX 3090 with 24GB of VRAM, but I heard that some people using AI-Toolkit still ran into memory issues with this card, especially with multiple monitors like me.
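On the numbered files: trainers in this family typically save an intermediate checkpoint every few epochs (test-000004, test-000008, ...) and write the unnumbered file only at the final save, so ending up with only numbered files can simply mean training stopped before that last save. Under that assumption, the newest checkpoint is the one to load; a sketch (function name and folder layout are mine):

```python
from pathlib import Path

def latest_checkpoint(folder):
    """Return the newest LoRA checkpoint in a training output folder, or None.

    Lexicographic sort works here: numbered epoch files (test-000004 < test-000012)
    order correctly, and an unnumbered final "test.safetensors" sorts after them.
    """
    ckpts = sorted(Path(folder).glob("*.safetensors"))
    return ckpts[-1] if ckpts else None
```

For the folder described above, this would pick test-000012.safetensors, which you can use as the finished LoRA.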
@cemilhaci2 4 days ago
Very successful; it works well on objects. Do you think it is possible to produce a workflow we can use in situations where details and expressions are important, such as people, pets, etc.? If we have such a possibility, then the boss can be activated :D
@saurabhsswami 4 days ago
Hey Benji, love your work, but the bag doesn't seem quite right on the rock; the lighting seems a little off.
@TheFutureThinker 4 days ago
@@saurabhsswami Yes, IC-Light needs adjusting: the CFG, and the blur after Big Lama. Forgot to mention it. Thanks.
@oskar4239 4 days ago
Seems like the end image would be better made in Photoshop?
@crazyleafdesignweb 4 days ago
I think it depends on how much time you have and how many images you need to work on. If this ComfyUI flow can be automated, then it makes the difference.
@TheFutureThinker 4 days ago
@@crazyleafdesignweb You got the point. Since diffusion models came on the market, I don't want to hire a Photoshop guy to just sit here every day only for PS when there are over a hundred images waiting in line.
@insurancecasino5790 3 days ago
It will be a while before it gets PS's custom options. But it's coming.
@crazyleafdesignweb 4 days ago
How easy product images are today. Before, in web design, we had to do lots of work for one task.
@Martin-bx1et 4 days ago
Many Amazon images are on white backgrounds because Amazon actually *_demands_* them for the main product image.
@TheFutureThinker 4 days ago
For the first image, yes. Even Google Shopping. Product Detail Page optimization for the other images. I did those every day for the last few years.
@crazyleafdesignweb 4 days ago
@@TheFutureThinker 08:20 He might have just seen one part and quoted it, without listening to the whole content.
@kalakala4803 4 days ago
You should make a web app for this. LOL
@TheFutureThinker 4 days ago
Should be, yup. 🤫
@vivekmalam6389 4 days ago
I'm first! @Benji, great work.
@Balidor 4 days ago
LLMception... AI-generated videos for AI-related content.
@petEdit-h9l 4 days ago
Does MimicMotion work with cartoon characters?
@TheFutureThinker 3 days ago
Tried it before with a cartoon; yup, it works.
@dariocardajoli6831 4 days ago
Fluxgym is underrated. The only thing that confused me at first was that the instructions said to download some flux-dev.sft file instead of the usual .safetensors extension. I simply copied my already-existing safetensors into the fluxgym model unet folder and renamed it to have an .sft extension, did the same with the VAE, and it worked. Any thoughts?
@TheFutureThinker 4 days ago
Sft is short for safetensors; both work. I have my VAE named .sft as well.
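Since the extension is just a name, a quick way to confirm that a renamed file really is safetensors data is to parse its header: the format begins with an 8-byte little-endian length followed by a JSON header of that length. A sketch (function name is mine):

```python
import json
import os
import struct

def looks_like_safetensors(path):
    """True if the file starts with a valid safetensors header, whatever its extension."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) != 8:
            return False
        (header_len,) = struct.unpack("<Q", prefix)  # u64 little-endian header length
        if header_len > size - 8:                    # header can't exceed the file itself
            return False
        try:
            json.loads(f.read(header_len))           # header must be valid JSON
            return True
        except (ValueError, UnicodeDecodeError):
            return False
```

This only validates the header, not the tensors, but it is enough to catch a file that is not safetensors at all (e.g. a GGUF or a pickle checkpoint renamed to .sft).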
@dariocardajoli6831 4 days ago
@@TheFutureThinker Thank you, I thought it might stand for supervised fine-tuning or something 😅 Another "rule" I broke was using the fp8 T5 text encoder during training by renaming it to fp16, and the scripts seem to correctly recognize it as fp8! Got any experience with using fp8 over fp16?
@kurocastle8346 5 days ago
It also needs a fair amount of RAM. I tried running it with 16 GB of RAM and it almost crashed my PC; no issues after upgrading to 32 GB.
@TheFutureThinker 4 days ago
Yes, this AI just maxes out all the resources in a system.
@LEONARDO-z-b2o 5 days ago
Error occurred when executing CogVideoSampler: 'list' object has no attribute 'shape'. Any suggestions?!
@TheFutureThinker 4 days ago
This happened to me before. I added an image resize node after Load Image, and then Comfy could run the process. That's how I solved it on my system.
@lom1910 5 days ago
How do I run it on an already-existing Flux model, not fp16?
@TheFutureThinker 4 days ago
They are made to train on fp16, as mentioned. I haven't tried GGUF or other versions.
@lom1910 4 days ago
@@TheFutureThinker I have 8GB of VRAM and 16GB of RAM, but I instantly get out-of-memory with the default models. 😕 Pinokio either crashes completely or I get an error in the terminal.
@TheFutureThinker 4 days ago
Not enough VRAM, yes; it won't perform. The trainer needs a minimum of 12GB of VRAM.
@parthwagh3607 5 days ago
Can we use these ControlNets for Flux Schnell?
@johntnguyen1976 5 days ago
Looks like the King's "happy moments" are more about his wife's bosom than her belly. 🤣 But seriously... thanks for the demo. It convinced me to give Kling another try; I bought the pro account and have been more productive today, with more usable AI videos than Runway or Luma, for sure!
@TheFutureThinker 5 days ago
😂😂😂 Yup, we laughed about it in Discord.
@RaulRodriguez-hk4xr 5 days ago
My question is: does this have to be installed on a local drive? Because from what I see, it's very heavy to store all these models on a local drive.
@TheFutureThinker 5 days ago
Yes, install locally. They are large files, so they take time to download.
@RaulRodriguez-hk4xr 4 days ago
@@TheFutureThinker How do you manage with so many models? Do you recommend any other way to install this?
@TheFutureThinker 4 days ago
@@RaulRodriguez-hk4xr Er... woah... that is very, very tough. I bought a 4TB hard disk just for running AI stuff.
@selaist1559 5 days ago
After training I don't see my new LoRA in the loras folder; that's strange.
@mpprof9769 6 days ago
"Generation failed, try again with a different prompt". I keep trying; it's getting annoying.
@TheFutureThinker 6 days ago
Looks like you're trying to generate censored stuff if it shows that message. 🤫
@B4SICAI 6 days ago
Nice, and thanks for featuring a couple of my pics.
@TheFutureThinker 6 days ago
Omg, you changed your avatar pic again 😂😂😂
@B4SICAI 6 days ago
@@TheFutureThinker Diff account
@SeanieinLombok 6 days ago
@@TheFutureThinker still have my other account ;)
@TheFutureThinker 6 days ago
Ok haha, yup, I like this workout look.
@chriszodiak 6 days ago
Thanks! I was skeptical about this feature, but you convinced me. Can't wait for the paint brush in 1.5!
@TheFutureThinker 6 days ago
Same here; I was skeptical before testing it, wondering whether it would be the same as Runway Gen2 or better.
@kalakala4803 6 days ago
With a DiT AI model, using Motion Brush gets better results than the old video model in Gen-2. At least the motion can be more consistent, without flickering.
@TheFutureThinker 6 days ago
Yes, I was wondering whether it's the same motion brush or not. Hehe