"Golden Son" Mochi1-Preview
2:24
21 days ago
Mochi Outputs Demo V7~(shuffled)
4:05
Shuffle Video Studio - Example
3:49
21 days ago
Introducing: Aiarty Image Enhancer
16:16
Comments
@Afr0man4peace
@Afr0man4peace 3 hours ago
Hi, thanks for all your work. I'll test it today and will leave some review videos on Civitai when I get it working.
@Shaolinfool_animation
@Shaolinfool_animation 3 hours ago
We need more people like you in the world.
@Andro-Meta
@Andro-Meta 5 hours ago
Looking forward to seeing the rest of your results!
@VFXShawn
@VFXShawn 5 hours ago
This is fast, but we need to be able to control the strength of the latents and images.
@ApexArtistX
@ApexArtistX 5 hours ago
It's great and all, but is it crash-proof on 8 GB of VRAM?
@TeddyLeppard
@TeddyLeppard 6 hours ago
Another 8-12 months and these obscure interfaces will start to go away in favor of far more intuitive controls and production-friendly ways to create video.
@agente_ai
@agente_ai 6 hours ago
THIS IS NOT HUGE. I tested half the day and all night, and there's no way this even comes close to what commercial platforms are able to offer. The only thing they do better is faster renders, but the quality and prompt adherence are pure shite! Not to mention it's far too early to be claiming that something with an unfriendly UX is going to be huge or the next best thing. Disingenuous at best; dishonest at worst.
@imoon3d
@imoon3d 6 hours ago
Yes, all local video creation models are just for fun and testing...
@Andro-Meta
@Andro-Meta 5 hours ago
He made it pretty clear that for local AI video creation, this is huge. And it is.
@BlackMixture
@BlackMixture 7 hours ago
This is HUGE! Thanks for being a hero in the community and showing us how powerful local video gen could be!
@AiVisualFind
@AiVisualFind 12 hours ago
Anyone got their vid2vid in comfy working?
@goodie2shoes
@goodie2shoes 15 hours ago
These companies need to roll out this stuff more gradually - these constant dopamine spikes are wrecking my sleep! Oh, and don't think we forgot - you still owe us that deep dive into the Flux tools. 😉
@AgustinCaniglia1992
@AgustinCaniglia1992 16 hours ago
❤ Thank you, sir
@krakenunbound
@krakenunbound 17 hours ago
I had to update from the batch file in the updates folder (ComfyUI), and then the custom nodes finally installed correctly. Simply using the built-in update-everything button in the Manager did not work.
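For anyone hitting the same wall: a minimal sketch of what such a manual update boils down to, assuming a typical ComfyUI checkout with git and pip available. The folder names below are assumptions about a standard install, not the exact contents of the batch file.

```python
# Rough equivalent of a manual ComfyUI update when the Manager's update
# button fails. Paths are assumptions about a typical install -- adjust them.
import subprocess
import sys
from pathlib import Path

comfy_dir = Path("ComfyUI")  # hypothetical path to your ComfyUI folder

# Pull the latest core code, then reinstall its Python dependencies.
subprocess.check_call(["git", "pull"], cwd=comfy_dir)
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r",
                       str(comfy_dir / "requirements.txt")])

# Many custom node packs ship their own requirements.txt as well.
for node_dir in (comfy_dir / "custom_nodes").iterdir():
    req = node_dir / "requirements.txt"
    if req.is_file():
        subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", str(req)])
```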
@begineditingfilm
@begineditingfilm 9 hours ago
I had the same problem. I tried the same thing you did and it worked. Thanks!
@joshuadelorimier1619
@joshuadelorimier1619 17 hours ago
I think the model is trained at 25 as well. I've been getting 5 seconds no problem, but I have to do 3 seconds whenever I change the prompt.
@Kaoru8168
@Kaoru8168 17 hours ago
I was never interested in local models until this came out. I'm going to find the best settings and squeeze every last thing out of this goldmine.
@TomHimanen
@TomHimanen 17 hours ago
Could you create a video that explains all the basic parameters and what they affect? Examples would be great, because written articles often don't have any, and trying things out on a slow GPU is just a very slow way to learn. I've learned by trial and error what, for example, CFG does, but I still don't fully understand its interactions with other node parameters.
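For readers with the same question: CFG is classifier-free guidance, a weight applied at every sampling step. Below is a minimal sketch of the idea; the function and argument names are illustrative, not any particular ComfyUI node's API.

```python
def cfg_denoise(model, latents, t, cond, uncond, cfg_scale):
    # The model predicts noise twice per step: with the prompt and without it.
    noise_cond = model(latents, t, cond)
    noise_uncond = model(latents, t, uncond)
    # CFG extrapolates away from the unconditional prediction. Higher values
    # follow the prompt more literally but can oversaturate or "burn" details,
    # which is why it interacts with steps, sampler, and other parameters.
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```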
@dariushhafezalkotob
@dariushhafezalkotob 1 day ago
Hi, is the mocap you're using real-time?
@therookiesplaybook
@therookiesplaybook 1 day ago
Your workflows don't load
@WhiteError37
@WhiteError37 1 day ago
Excellent video. I'm completely new, but people like you make it trivial to get going. I just put both checkpoints in my stable diffusion folder, so Data/Models/StableDiffusion/<checkpoints_here>, and it worked fine. I'm running on a laptop with a 3070 GPU and it works pretty well. Anyway, thanks for the tutorial.
@ggarra13
@ggarra13 1 day ago
Unfortunately, they are also limited to non-commercial use, like the unofficial ControlNets and LoRAs. We need someone to port them to flux-schnell legally.
@sven1858
@sven1858 1 day ago
Thanks for a well explained but short video with no fluff. Liked & subscribed. Plus thanks for sharing your workflows.
@maxmad62tube
@maxmad62tube 1 day ago
Thanks for your video and your tricks!
@pixelcounter506
@pixelcounter506 1 day ago
Cool... thank you very much for your fast approach to the new Flux tools! It's getting difficult to keep track of all these new developments. And hard disks are always (!) way too small!^^
@noonesbiznass5389
@noonesbiznass5389 2 days ago
Thanks for the video - helpful!
@huangkaitong7916
@huangkaitong7916 2 days ago
Thanks
@haliskurguyan
@haliskurguyan 2 days ago
@stonythewoke9921
@stonythewoke9921 3 days ago
Which of the 726 links in the video description actually leads to the workflow from the video?
@wolpumba4099
@wolpumba4099 4 days ago
*Exploring FLUX1 Passive Detailers for Enhanced AI Image Generation* * *0:03** Introduction to Passive Detailers:* Passive Detailers are Lora (latent text-to-image representations) models that refine generated images without requiring specific prompt triggers. They subtly influence the style and details of the output. * *0:07** Thora Anime Model Example:* The video demonstrates the impact of four Passive Detailers (Not The True World, Abstract Chaos, Electron Microscopy, Genesis Flux) on a custom Thora anime model. * *0:43** Detailer Effects:* Each detailer has a unique effect: * Not The True World enhances sharpness and detail. * Abstract Chaos creates a more natural, albeit sometimes overexposed, lighting effect. * Electron Microscopy aims for a soft-focus, macro photography style. * Genesis Flux introduces subtle fractalization, especially noticeable in neon lighting and glows. * *2:56** Consistent Character with Variations:* Detailers modify the image's style and details without significantly altering the core subject (e.g., the Thora character). * *3:45** Passive vs. Active Detailers:* Passive Detailers operate without prompt triggers, but they can be made "active" by including their trigger word in the prompt, resulting in a more pronounced effect. * *14:50** Detailers Without Character Lora:* Passive Detailers can be used even without a character Lora, directly influencing the base FLUX1 model's output. * *15:50** Stronger Effects Without Character Lora:* When used without a character Lora, the effects of the detailers are more pronounced as they are not balanced against another Lora. * *18:10** Organic and Fractal Detailers:* Electron Microscopy tends to introduce more organic or biological details, while Genesis Flux emphasizes fractal patterns. * *22:02** Recommendation:* The video encourages viewers to experiment with the four detailers to discover their unique effects and how they can be applied to enhance AI image generation. * *22:18** Accessing Detailers:* Links to the detailers are provided in the video description and in an article on the FiveBelowFiveUK CivitAI profile. I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript. Cost (if I didn't use the free tier): $0.03 Input tokens: 21454 Output tokens: 465
@wolpumba4099
@wolpumba4099 4 days ago
*Tips and Tricks for Streamlining Flux Lora Model Testing in ComfyUI*
* *0:03* Introduction: The video focuses on speeding up the process of testing LoRA models trained using various methods.
* *0:27* Prompt Automation: The creator recommends using a "caption collector" tool to create a prompt list from the training dataset. This allows for automated testing with prompts relevant to the training data, using custom nodes like "zenai prompt V2" in ComfyUI.
* *2:09* Organizing Intermediates: Create a folder (starting with "A" for easy access) to store intermediate LoRA models (e.g., Epoch 3 through 10).
* *2:40* Workflow Overview: The demonstrated workflow uses the same seed, prompt, base model, and settings for each run, only changing the intermediate LoRA model (Epoch) to ensure a fair comparison.
* *3:32* Default Settings: The example uses Flux Schnell, T5 clip, 12 steps, Euler a sampler, and a CFG of 3.5.
* *4:04* Image Comparison: The workflow utilizes "CR image compare" nodes to create comparison charts of different Epoch outputs, visually highlighting the differences between them.
* *5:37* Accuracy vs. Aesthetics: The creator emphasizes the choice between prioritizing accuracy to the training dataset or the overall aesthetic appeal of the generated images, noting that the best-looking image may not always be the most accurate.
* *6:16* Epoch Selection: The creator suggests that Epochs 1 and 2 often have a strong bias towards the base model. They generally recommend Epochs 4-7 for good image quality and 8-9 for accuracy to the training data.
* *7:00* Utilizing Different Clip Encoders: The workflow can be modified to use the full T5 clip encoder for potentially better results, although it can increase processing time.
* *17:52* Verifying LoRA File Integrity: It's crucial to ensure that all downloaded intermediate LoRA files are the same size, to avoid errors caused by incomplete downloads.
* *18:37* Sequential Queuing: To ensure a strictly sequential processing order, it's recommended to wait for the current queue to complete before adding more jobs from different tabs.
* *20:47* Early Epoch Considerations: Early Epochs might produce visually appealing results due to a stronger influence of the base model, but they might not accurately reflect the learned features of the LoRA.
* *24:33* Identifying Outliers: Be aware of outlier images that deviate significantly from the expected output. These can be caused by various factors and might not represent the overall performance of a particular Epoch.
* *34:50* Raw Shacks Clip Exploit: The creator briefly mentions a technique they call the "Raw Shacks Clip Exploit" or "Vision Game of Telephone", which involves using abstract images and intentionally misleading captions to create unique artistic styles.
* *40:54* Maintaining Consistent Comparisons: Avoid changing generation settings (CFG, steps, samplers, etc.) between Epoch comparisons to ensure a fair assessment of the LoRA's impact.

I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript.
Cost (if I didn't use the free tier): $0.03
Input tokens: 24361
Output tokens: 670
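The fixed-seed epoch sweep described above is a node graph in the video, but the same comparison can be sketched in plain Python with diffusers. The model ID, folder layout, file names, and settings below are placeholders chosen to mirror the summary, not the exact workflow.

```python
import torch
from pathlib import Path
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a prompt taken from the training captions"  # placeholder
seed = 123456  # an identical seed for every epoch keeps the comparison fair

# Hypothetical folder of intermediates: epoch_03.safetensors ... epoch_10.safetensors
for lora_file in sorted(Path("A_intermediates").glob("epoch_*.safetensors")):
    pipe.load_lora_weights(str(lora_file))
    image = pipe(
        prompt,
        num_inference_steps=12,
        generator=torch.Generator("cpu").manual_seed(seed),
    ).images[0]
    image.save(f"compare_{lora_file.stem}.png")
    pipe.unload_lora_weights()  # reset before loading the next epoch
```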
@user-wr1fx2wf8m
@user-wr1fx2wf8m 4 days ago
oooooooooooooooooooooooooo very nice
@SouthbayCreations
@SouthbayCreations 6 days ago
Great video with a ton of great information! Thank you for sharing! Jason
@jakehunter1831
@jakehunter1831 6 days ago
I feel like I'm graduating from ChatGPT to Claude for code writing. I've been working on reverse-engineering Character AI-style websites. I've rented an A4000, running a Koboldcpp API and RVC-TTS-API. I've deployed a web server on a different host and routed it all through my domain. I've got the landing page with a grid of characters that I can click through into a chat. Voice chat and TTS are toggleable. Thanks, Claude!
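For anyone curious how the pieces of a setup like that talk to each other, here is a hedged sketch of calling a KoboldCpp backend from Python. The host, port, field names, and response shape assume the standard KoboldAI-compatible /api/v1/generate endpoint, so check your own instance's API docs before relying on them.

```python
import requests

# Assumed defaults for a local KoboldCpp instance; adjust host/port to your server.
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "You are Ada, a friendly ship AI.\nUser: Hello!\nAda:",
    "max_length": 120,   # tokens to generate
    "temperature": 0.7,
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
resp.raise_for_status()
# KoboldAI-style responses nest the generated text under "results".
print(resp.json()["results"][0]["text"])
```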
@FrankHouston-v5e
@FrankHouston-v5e 6 days ago
Do a video on 40K.groxy image generation 😗..
@missmango5259
@missmango5259 6 days ago
Thank you so much for the tutorial! Unfortunately it doesn't work for me. I get this error message:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
accelerate 1.1.1 requires huggingface-hub>=0.21.0, but you have huggingface-hub 0.20.3 which is incompatible.
diffusers 0.31.0 requires huggingface-hub>=0.23.2, but you have huggingface-hub 0.20.3 which is incompatible.
grpcio-status 1.62.3 requires protobuf>=4.21.6, but you have protobuf 3.20.3 which is incompatible.
transformers 4.46.2 requires huggingface-hub<1.0,>=0.23.2, but you have huggingface-hub 0.20.3 which is incompatible.
Successfully installed PyWavelets-1.7.0 dadaptation-3.1 huggingface-hub-0.20.3 invisible-watermark-0.2.0 lion-pytorch-0.1.2 open-clip-torch-2.20.0 prodigyopt-1.0 protobuf-3.20.3
WARNING: The following packages were previously imported in this runtime: [google,huggingface_hub]
You must restart the runtime in order to use newly installed versions.

I tried to restart the session, reopen it completely, and even tried to copy it to my Google Drive. When I searched for the problem on Google, I found a thread that said I need to update my huggingface hub, but I don't even know what that is. Could someone help me with this?
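If it helps: the conflict messages all point at huggingface-hub being pinned too old for the other packages. A minimal sketch of the usual fix in a Colab cell, assuming the version floor quoted in the error text above; restart the runtime afterwards, as the warning says.

```python
# Upgrade huggingface_hub to satisfy accelerate/diffusers/transformers, then
# restart the runtime so the new version is actually imported.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--upgrade", "huggingface_hub>=0.23.2"
])
```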
@krakenunbound
@krakenunbound 7 days ago
Excellent video as always! Cheers
@AbdullahMohamed-p3i
@AbdullahMohamed-p3i 7 days ago
I'm very new to all this stuff, but honestly I watch all your videos from start to finish. You're passionate and easy to digest, and I want to learn more.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
Welcome aboard!
@johndebattista-q3e
@johndebattista-q3e 7 days ago
That's why I like you: you give things that help, and I learn a lot from you. You always surprise me.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
Thanks so much - I like to keep it interesting for everyone, and I have so many stories to tell!
@user-wr1fx2wf8m
@user-wr1fx2wf8m 8 days ago
Thank you boss
@user-wr1fx2wf8m
@user-wr1fx2wf8m 8 days ago
Incredible dude
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
🙌
@zzzzzzz8473
@zzzzzzz8473 8 days ago
This is so freaking cool. A local setup to recursively refine and critique such generative creations would be ideal. The back and forth of loading this into the system after any manual changes seems to be the largest bottleneck. Awesome work!
@manabruti
@manabruti 9 days ago
You are automating the automation! Thank you for your efforts. 🙏
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
Thank you too!
@Kerphlam
@Kerphlam 9 days ago
Did you record this while in the bath? Your videos always have the most interesting audio.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
feel my pain from buying a new microphone :)
@Lexie-bq1kk
@Lexie-bq1kk 10 days ago
I don't mean to complain, because you're providing free info, but that filter on the audio is very distracting. I'd prefer no frills and to just hear you clearly as you are. Filters might still be cool, but the one you're using now changes the EQ too much; the low end drops out completely, so it sounds like the speaker cable is halfway unplugged.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
Valid criticism is valid - I think it was an Adobe audio preset going funky after changing to a new mic, combined with the new mic also having a crazy cut-off. Hoping it's solved in newer videos. Thanks for the feedback - it lets me know to fix it!
@animaticmediaUSA
@animaticmediaUSA 10 days ago
Is this an old workflow? I get a different LivePortrait Process node that is asking for 'crop info' & 'mask'. Any guidance would be appreciated.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
Yes, it's a few months old now. I'll make a note to go back and consider an update to the Loki Pack!
@Artificialintelligenceo
@Artificialintelligenceo 12 days ago
Does anybody have the same problem as me? I've tried everything, including updating CUDA and torch, but I still can't get the MochiWrapper to work. I'm on a 4090. Please, if you have any idea: the LayerStyle and Wrapper custom nodes are installed, but ComfyUI says they are not. I have now spent 5 hours on this. I hope someone can help me out; that would be a great help. Thanks in advance.
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
This is really strange, because I was using this workflow last night and I also use a 4090. I'll take a look in the next update for this pack.
@luigideluca5303
@luigideluca5303 12 days ago
Hi, I have this error in ComfyUI: "expected mat1 and mat2 to have the same dtype, but got: struct c10::Half != float". How can I solve it? Thank you.
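That error means two tensors entering a matrix multiply have different precisions (half vs. float), which in ComfyUI usually comes from mixing fp16 and fp32 model or precision settings. A minimal reproduction of the same class of error, with the generic fix of casting one side to a common dtype:

```python
import torch

a = torch.randn(2, 4, dtype=torch.float16)
b = torch.randn(4, 3, dtype=torch.float32)

# a @ b  # fails: mat1 (Half) and mat2 (Float) have different dtypes
c = a.float() @ b  # promoting the half tensor so both sides match resolves it
print(c.dtype)     # torch.float32
```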
@FiveBelowFiveUK
@FiveBelowFiveUK 7 days ago
This pack is due for an update; it's possible the nodes need to be recreated. You can try that immediately: right-click on the node, and the option should be there to "fix node (recreate)". Sometimes the code changed after the workflow came out, so it might be missing code - otherwise look out for the next update to this pack :) And thanks for the comment, it's how I know to update workflows!
@WhySoBroke
@WhySoBroke 13 days ago
Powerful info!!
@WhySoBroke
@WhySoBroke 13 days ago
This is superb amigo!! I am certainly going to give this a try!
@FiveBelowFiveUK
@FiveBelowFiveUK 12 days ago
Go for it!
@ertbul
@ertbul 13 days ago
Thank you
@FiveBelowFiveUK
@FiveBelowFiveUK 12 days ago
Watch out for part 3!
@SouthbayCreations
@SouthbayCreations 13 days ago
Thanks for sharing! I just got into messing around with Cursor last month and have been really enjoying it. Looking forward to more videos! Jason
@FiveBelowFiveUK
@FiveBelowFiveUK 12 days ago
Great to hear! I'll write that down :)
@christofferbersau6929
@christofferbersau6929 13 days ago
great stuff!
@FiveBelowFiveUK
@FiveBelowFiveUK 13 days ago
Glad you enjoyed :)