@@FiveBelowFiveUK Can I kindly ask where to look if I have already trained a LoRA? It's working really well and I want to train it further. For example, I want to take epoch 25 and train it for another 25 epochs on different settings. Is that possible, or is all I can do on CivitAI taking the dataset and training it again? I would like to improve a few models instead of making many.
@sinayagubi8805 · 25 days ago
I hit that bell, open-pose guy.
@FiveBelowFiveUK · 24 days ago
Big love!
@Burc-p3t · 24 days ago
Bro, your videos are so informative. Thank you for all your efforts!
@FiveBelowFiveUK · 24 days ago
My pleasure!
@SouthbayCreations · 25 days ago
Thank you for the video. Great explanation!
@FiveBelowFiveUK · 24 days ago
You are welcome!
@Huang-uj9rt · 24 days ago
Because of my professional needs and the high learning threshold of Flux, I've been using MimicPC to run Flux. It can load the workflow directly; I just have to download the Flux model, and it handles the details wonderfully. After watching your video and running Flux on MimicPC again, I finally had a different experience. It feels like I'm starting to get the hang of it.
@DragonEspral · 24 days ago
Hello, I really enjoyed following the versions of the Flux Foda Pack. I even created a video summarizing it up to version 10, which I'll release today. I hope you don't mind...
@FiveBelowFiveUK · 24 days ago
Please do, I'd love to see it; feel free :)
@A.polon.i.a · 24 days ago
Firstly, I love your videos. Even though 75% of it is over my technical head, I still learn a great deal from your great explanation and presentation. I've followed you on Civitai for quite a while, and just joined your Discord too. Like a lot of others, I suspect, I am cocked and loaded, ready to unleash a Flux LoRA training frenzy, just waiting for a solid set of parameters to work with, and you and Kappa have kindly accelerated that avenue. One question, the same question it's been for my entire AI training life, really: correct and efficient dataset captioning for Flux?
@FiveBelowFiveUK · 24 days ago
You might find my earlier video on this useful: kzbin.info/www/bejne/l5mUfIWqn6l5oZY There will be more in the future :) as we learn more tips specific to Flux. For now, most SDXL logic for curating datasets applies, but Flux learns better, so you need fewer samples :)
@PendekarHarimau · 15 days ago
As a beginner Civitai user, if I want to train my face, do I just need to select Character for LoRA training? 19:39
@FiveBelowFiveUK · 13 days ago
yes
@Fiedroz · 24 days ago
Does the caption length and how it should look matter? Can I use WD14 for captions, or would BLIP work better?
@FiveBelowFiveUK · 24 days ago
We can use longer/more detailed prompts at inference, but I'm unsure how that impacts datasets. With CivitAI you can use the "append" option to add captions; I cannot say which captioning system they used. My strategy is shown in the Thorra Anime model, where I use a caption root system to quickly write it all out, then append with vision/LLM/BLIP captions, or whichever is working best at the time.
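A root-plus-append captioning pass like the one described above can be sketched in a few lines of Python. This is a minimal illustration, not CivitAI's or the Thorra workflow's actual implementation; the directory layout, `root_caption`, and the per-image tag dictionary (e.g. output from WD14 or BLIP) are all assumptions:

```python
from pathlib import Path

def build_captions(dataset_dir: str, root_caption: str, appended_tags: dict[str, str]) -> None:
    """Write a shared root caption next to every image, then append
    per-image tags (e.g. WD14/BLIP output) after the root."""
    for img in Path(dataset_dir).glob("*.png"):
        extra = appended_tags.get(img.name, "")
        # root caption first, auto-generated tags appended after it
        caption = f"{root_caption}, {extra}" if extra else root_caption
        # most trainers expect a .txt caption file beside each image
        img.with_suffix(".txt").write_text(caption, encoding="utf-8")
```

Usage would look like `build_captions("dataset", "thorra style, anime", {"001.png": "1girl, red hair"})`, giving every image the shared root while only captioned images get extra tags.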
@vindyyt · 19 days ago
Great video. I'm just not sure why my trained LoRA needs about 20 minutes to load before generating each image in ForgeUI...
@FiveBelowFiveUK · 19 days ago
I have no idea; I haven't used Forge for a long time.
@robertaopd2182 · 25 days ago
Hey, can I ask anybody: what does it mean if ComfyUI only does 11-16 steps instead of the 30 I have in my setup? Is this a low-end graphics card problem, or...? Thanks
@FiveBelowFiveUK · 24 days ago
what do you mean by 11-16 ?
@robertaopd2182 · 24 days ago
@@FiveBelowFiveUK I use Pinokio, and when I generate 30-step pictures, some generations don't reach 30 steps but only 11, 15, or 19, and it moves on to the next generation... So is this normal for the Pinokio app, or is my RTX 3080 too weak to handle Flux Dev?
@trsd8640 · 20 days ago
I’m sure it’s a great video, but as a non-Civitai user it’s not easy at all.
@FiveBelowFiveUK · 19 days ago
We try to provide ways for people to train without a local GPU; these are the easiest compared to local training or running cloud compute ;) That is what we mean here. There will be more online trainers, and other guides like this one.