LoRA Training in FLUX: How to Use FluxGym Effectively

7,295 views

Vladimir Chopine [GeekatPlay]


1 day ago

Comments: 43
@greathawken7579 1 month ago
Hey Vladimir, thanks for the tip about Sampler: Euler and Scheduler: Normal! It made a huge difference for my character LoRAs. Really appreciate it! Great video, thank you.
@Geekatplay 1 month ago
Hey, thank you so much for the kind words! 😊 I'm glad the tip about Sampler: Euler and Scheduler: Normal worked well for your character LoRAs. It's always great to hear when these small tweaks make a big difference. Appreciate your support and feedback! 🎉👍
@Bioshock_84 25 days ago
I was waiting for the explanation of the advanced options in fluxgym. I'll have to continue investigating on my own. Good video.
@Geekatplay 25 days ago
Thank you for the feedback! 😊 I understand the curiosity about the advanced options in FluxGym. I'll make sure to cover those in more detail in a future video to help clarify everything. In the meantime, good luck with your investigations; FluxGym has so much potential to explore! 🚀✨
@IssyOakes 2 days ago
Thank you for the video. I have followed all of the steps up to the training. It tells me the training is complete, but I do not have any images in the output folder, and no LoRA either. I only have the dataset/ReadMe/sample_promts/train files. I'm not sure why. Can you help, please? Many thanks.
@Geekatplay 2 days ago
It sounds like the training process completed but didn't generate the expected LoRA file. A few things to check:
- Check the training logs: look at the console output or log files for errors or warnings during training.
- Verify output paths: make sure the output directory is correctly set in the training parameters. The trained LoRA should be in models/lora/ or a similar folder.
- Check disk space: if the drive is full, training might not save the outputs.
- Ensure enough training steps: if the step count was set too low, the model might not have trained properly. Try increasing steps and epochs.
- Run with admin privileges: sometimes folder permissions prevent writing the LoRA file.
If none of these help, try re-running training with fewer images and check if it produces any results. Let me know what the logs say!
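As a quick sanity check for the missing-LoRA situation, a short script can confirm whether any LoRA file was actually written anywhere under the output folder. This is a generic standard-library sketch; the folder path in the comment is a placeholder, not a path the video specifies.

```python
from pathlib import Path

def find_lora_files(output_dir):
    """Recursively list .safetensors files (trained LoRAs) under a folder."""
    return sorted(p.name for p in Path(output_dir).rglob("*.safetensors"))

# Hypothetical usage: point it at your FluxGym output folder.
# An empty list means no LoRA was saved, so check the training logs.
# print(find_lora_files("outputs/my-lora"))
```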
@syedhamza7207 20 days ago
You definitely deserve a subscribe
@Geekatplay 19 days ago
Thanks for the support! 🙏✨
@mrrubel8841 21 days ago
Please zoom in on the part you are working on, as it was very difficult to see. Your screen might be very big.
@Geekatplay 19 days ago
Thank you so much for your feedback! 😊 I'll make sure to zoom in on the parts I'm working on in future videos so it's easier to follow. You're absolutely right: my screen setup might be making things harder to see on smaller screens. I'll work on improving the clarity to make the tutorials more accessible. Thanks again for letting me know! 🙌
@sr.modanez 1 month ago
Top, top, top. Thanks for the video!
@Geekatplay 1 month ago
Thank you!
@Samt2b 1 month ago
Can you please make a video about multiple consistency?
@Geekatplay 1 month ago
Thank you for the suggestion! 😊 A video about multiple consistency sounds like a great idea. I’ll add it to the list and make sure to cover it with clear examples. Stay tuned! 👍🎥
@quercus3290 1 month ago
Trigger words are more or less optional, to be honest; they may even degrade your LoRA if bias is associated with the token used.
@jefitedemetrio 24 days ago
I'm trying, but the result is bad. Training with 25 images, 512x512, white background, standard config, but my face doesn't reproduce in ComfyUI. Why?
@Geekatplay 24 days ago
The issue might be caused by insufficient training data, low resolution, or incorrect configuration. Some things to try:
- Increase the dataset to 100-200 high-quality images with varied angles, lighting, and expressions.
- Use higher-resolution images, like 768x768 or 1024x1024, if possible, or crop the images closer to the face while maintaining quality.
- Avoid plain white backgrounds; use neutral or varied backgrounds instead, or blur them during preprocessing.
- Adjust the training configuration: lower the learning rate to 1e-5 and increase the training steps or epochs, starting with 3-5 epochs.
- Use a cosine or linear learning rate scheduler for smoother convergence.
- Include regularization images from a pretrained model to improve generalization and avoid overfitting.
- Save intermediate checkpoints during training to evaluate progress.
- In ComfyUI, verify that the correct workflow is set for LoRA training, and check that the LoRA is applied correctly during generation.
If issues persist, double-check the preprocessing of your dataset and review the training logs for errors.
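The suggested values can be collected into a small config sketch. The key names here are illustrative, not FluxGym's or kohya's actual field names; they just gather the recommendations in one place.

```python
# Illustrative training settings (key names are made up for clarity;
# map them onto the actual FluxGym / kohya fields yourself)
suggested_config = {
    "dataset_size": 150,        # aim for 100-200 varied images
    "resolution": 1024,         # 768 or 1024 instead of 512
    "learning_rate": 1e-5,      # lowered from typical defaults
    "lr_scheduler": "cosine",   # or "linear" for smoother convergence
    "epochs": 4,                # start in the 3-5 range
    "save_every_n_epochs": 1,   # keep intermediate checkpoints
}
```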
@jefitedemetrio 24 days ago
@@Geekatplay 😍 Thank you very much, you are the best!! I'm Brazilian, my English is bad 🤣
@ze7189 29 days ago
Will the program automatically download the model? I didn't see the model download step in the video.
@Geekatplay 28 days ago
I downloaded the model in a previous video; please check the video before this one.
@foopinhoff 1 month ago
I'm getting "WARNING The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable." on launch. Any tips?
@Geekatplay 1 month ago
It looks like the issue is related to the bitsandbytes library not detecting GPU support, which is essential for efficient 8-bit optimization during LoRA training. Here are some tips to resolve it:
- Ensure CUDA compatibility: verify that your GPU supports CUDA and that the correct CUDA version is installed. Use nvcc --version or check your GPU specs to confirm compatibility.
- Update bitsandbytes: pip install --upgrade bitsandbytes
- Install the version matching your CUDA release (e.g., for CUDA 11.x): pip install bitsandbytes-cuda11x
- Check the PyTorch installation: make sure PyTorch is installed with GPU support, e.g. pip install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu117 (replace cu117 with your CUDA version, such as cu118 for CUDA 11.8).
- Verify the installation: import bitsandbytes as bnb; print(bnb.__version__)
- Fallback: if you can't resolve the GPU issue, you can still run CPU-based training, but it will be much slower.
If these steps don't resolve the issue, share more details about your setup (OS, GPU, CUDA and PyTorch versions) and I'll help you troubleshoot further! 😊
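To check the whole stack in one go, a defensive script like the following reports what is importable and whether CUDA is visible, without crashing when a package is missing. This is a generic diagnostic sketch, not part of FluxGym or bitsandbytes.

```python
import importlib.util

def gpu_stack_report():
    """Report which GPU-training prerequisites are present on this machine."""
    report = {name: importlib.util.find_spec(name) is not None
              for name in ("torch", "bitsandbytes")}
    if report["torch"]:
        import torch  # only imported if actually installed
        report["cuda_available"] = torch.cuda.is_available()
    return report

# print(gpu_stack_report())
# A result like {'torch': True, 'bitsandbytes': True, 'cuda_available': False}
# would match the warning above: the library is installed but sees no GPU.
```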
@WhySoBroke 1 month ago
@@Geekatplay Thanks for taking the time to troubleshoot!! You are a legend!
@K-A_Z_A-K_S_URALA 24 days ago
Greetings, my friend! Here is my question: when I used to train a LoRA with Kohya_ss-GUI-LoRA-Portable, I named the folder holding the photos with a prefix like 10_, and set the epochs to 1 in the settings. What should I enter here? Help me figure it out. As I understand it, Repeat trains per image = 10 works the same way as the prefix I gave the photo folder, and setting Max Train Epochs = 1 is the same as the 1 I had in the epoch settings. Is that right?
@Geekatplay 24 days ago
Greetings! Yes, you understood it correctly. In FluxGym, the "Repeat trains per image 10" setting is equivalent to naming the folder with a 10_ prefix in Kohya_ss; it is the number of training repeats per image. And "Max Train Epochs 1" corresponds to the 1 you used to set in the epoch settings. If you are used to that logic, you can safely use these settings; they work roughly the same way. The main thing is to watch the balance so the model doesn't overfit. Good luck with the training!
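The folder-prefix convention boils down to simple arithmetic: total optimizer steps are images x repeats x epochs, divided by the batch size. A tiny helper makes the equivalence between the two tools explicit; this is just the standard kohya-style step count, sketched for illustration.

```python
def total_training_steps(num_images, repeats_per_image, epochs, batch_size=1):
    """Kohya-style step count: a '10_name' folder means repeats_per_image=10."""
    return (num_images * repeats_per_image * epochs) // batch_size

# 20 photos in a '10_...' folder trained for 1 epoch -> 200 steps,
# the same as Repeat trains per image = 10, Max Train Epochs = 1.
```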
@K-A_Z_A-K_S_URALA 24 days ago
@@Geekatplay Thank you for the answer, friend!
@jacky_west 23 days ago
I don’t have the same comfy ui workflow
@Geekatplay 19 days ago
It may open in a secondary tab; check your browser.
@andyy79 28 days ago
Thank you Vladimir for making this tutorial; this is fantastic for local LoRAs and works like a charm :D I would like to use several trigger words to trigger different details on an object. Is that possible in FluxGym? In addition, I created some car LoRAs and the overall look is amazing, but how do I get the details cleaner? For example, a car brand logo on the rear of the car, or clean and accurate rims? The trained car looks great in proportion and surface, but logos and number plates, for example, are unreadable. Does anybody have experience with this or can give me some tips?
@Geekatplay 27 days ago
Thank you for the kind words! 😊 I'm glad the tutorial was helpful and that FluxGym is working well for your LoRA training! To address your questions:
Using multiple trigger words: yes, you can use multiple trigger words to control different details of an object in FluxGym. When training your LoRA, associate specific trigger words with different features during dataset preparation. Just ensure your dataset is annotated clearly and the training prompts reflect those distinctions.
Improving logo and detail quality: blurry or unreadable details like logos and number plates often result from insufficient high-quality samples in the dataset. Some tips:
- Dataset quality: include more high-resolution images of the details you're focusing on, such as logos, rims, and number plates.
- Data augmentation: use zoomed-in or cropped images of those specific details alongside full car images to reinforce their importance during training.
- Training parameters: try increasing the number of training steps or adjusting learning rates slightly to fine-tune for detail retention.
- Trigger words: use specific words for details like "logo," "number plate," or "rim" in your training prompts and during inference.
- Post-training refinement: after training, you can use inpainting or ControlNet with Stable Diffusion to refine specific areas (e.g., logos or rims) for cleaner results.
I'm sure with these tweaks you'll get even better outcomes for your car LoRAs! 🚗✨ If you or anyone else has additional tips or questions, feel free to share! 😊
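On the multiple-trigger-word idea: FluxGym, like kohya, reads a sidecar .txt caption with the same stem as each image, so different trigger words can be written into the captions of the images that show those details. A minimal sketch of generating such captions, with hypothetical trigger words and file names:

```python
from pathlib import Path

def write_caption(image_path, trigger_words, description):
    """Write a sidecar caption (.txt sharing the image's stem), kohya/FluxGym style."""
    caption = ", ".join([*trigger_words, description])
    Path(image_path).with_suffix(".txt").write_text(caption, encoding="utf-8")
    return caption

# Hypothetical dataset: tag rear-badge close-ups differently from full views.
# write_caption("dataset/car_rear_01.png", ["mycarlora", "rearbadge"],
#               "close-up of the rear logo")
```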
@andyy79 26 days ago
@@Geekatplay Thank you Vladimir:)
@INVICTUSSOLIS 29 days ago
Does this work with Mac?
@Geekatplay 29 days ago
Yes, training LoRAs for Flux works on a Mac. ComfyUI is compatible with Apple Silicon machines and can run Flux models, so you can train custom LoRAs locally and then use the Flux LoRAs within the ComfyUI interface on your Mac.
@INVICTUSSOLIS 28 days ago
@@Geekatplay I use Comfy regularly, but I never really tried training with it. I thought the FluxGym thing doesn't work with Mac; I tried it once and perhaps didn't know what I was doing.
@JakubSK 19 days ago
@@Geekatplay Nope. Not training.
@kzzrinal4154 19 days ago
Can an Nvidia RTX 4060 Ti with 8GB VRAM do this?
@Geekatplay 19 days ago
To effectively train a LoRA model using FLUX, you generally need at least 12GB of GPU VRAM, a powerful NVIDIA GPU (ideally an RTX 3000- or 4000-series card), and a significant amount of system RAM (around 32GB recommended) to handle the model size and the quantization process. Depending on the specific FLUX model ("flux1-dev" requires more VRAM than "flux1-schnell") and the complexity of your LoRA, you may need even more resources. With 8GB of VRAM you are below that threshold, so expect to rely on heavy quantization and low-VRAM settings.
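If you want to check where your card stands, a small probe can compare the GPU's reported memory to a requirement. This needs PyTorch with CUDA to say anything useful; the 12GB default comes from the rule of thumb in the reply above, not a hard limit.

```python
def has_enough_vram(required_gb=12):
    """Compare GPU 0's reported memory to a training requirement in GB."""
    try:
        import torch
    except ImportError:
        return False  # no PyTorch, so we can't query the GPU
    if not torch.cuda.is_available():
        return False
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    return total_gb >= required_gb

# An 8GB RTX 4060 Ti would return False against the 12GB default.
```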
@LouisGedo 1 month ago
👋 hi
@Geekatplay 1 month ago
Thank you!
@tonymontana6923 2 days ago
The LoRAs from FluxGym turn out not so great; Kohya SS is better, and if you want a closer facial likeness, it's better not to use captions at all.
@Geekatplay 13 hours ago
Thanks for the advice! Yes, I agree, FluxGym's LoRAs turn out less accurate. Kohya SS really does give better results, especially when preserving facial detail matters. Good point about captions: sometimes they can keep the model from picking up the features correctly. I'll try without them, thanks for the recommendation! 🚀
@ThePhoeniXeb 17 days ago
I'm getting "WARNING The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable." on launch. Any tips? I have an AMD 7800 XT.
@Geekatplay 17 days ago
It sounds like the issue might be related to compatibility with your AMD GPU, as bitsandbytes is optimized for NVIDIA GPUs and doesn't natively support AMD hardware. Unfortunately, bitsandbytes relies on CUDA, which is NVIDIA-specific. For AMD GPUs, you might want to explore alternatives like ROCm (Radeon Open Compute) for deep learning tasks. Check if the software you’re using has AMD-compatible configurations or plugins. If not, you may need to disable the bitsandbytes dependency and switch to a CPU fallback or other optimizer.
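One way to tell which backend your PyTorch build targets (and hence whether a ROCm build is actually installed for the AMD card) is to inspect torch.version: ROCm builds set torch.version.hip, while CUDA builds set torch.version.cuda. A defensive sketch that works even when PyTorch is absent:

```python
def detect_gpu_backend():
    """Classify the installed PyTorch build: ROCm, CUDA, or CPU-only."""
    try:
        import torch
    except ImportError:
        return "none"  # PyTorch not installed at all
    if getattr(torch.version, "hip", None):
        return "rocm"  # AMD ROCm build (what a 7800 XT would need)
    if torch.version.cuda:
        return "cuda"  # NVIDIA build (what bitsandbytes expects)
    return "cpu"
```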