How do you write prompts for FLUX? In natural language, tokens, or...?
@RASIhz (2 months ago)
Wow, this video is amazing, I didn't know these terms before.
@NextTechandAI (2 months ago)
@RASIhz Thanks a lot, I'm happy that you find the video useful.
@digitalspacestudio3956 (2 months ago)
Wow! This is real magic! Thank you for explaining everything so easily!
@NextTechandAI (2 months ago)
Thank you very much for your feedback! I'm happy you find the information useful!
@Marcus_Ramour (2 months ago)
Great video; I'm really finding your Flux tutorials/explanations very useful. I find the way I was prompting in SDXL is working well in Flux too: natural language, starting with the type of image and style, then a description of the subject, then pose and location. Flux gets very close, which then allows fine-tuning, whereas with SDXL I have to use a lot of ControlNet/IPAdapter alongside the prompts to get what I really want.
@NextTechandAI (2 months ago)
Thank you very much for your detailed feedback. Indeed, Flux and SDXL are not far from each other, and I'm not surprised that your approach works well.
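For anyone who wants to try the structure described above, here is a minimal sketch of that ordering as a prompt template. The example wording is invented purely for illustration and is not taken from the video or the comment:

```python
# Prompt built in the order suggested above:
# type of image & style -> subject -> pose/action -> location.
parts = [
    "A cinematic photograph in warm evening light",         # type of image & style
    "of an elderly violinist in a weathered leather coat",  # subject
    "playing mid-bow with his eyes closed",                 # pose / action
    "on a rain-soaked cobblestone square in an old town",   # location
]
prompt = " ".join(parts)
print(prompt)
```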
@rodopil1161 (13 days ago)
So, so useful and essential! Thank you very much :)
@NextTechandAI (13 days ago)
Thank you for your feedback, I'm glad the video was useful for you 😀
@evolv_85 (2 months ago)
This is great, thanks. It has saved me some time playing around with the prompts and settings. I started to move away from brackets and toward natural language prompts with SDXL to make things more straightforward and got great results as long as I got the settings right. As soon as I set up Flux, I went straight to natural language and got awesome results right away, particularly with the Schnell model. I am not seeing a great difference between the standard CLIP encoder and the Flux one.
@NextTechandAI (2 months ago)
Thank you very much for your detailed feedback. When generating with the same seed there is a noticeable difference between the standard and the Flux text encoder, but you are right, the difference is not very big. Happy to read that you are using the SCHNELL model, too.
@evolv_85 (2 months ago)
@@NextTechandAI Hi, no problem. It's great to share these things because it moves so fast. Today I've already found the FLUX NF4 version. It's half the size, twice as fast and results are good so far, not amazing but good enough.
@jayross661 (a month ago)
Great video and loved the explanations and walkthrough. Thank you!
@NextTechandAI (a month ago)
Thank you very much for your motivating feedback!
@lowrider6419 (2 months ago)
My current wallpaper is: Three anthropomorphic hares in red, blue and green clothes are pulling a wooden cart with large wooden wheels, carrying a single huge carrot, much larger than them. The action takes place in an autumn field with dried grass and small colourful meadow flowers growing along a dirt road. In the background, a dense forest can be seen in the distance.
@NextTechandAI (2 months ago)
Great idea, thanks for sharing. I did a quick generation, and with both Dev and Schnell it looks like a photo.
@kukipett (2 months ago)
I have also run a lot of tests with Flux and prompts, and I've noticed that Flux is not really suited for art but more for hyperrealistic, photo-like images. There is a way to make it follow your prompts more closely: I saw someone pass the model through a DynamicThresholdingFull node, which then lets you use a negative prompt and a CFGGuider to force-inject a CFG value that is normally set to 1 for Flux. And it works: I can add negative prompts and get a far more accurate image. I was surprised to see how much better my prompts were followed.
@NextTechandAI (2 months ago)
Thanks for the detailed description of your workflow. As mentioned in the video, I think SCHNELL is suitable for art; maybe you have focused on DEV? Nevertheless, I tried out a workflow for negative prompts for this video. Unfortunately it works with DEV only, it makes the generation process very slow, and it has proven unreliable for me. Won't your workflow be slowed down by negative prompts?
@kukipett (2 months ago)
@@NextTechandAI Well, I have only worked with DEV for now. Regarding speed, I've just run a test with the same settings on the normal generator and the special one: the normal one takes 54 seconds and the special one 1 minute 44 seconds. I should add that I have two LoRAs loaded and a 3080 Ti with 12 GB of VRAM. I use the FP8 DEV model and the T5XXL FP16 text encoder.
@NextTechandAI (2 months ago)
@kukipett Then our experiences with DEV coincide. I've read that a negative prompt takes about twice as long because of the second pass; that would also fit. Thanks for sharing your numbers.
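As background for why a negative prompt roughly doubles the time per step: classifier-free guidance needs two model evaluations per sampling step, one with the positive and one with the negative conditioning, and blends the two predictions. A generic sketch of that blend follows; this is the standard CFG formula, not the actual code of the DynamicThresholdingFull or CFGGuider nodes:

```python
import numpy as np

def cfg_combine(pred_cond, pred_uncond, cfg_scale):
    """Standard classifier-free guidance blend of the two per-step predictions."""
    return pred_uncond + cfg_scale * (pred_cond - pred_uncond)

# Toy stand-ins for the two model evaluations of a single sampling step:
pred_cond = np.array([0.2, -0.5, 1.0])    # pass conditioned on the positive prompt
pred_uncond = np.array([0.1, -0.1, 0.4])  # pass conditioned on the negative/empty prompt

print(cfg_combine(pred_cond, pred_uncond, cfg_scale=1.0))  # equals pred_cond: the second pass can be skipped
print(cfg_combine(pred_cond, pred_uncond, cfg_scale=3.5))  # real CFG: both passes are required every step
```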
@evolv_85 (2 months ago)
I'm using schnell and get amazing artwork. It's generating anything I tell it to so far.
@Beauty.and.FashionPhotographer (2 months ago)
Suggestion for a cool video: not many people talk about the "ProMax model", diffusion_pytorch_model_promax.safetensors.
@NextTechandAI (2 months ago)
Thanks for the hint, but there are already videos about ProMax. Anyhow, I'll put it on my list.
@RodrigoAGJ (14 days ago)
I’m really eager to try out this interesting workflow! Where can I find it?
@NextTechandAI (14 days ago)
I'm glad the video is useful. Which workflow do you mean?
@Hilfe (a month ago)
Crazy accent 😀👍🏼
@NextTechandAI (a month ago)
I'm glad you enjoyed the video.
@oldfeiwang (a month ago)
How do you weight parts of the prompt in FLUX, like SD1.5's (word:weight)? It doesn't seem to work that way.
@NextTechandAI (a month ago)
You have to use natural language and describe important items in more detail.
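To make that concrete, here are two hypothetical prompts for the same idea: the SD1.5-style weighting syntax that Flux's text encoders do not interpret, and a natural-language rephrasing that carries the emphasis in the description instead. The wording is invented for illustration:

```python
# SD1.5/SDXL style -- the (word:weight) syntax is ignored by Flux:
sd_style = "portrait of a woman, (red scarf:1.4), city street"

# Flux style -- give the important element more descriptive detail instead:
flux_style = (
    "Portrait of a woman on a city street. She wears a striking, bright red scarf "
    "wrapped loosely around her neck; the scarf is the clear focal point of the image."
)
print(flux_style)
```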
@Asyouwere (a month ago)
Nice video. Suggestion: lower the music; the excessive ducking is distracting.
@NextTechandAI (a month ago)
Thanks for your feedback and suggestion.
@rijnhartman8549 (a month ago)
You should create a custom GPT in ChatGPT with this in the backend.
@NextTechandAI (a month ago)
This is indeed an interesting idea.
@as-ng5ln (15 days ago)
DEV has its own VAE.
@NextTechandAI (15 days ago)
What do you mean? There is one VAE for Flux, but some checkpoints have it included directly.
@as-ng5ln (15 days ago)
@NextTechandAI DEV has a special VAE that can be downloaded from Hugging Face; maybe that is why the images turned out so poorly.
@NextTechandAI (15 days ago)
@@as-ng5ln No, there is one VAE for Flux. This has absolutely nothing to do with the fact that Schnell follows prompts better than DEV. Try it yourself and generate the same image with both VAE files. By the way, you can try this with SD3.5 Large and Turbo, too.
@as-ng5ln (15 days ago)
@@NextTechandAI I'm telling you... I have the two files "ae.safetensors" and "flux1DevVAE_safetensors.safetensors". ae comes from Schnell, while the other one is from the DEV directory.
@NextTechandAI (15 days ago)
@as-ng5ln Yes, and they have the same effect on Flux image generation. As I said, try it yourself.
@ShakouTheWolf (2 months ago)
Hello, Flux seems to be a model for realism, correct? But how much fantasy stuff can we render with it? For example, I can get DALL-E 3 to render cartoony inflated tigers like in Tom and Jerry. Can Flux do this too?
@NextTechandAI (2 months ago)
With SCHNELL it's not a problem to render fantasy stuff; see the dragons in my vid. Although you can create very realistic images with DEV, fantasy images are possible, too. It just doesn't follow your prompts as closely, but you can use lots of LoRAs to create certain styles.
@ShakouTheWolf (2 months ago)
Interesting! I'll have to give it a try, but I'm not sure it can exceed the quality I expect, since I have been using DALL-E 3 for that. I could show you examples through DMs or something, @NextTechandAI.
@CryptoPRO-fo5wi (2 months ago)
You can do simple inpainting with image-to-image in ComfyUI. But how are you going to use Flux NF4?
@NextTechandAI (2 months ago)
Right, but how is this related to prompting, the topic of the video? BTW, I'm using Flux GGUF, not NF4 (kzbin.info/www/bejne/eF62qZKOeKaksM0).
@CryptoPRO-fo5wi (2 months ago)
@@NextTechandAI Flux NF4 works fine with 8GB VRAM, but when I try to run Flux Q4, it fails. It seems like Q4 requires more VRAM.
@NextTechandAI (2 months ago)
@CryptoPRO-fo5wi Interesting; in the comments of my GGUF vid there is some positive feedback from cards with 8 GB of VRAM and less. In theory NF4 is optimized for speed and GGUF is optimized for size. With 8 GB you should easily run Q2_K and Q3_K_S. If this works you could try Q4_K_S, which has higher quality. Anyhow, you should use the latest GGUF updates; there have been several optimizations.
@CryptoPRO-fo5wi (2 months ago)
Thanks, I'll try Q2_K and Q3_K_S first, then see if Q4_K_S works. I'll also make sure to update GGUF to the latest version for those optimizations.
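For a rough feel of what those quantization levels mean in practice, here is a back-of-the-envelope size estimate for a roughly 12-billion-parameter Flux transformer. The bits-per-weight figures are approximate averages typical of llama.cpp-style K-quants, not exact values for these specific files, so treat the results only as ballpark numbers:

```python
# Rough size estimate: parameters * bits per weight / 8 = bytes.
PARAMS = 12e9  # approximate parameter count of the Flux transformer

# Approximate average bits per weight for common GGUF quant levels (assumption).
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q3_K_S": 3.4, "Q4_K_S": 4.6, "Q8_0": 8.5}

for quant, bpw in BITS_PER_WEIGHT.items():
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant}: ~{size_gb:.1f} GB")

# The text encoders and VAE need memory on top of this, so actual VRAM
# usage during generation is higher than the raw transformer file size.
```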
@CanaldoLipeSt (2 months ago)
I'm new to AI and I'm having difficulties with Flux when it comes to creating people with facial expressions, for example sadness, anger, joy, etc. I also have difficulty with action/movement, such as jumping, running, sitting, lying down, etc. Flux doesn't seem to be very friendly with camera movement either; it hasn't been easy to get certain camera angles using the prompt. Is anyone else having this difficulty?
@NextTechandAI (2 months ago)
Are you using DEV or SCHNELL? SCHNELL reacts better to prompts, as described in my video. Facial expressions are not always perfect, but sadness, anger and joy do look different. Camera settings are indeed difficult. SCHNELL reacts to, e.g., "expansive focus" and "narrow focus", but I haven't found a reliable way to determine the camera height.
@CanaldoLipeSt (2 months ago)
@@NextTechandAI Thanks for answering. I use Dev but I also have Schnell; I didn't know about the difference between them. I'm going to run some tests with the other version and see if I get better results! Thanks!
@NextTechandAI (2 months ago)
@CanaldoLipeSt I'm happy if the tip was helpful. Thanks for your feedback.
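A few hypothetical prompt fragments one might experiment with along these lines; the exact wording is invented for illustration, and results will differ between DEV and SCHNELL:

```python
# Invented test prompts for expressions, actions and camera framing.
expression_prompts = [
    "Close-up portrait of an old fisherman, face twisted in open-mouthed grief, tears running down his cheeks",
    "Candid photo of a young woman laughing out loud, eyes squeezed shut, head tilted back",
]
action_prompts = [
    "A sprinter captured mid-jump over a hurdle, both feet off the ground, motion blur on the track",
]
camera_prompts = [
    "Low-angle shot looking up at a knight standing on a cliff edge, wide-angle lens, narrow focus on the armor",
]
for p in expression_prompts + action_prompts + camera_prompts:
    print(p)
```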
@AInfectados (2 months ago)
Link to your workflow, please. And can you add a node for LoRAs?
@NextTechandAI (2 months ago)
I've used the standard workflow you can find in Comfy's examples. If you don't know them, in the description you find a link to my FLUX installation video. If you want to try the mentioned GGUF models, the link to my GGUF video is in the description, too.
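For anyone who prefers scripting over ComfyUI, a minimal sketch of loading a LoRA with the diffusers library might look like the following; this assumes a recent diffusers release with Flux support, and the LoRA path is a placeholder, not a real file. In ComfyUI itself, the usual approach is a LoRA loader node wired between the model loader and the sampler.

```python
import torch
from diffusers import FluxPipeline  # assumes a diffusers version with Flux support

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

# Placeholder path -- any Flux-compatible LoRA in .safetensors format.
pipe.load_lora_weights("path/to/your_flux_lora.safetensors")

image = pipe(
    "A watercolor painting of a lighthouse at dawn",
    guidance_scale=0.0,       # Schnell is distilled, so no CFG is needed
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("lora_test.png")
```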
@AInfectados (2 months ago)
How do I get the CLIP Encoder Flux node?
@NextTechandAI (2 months ago)
In the description you find a link to my video about installing FLUX, there's what you need.