Is SUPER FLUX the Secret to Insane Details?

31,388 views

Olivio Sarikas

A day ago

SUPER FLUX is an easy method to greatly improve Flux details. It will also save you a lot of time by making test renders much faster, with only 10 steps.
#### Get my Workflow here: / is-super-flux-to-11432...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoff...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorial...
Support me on Patreon: / sarikas

Comments: 216
@OlivioSarikas · a month ago
Get my Workflow here: www.patreon.com/posts/is-super-flux-to-114327248
@Gmlt3000 · a month ago
Thanks, but can you post the workflow somewhere other than Patreon? It's banned in some countries...
@LouisGedo · a month ago
👋
@SteelRoo · a month ago
Feels like just greed if you promote something in a free community and then want us to pay. :(
@HiProfileAI · a month ago
Nice. The image of the woman still got the Flux chin and Flux plastic skin though. Lol.
@OlivioSarikas · a month ago
@@SteelRoo Feels more like you're too lazy to build the stuff I show you for free and want everything handed to you on a silver platter. This is my job.
@ChadGauthier · a month ago
Okay, the zoom into the eye with the human standing there was actually insane.
@mikaelsvenson · a month ago
Love the flow, and super happy to finally give you a small token of support per month!
@RoguishlyHandsome · a month ago
How does it handle text generation? Obviously when you don't ask for text, it can be forgiven that any text generated is gibberish, but what happens if you prompt for text generation?
@runebinder · a month ago
I followed the video to build it as I'm not subscribed to his Patreon, so I can't 100% say I've done it all correctly, but text generation on a T-shirt worked fine on my version.
@baheth3elmy16 · a month ago
Thank you very much! I recreated the workflow on my RTX 3050 8GB VRAM, 32GB RAM, and the result was WOW. The whole generation process took 10.40 minutes. I repeated the generation using the same nodes but added a LoRA and changed the model to FP8, and the process took only 8 minutes. FP8 is much faster on my system than the Q8 GGUF.
@skycladsquirrel · a month ago
Nice! Great job Olivio!
@jibcot8541 · a month ago
It doesn't seem like this would be very fast, as it has 3-4 samplers. But I do like that the workflow focuses on highest quality. It is similar to Nerdy Rodent's latest, but he also used custom scheduler sigmas to give more control over the generation (like help dealing with the turkey skin).
@tripleheadedmonkey6613 · a month ago
Because the model doesn't load and unload between samplers, it is equivalent in time to doing 30 steps on one sampler, but it finalizes the overall composition early, so you can see the result of the composition sooner and cancel the render process if you don't like it. This is mainly how it is faster. Plus it just gives better render results comparatively. I'll have to check out Nerdy Rodent's latest workflow too, though.
@OlivioSarikas · a month ago
The fast part is having a full preview of the composition after 10 steps that stays the same. So (depending on GPU) after about 10 seconds you know if you will go on or cancel. And then you get much better details if you keep going.
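The split-schedule idea discussed here can be sketched with a toy denoise loop. This is only an illustration of the step-splitting and per-stage seeds, not ComfyUI's actual sampler code: `sigma_schedule` and `run_stage` are stand-ins for the scheduler and the KSampler Advanced node.

```python
import numpy as np

def sigma_schedule(total_steps: int) -> np.ndarray:
    """Toy monotonically decreasing noise schedule: total_steps + 1 sigma values."""
    return np.linspace(1.0, 0.0, total_steps + 1)

def run_stage(latent, sigmas, start_step, end_step, seed):
    """Stand-in for one KSampler Advanced pass: denoise from start_step to end_step.
    Here 'denoising' just nudges the latent by the amount of noise removed."""
    rng = np.random.default_rng(seed)            # each stage gets its own seed
    for i in range(start_step, end_step):
        removed = sigmas[i] - sigmas[i + 1]      # noise removed at this step
        latent = latent * (1 - removed) + rng.normal(0, 0.001, latent.shape) * removed
    return latent

total = 30
sigmas = sigma_schedule(total)
latent = np.random.default_rng(0).normal(size=(4, 8, 8))

latent = run_stage(latent, sigmas, 0, 10, seed=1)   # preview point: decode and inspect
# ...if the composition looks wrong, cancel here and re-roll...
latent = run_stage(latent, sigmas, 10, 20, seed=2)  # refine details
latent = run_stage(latent, sigmas, 20, 30, seed=3)  # final pass (after the upscale in the real workflow)
```

The total work is the same 30 steps; the gain is the decision point after stage one.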
@TheMadManTV · a month ago
This is crazy. I tried your method and the results came out very well. Really amazing. There is almost no need to use additional nodes to add details to the face and hands; it's almost finished as-is. Thank you very much for this secret... Always loved by Thai fans.
@bentontramell · a month ago
Every time a Caucasian woman is generated in Flux, the same one shows up. She made her appearance in this video as well. 😅
@devnull_ · a month ago
Not every time, but if you put in any keywords similar to "beautiful" / "very beautiful", you are going to get that same generic look. Flux does similar things with lighting and perspectives too.
@devnull_ · a month ago
And if you do get that same face even without asking for a beautiful face, you can try lowering your Flux guidance value, or use a LoRA.
@MilesBellas · a month ago
Overfitted?
@kasberkhof7958 · a month ago
Meet Mrs. Flux
@MilesBellas · a month ago
@@bentontramell I tried to post a paragraph about overfitting, but it was censored!
@victormustin2547 · a month ago
Why do all girls generated by Flux have that same chin?
@Elwaves2925 · a month ago
I can't say for definite in regards to the chin, but I suspect the females (in particular) were trained off professional models and stock-photo-style models. They all have that cleaned-up, professionally airbrushed look with the base models. You really have to prompt and/or use LoRAs and dedicated checkpoints to get away from that. It may also explain the chin in some way.
@devnull_ · a month ago
No, only if you don't know how to prompt properly / tune your generation parameters / don't know how to train a LoRA.
@TomGlenny · a month ago
@@Elwaves2925 Helpful answer.
@victormustin2547 · a month ago
@@devnull_ I know how to do all of this, but I wonder why the default always has this; it's just very specific.
@aandarcom · a month ago
I hate that default girl's face with the Peter Griffin chin too... :) There are two LoRAs that fix that bony face problem: "Chin Fixer 2000" and "AntiFlux - Chin". Or you can use any Asian face LoRA, because Asian women rarely have that defect. For example "Flux - Asian Beauties - By Devildonia".
@DarioToledo · a month ago
Basically a 3-pass with noise injection after the first KSampler and an upscale after the second. It gave me a gridded image because on the first KSampler I set the end step to 10 but the total steps to 20 (where you set a total of 10), so the second KSampler couldn't converge starting from 10 and finishing at 20 out of a total of 20. Which means you are obliged to use a KSampler Advanced and not a KSampler Custom with a SplitSigmas node, because the latter only does the first thing I described. How unfortunate. Gonna try this other approach with turbo.
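The constraint described here, that every chained sampler must agree on one shared total step count with contiguous start/end ranges, can be checked with a small sketch (`check_stage_plan` is a hypothetical helper for illustration, not a ComfyUI API):

```python
def check_stage_plan(total_steps, stages):
    """Validate that chained (start, end) step ranges share one schedule:
    contiguous, non-overlapping, starting at 0 and ending at total_steps."""
    expected_start = 0
    for start, end in stages:
        if start != expected_start:
            raise ValueError(f"stage starts at {start}, expected {expected_start}")
        if not (0 <= start < end <= total_steps):
            raise ValueError(f"range ({start}, {end}) outside 0..{total_steps}")
        expected_start = end
    if expected_start != total_steps:
        raise ValueError(f"stages end at {expected_start}, not {total_steps}")
    return True

check_stage_plan(30, [(0, 10), (10, 20), (20, 30)])  # the workflow's plan: valid
```

A first sampler ending at step 10 of a 20-step schedule while the next assumes a different total is exactly the kind of mismatch this rejects.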
@alpaykasal2902 · 29 days ago
GENIUS!
@UnlimitedGRenemy · a month ago
The upscaler model you use is not marked as safe on Hugging Face.
@2008spoonman · 27 days ago
Just download the safetensors variant.
@dermeisterschmidt6367 · a month ago
In general, very nice details. But how do you get rid of the banding artifacts?
@IthrielAA · a month ago
I found with testing just now that having all three set to BETA gives my final result a fake/waxy skin appearance, but switching the middle step to KARRAS kept the realistic skin look throughout.
@gjewell99 · 15 days ago
Excellent find. For me, the final result almost resembled an illustration. Your recommendation fixed this perfectly.
@tetsuooshima832 · a month ago
Hey, it's very interesting, but how do I add a denoise strength to that KSampler Advanced??
@equilibrium964 · a month ago
The method works extremely well when it comes to details, but for some reason I get fine horizontal stripes in my image after the last KSampler (upscaling). Does anyone have any idea what is causing this?
@kivisedo · a month ago
I'm getting the same striping, something that hasn't been a problem with other workflows.
@equilibrium964 · a month ago
@@kivisedo It's the upscaling; Flux gets wacky when you go over 2 megapixels. I solved it by running the image through SD Ultimate Tiled Upscale for 4 steps with 0.15 denoising.
@tomaslindholm9780 · a month ago
@@equilibrium964 Exactly what I saw, and I did the same to fix it. Works fine with a little extra mask blur (16), tile padding (48), and low denoise.
@jaysire · 13 days ago
How does the final image resemble the prompt so well, even though CFG is set to 1.0 for each step?
@KDawg5000 · a month ago
Did I miss this, or was there talk of time savings? If so, what is the time comparison of this method vs. the normal method? Side note: if you want to do fast iterations in Flux, you can render at 512x512. When you get something you like, just hi-res fix it by 2x to make it 1024x1024. If you set the denoise to 0.35 and the hires steps to ~15, it looks almost identical to the 512x512 version. (Note: I'm talking about using it in Forge, but you could just activate an upscale if you did it in ComfyUI.)
@OlivioSarikas · a month ago
The time saving is that you get a full image after 10 steps and can then cancel if you don't like it. 512x512 gives a different composition and fewer details, so you will get a worse image in the end.
@KDawg5000 · a month ago
@@OlivioSarikas Ah, OK, gotcha, you just cancel if you don't like it. In my 512x512 testing, I'm not seeing fewer details or composition changes when using hi-res fix in Forge. I can't post examples here, unfortunately.
@hidalgoserra · a month ago
Great workflow! Works perfectly. One question: I see in the video that the LoRA is not connected. In case I'd like to use it, where does the LoRA node need to be plugged in? On the CLIP input of the positive prompt node?
@paleopteryx · 24 days ago
The workflow seems quite ingenious. I tried it, but I keep getting bands/stripes on the final render, after the upscale. They are not so obvious until the last step, but after the upscale they are quite annoying. No matter what I did, I couldn't get rid of them completely.
@SwampySi · a month ago
Are you making the workflow available to non-patrons at some point?
@ernstaugust6428 · a month ago
I recreated it. It's simple.
@armauploads1034 · 14 days ago
And why exactly is the upscaler inserted in between? Unfortunately, I don't understand that. It would be very nice if someone could explain this to me. 🙂
@xibeon · a month ago
That's kind of an awesome workflow. Thanks. Have you tried this method with SDXL or even SD 1.5? I wonder if the quality would also be improved on older txt2img generators.
@ricperry1 · a month ago
This is the most useful video you've ever made. And I generally find all of your videos useful. Thanks, Olivio!!
@WillFalcon · a month ago
The 4x upscaler makes a weird squared pattern in the final image. Do you have any idea how to fix it?
@WillFalcon · a month ago
I found out it was a resizer issue, so what resizer do you use? Which node, I mean, not the model.
@runebinder · a month ago
Interesting idea; I followed the video along to build it. It seems to give backgrounds a lot more detail and less of the blurred bokeh effect, which I really like. I did get the faint grid pattern I've found with Flux and was surprised for such a small upscale, but adding an SDXL pass at the end with a denoise of 0.1 fixed the issue and results in better skin detail :)
@davidwootton7355 · a month ago
Once I tracked down all the pieces, this works well. One suggestion to make the workflows clearer: double-click on node titles and change them to something descriptive like Old Style Advanced KSampler, Stage 1 Advanced KSampler, etc.
@davidwootton7355 · a month ago
Second suggestion: once I got this working, I simplified the workflow using the Anything Everywhere and Anything Everywhere3 nodes along with some filtering by node coloring to get rid of all the lines in the graph. Matter of opinion, though; to some it might obscure the logic of the workflow.
28 days ago
I really need to test FLUX lol !!! But I'll try your trick with SDXL too. Great video as usual !!!
@earthequalsmissingcurvesqu9359 · a month ago
Injecting latent noise gives much better results.
@archael18 · a month ago
He had all of them enabled for adding noise.
@tetsuooshima832 · a month ago
Wait, how do you do that? I tried enabling leftover noise, noise injection... It has zero effect x)
@earthequalsmissingcurvesqu9359 · a month ago
@@tetsuooshima832 Latent Vision on YouTube (creator of IPAdapter, InstantID, PuLID, and many more).
@MannyGonzalez · a month ago
Thanks, Olivio. What about img2img... how should I handle the denoise of the first pass? I typically use 0.65 denoise in a regular single-pass workflow... Cheers!
@FutonGama · a month ago
Nice, looks way better.
@alienrenders · a month ago
AI image generation is basically a glorified denoiser. I'm wondering if too much noise was removed in the first sampler. I would be interested to see the results if you did steps 1-10 of a max of 12 (or up to 15), for example, for the first sampler. This way you have an overlap, but you're still letting the second sampler not go to waste as much. The way you have it now, the second sampler is nothing more than a full-image inpaint with a very low denoising strength.
@titerote71 · a month ago
With the caveat that the image on which the entire process is based has been generated in an accelerated manner in 10 steps, which increases the chance of alterations and malformations in the hands, eyes, anatomy, etc., which will not later be correctable in the remaining refinement passes.
@helveticafreezes5010 · a month ago
Where would you recommend inserting the LoRA? Beginning, middle, end, all of the above?
@OlivioSarikas · a month ago
Try different ways, but I used it at the first KSampler so it doesn't interfere with the rest. It might get to be too much if you use it on all three.
@helveticafreezes5010 · a month ago
@@OlivioSarikas Thank you, I'll try it out!
@pingwuTKD · a month ago
@OlivioSarikas Thank you so much for this!! Any tips on how to speed this up on an M1 MacBook Pro? I have followed this example with the exception of using the safetensors version instead of the GGUF version. It's going rather slow, though.
@krakenunbound · a month ago
Have you tried this with SD 3.5 yet? I've been trying to and having zero luck.
@AltoidDealer · a month ago
Honest question: could this accurately be described as a "2x HR Fix"? Instead of a 30-step gen, it's a 10-step gen, followed by a 10-step "HR fix", followed by another 10-step "HR fix"?
@OlivioSarikas · a month ago
Maybe, but keep in mind that this does steps 10-20 out of 20, not just 10 extra steps. And that's important for how Flux works, because if you did a second 0-10 out of 10, you would simply get the same image again.
@AltoidDealer · a month ago
@@OlivioSarikas Thanks for the reply! I've been contributing to Forge, and your video has me thinking that HR Fix has untapped potential; perhaps a "loops" parameter would yield these results. One thing that doesn't make sense to me in this workflow is how much noise is being added to the latent outputs in the next sampling. It's just a true/false value... I would think this should be similar to "denoising strength" in WebUIs, where a lower value adds less noise to the latent output and a higher value adds more. In regard to your reply: if each of those 3 KSampler nodes generated the total steps (from 0 to X) without feeding in a latent input, would the resulting images be drastically different from each other?
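The two knobs being compared here can be related roughly: a WebUI denoising strength of d on an N-step schedule corresponds to starting an explicit step range at about N - round(d*N), which is what KSampler Advanced-style start steps express directly. A sketch of that mapping (an approximation; exact behavior depends on the scheduler):

```python
def denoise_to_start_step(total_steps: int, denoise: float) -> int:
    """Map an img2img 'denoising strength' onto an explicit start step:
    denoise=1.0 starts from pure noise (step 0); denoise=0.35 on a 20-step
    schedule runs roughly the last 7 steps."""
    assert 0.0 <= denoise <= 1.0
    return total_steps - round(total_steps * denoise)

print(denoise_to_start_step(20, 1.0))   # 0  -> full generation
print(denoise_to_start_step(20, 0.35))  # 13 -> run steps 13..20
print(denoise_to_start_step(30, 1/3))   # 20 -> comparable to the third pass here
```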
@freakguitargod · a month ago
Hello, thanks for the video. I wanted to ask where you got your upscale model from. I cannot find it in the Comfy model manager. Thanks.
@2008spoonman · 28 days ago
Same here, cannot find it. Used search and Google. Nothing.
@pietarikoo · a month ago
@OlivioSarikas Have you tested if FLUX Schnell is any better with this workflow?
@midgard9552 · a month ago
Is this also available for Forge? :)
@davidwootton7355 · a month ago
One question where I'm not fully understanding something: the original empty latent is 1304x768, but in the Image Resize node the resize width and height are 1024x1536. It seems this would switch the image from landscape to portrait and distort it because of the different aspect ratio, but all images are about the same, following the aspect ratio of the first image. Why does this work?
@OlivioSarikas · a month ago
No, it uses the scale factor of 0.5, not the pixel values, because it is in "rescale" mode.
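A toy stand-in for the resize node shows why the width/height widgets are irrelevant in rescale mode (assuming the 4x upscale model runs first, so the node receives a 4x image and the 0.5 rescale nets 2x overall):

```python
def resize_dims(width, height, mode, scale=1.0, target=(1024, 1536)):
    """Toy version of an Image Resize node: in 'rescale' mode only the scale
    factor matters and the explicit width/height widgets are ignored; in
    'resize' mode the target dimensions would be used instead."""
    if mode == "rescale":
        return round(width * scale), round(height * scale)
    return target

# 1304x768 latent -> 4x model upscale -> 5216x3072 -> rescale 0.5:
print(resize_dims(5216, 3072, "rescale", scale=0.5))  # (2608, 1536): aspect ratio kept, net 2x
```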
@starblaiz1986 · a month ago
At first I was like, "Wait, WHY does this work?" But then I noticed each KSampler has a different seed, and it all clicked: by changing the seed, it triggers the sampler to do different things in each part of the image than it otherwise would, and that's what introduces the extra detail. That's actually kinda genius! 🤯 I wonder if adding noise between KSamplers would help too? 🤔 Come to that, I wonder what would happen if you had a different KSampler for every single step? 🤯
@henrywang4010 · a month ago
Would using "Scale to Megapixels" 2.0 be more efficient than going to 4x and then back down to 2x?
@AncientShinrinYoku · a month ago
Genius! 🎖🎖
@alexshaw5952 · a month ago
Hi, I got your workflow from Patreon, but I don't have all the same files (LoRAs, upscalers, etc.). Do you have links for them?
@nesimatbab · a month ago
I tried this and ran into "Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding" on the third KSampler. It's progressing, but it's taking 30 minutes for 30 steps. I'm on a 3090 24GB, btw.
@lexmirnov · a month ago
Did you skip the image resize 0.5 node? I had the same on a 3080 Ti 16GB.
@tomaslindholm9780 · a month ago
@@lexmirnov @nesimatbab That's probably it. It should be less than 160 seconds. And I am using fp16 on my 3090. Still, there's no doubt there are a lot of pixels to push, considering it's still quite a large final image.
@oneanother1 · a month ago
How long does this take? Also, can Auto1111 do this as well with other models? How much VRAM does it use? What about doing the Flux ControlNet upscaler? Or doing SUPIR?
@skistenl6566 · a month ago
Every subsequent image after the first one has a thicker and thicker outline. They look like they were drawn with a thick Sharpie 😅. Do you have any idea how to fix it?
@OlivioSarikas · a month ago
Use different seeds per KSampler.
@skistenl6566 · a month ago
@@OlivioSarikas 😮 I'm surprised you know the cause from just 2 sentences and even have a solution. Thank you so much for the quick reply 🫰. I'll check it out.
@lurker668 · a month ago
I use NF4: fast and highly detailed. It's not LoRA compatible, but I never use it anyway.
@quipquick1 · a month ago
Hey Oldie, follower from Nepal... Dhantaran
@KalleLaski-p8d · a month ago
What are the two nodes called after the Unet Loader and DualCLIPLoader?
@KonoShunkan · a month ago
Do you mean the small blank ones? They are called Reroute. They are passive nodes used to extend the output of a node closer to where it is needed, especially where there are a lot of connections to the output. The connection passes through them and they have no effect. Their use is optional.
@EH21UTB · a month ago
Interesting idea. How long does it take to run on a 4090? Have you tried skipping a step, i.e. starting at 11 instead of 10? Or injecting noise?
@ernstaugust6428 · a month ago
RTX 3060 / 12GB: the first two steps take 1 minute each. The last step takes 4.5 minutes, with minor improvements compared to step 2.
@EH21UTB · a month ago
@@ernstaugust6428 Thanks for the info. I built a similar WF and it's running in about 2 mins total on my box with a 4090. I added more steps and a 4th stage, so it gets sampled twice after the upscale.
@MrCreativewax · a month ago
I'm a bit gutted that you've just shown what I had figured out with SDXL and Flux. I do very similar workflows with 3 passes and uncontrolled image-back-to-latent passes to do just this, and consistently get better images for it too.
@Elwaves2925 · a month ago
Is your SDXL workflow available anywhere? I'd be curious to try it out if it is.
@henrismith7472 · 8 days ago
Why are you a bit gutted? I don't get it... Is it because someone had the same idea as you? But why would that be bad? 🤷‍♂️ It probably happens more than you'd think.
@livinagoodlife · a month ago
Joined your Patreon, but you don't seem to reply to questions there.
@DaveTheAIMad · a month ago
The first test I did went really well; then it started overcooking the image on all subsequent tests. It seems very situational: great when it works, awful when it doesn't.
@OlivioSarikas · a month ago
Make sure you have different seeds on the different KSamplers. Also, you might have to test different step counts with community-trained models.
@DaveTheAIMad · a month ago
@@OlivioSarikas It was the seed issue. I didn't see your reply to this comment but did get a reply on your Discord. Having the same seed for 0 to 30 and 0 to 10, then different seeds for 10 to 20 and 20 to 30, makes the 10/20/30 method work again. Cheers.
@tripleheadedmonkey6613 · a month ago
Great video! Thanks for the shoutout. Always happy to help test and improve on things! And the workflow results are looking clean too :D
@MilesBellas · a month ago
"In German, the word for 'windscreen wiper' is 'Scheibenwischer.' It's a compound word made up of 'Scheibe,' which means 'windscreen' or 'windshield,' and 'Wischer,' which means 'wiper.' So, literally translated, 'Scheibenwischer' means 'windscreen wiper' or 'windshield wiper.'"
@tomaslindholm9780 · a month ago
Seed? Do you run the same seed for all 3 samplers (setting the seed widget to input), or generate separately, like you do in this workflow? And the way you run it, is there any point in using separate nodes for seed generation?
@OlivioSarikas · a month ago
Separate, because the same seed will introduce problems in the image.
@GregorioMuraca · a month ago
Do you get better results with 3 different seeds? Or can you use the same seed in the three steps?
@OlivioSarikas · a month ago
Three different seeds seem better, because the same seed will amplify errors over time.
@GregorioMuraca · a month ago
@@OlivioSarikas I really like your channel. I always learn something new from your videos. Thanks for sharing. 🙏
@sinayagubi8805 · a month ago
Wow! Awesome!
@bgtubber · a month ago
Very interesting and creative workflow. I don't use GGUF models, though. Is this trick useful for someone like me who uses FP8 models? I did a couple of quick tests with a fine-tuned model (STOIQO New Reality FLUX) and I didn't see any perceivable difference in the amount of detail and texture quality doing this in 3 stages instead of doing all steps in 1 stage.
@OlivioSarikas · a month ago
You can also use it with the other models, but you need to change the model loader.
@bgtubber · a month ago
@@OlivioSarikas I'm afraid you misunderstood my question. Also, I already used the appropriate loader when I did my test with the FP8 model. My point was: if I use a "normal" FP8/FP16 model, is there any benefit to this 3-stage workflow instead of using just 1 KSampler? As I mentioned, I did not notice a difference in image quality when doing it in 3 stages vs. 1 stage with the FP8 model STOIQO New Reality FLUX.
@Freeak6 · a month ago
I'm not sure why, but I don't get the same results as you. After my first pass (10 steps), the image is already very realistic. After the 2nd pass (20 steps), the image has more details, but it's overcooked (too much contrast, weird colors); it starts looking like a painting. After the 3rd pass it's basically the same, so the end result after 30 steps is worse than after 10 steps. I used the same models as you (for the GGUF and the upscale). I'm not sure why that is.
@Freeak6 · a month ago
To answer my own comment (maybe it can be helpful to others): I initially used the same noise_seed for every sampler, which produced this overcooked effect. With a different noise seed for each sampler, it's much better :)
@OlivioSarikas · a month ago
Make sure you use a different seed on each image.
@Freeak6 · a month ago
@@OlivioSarikas Yes, I fixed that, and it works well for characters, but I realized that for scenery the second pass tends to make the image look fake (compared to the first pass). I'm losing lots of detail (textures); the image looks too clean, with strong contrasts and saturated colors. I'm trying to add some extra conditioning for the 2nd pass to keep it realistic, but no success so far.
@davidmanas95 · 24 days ago
@OlivioSarikas Can you do a version with a LoRA for Flux? Please!
@OlivioSarikas · 24 days ago
This is for Flux.
@davidmanas95 · 24 days ago
@@OlivioSarikas Yes, I mean your workflow: can you do a version with a LoRA? I don't know if I have to put a LoRA on each KSampler.
@OlivioSarikas · 22 days ago
@@davidmanas95 You mean this? kzbin.info/www/bejne/oJfFop-Jlrd8hqs
@97BuckeyeGuy · a month ago
So weird. My output looks like absolute garbage. And my workflow is running about 4 times slower than usual. Did Comfy update something today? 😢
@vannoo67 · a month ago
Are you running on Windows? I recently discovered that Nvidia drivers for Windows (since Oct 2023) allow system RAM to be used to supplement GPU VRAM. I have found that it runs about 4 times slower. (But on the flip side, it lets me do things I wouldn't have been able to with only 16GB VRAM.)
@ToxicPeli · a month ago
Shouldn't sampler 2 start at step 11 and the 3rd at step 21?
@2008spoonman · 28 days ago
Interesting..! 😎
@someniac5364 · a month ago
Love the Breaking Bad reference!!!
@archael18 · a month ago
I'm about to go to bed, but now I'll have trouble sleeping since I want to try that first thing in the morning lol 😆 Thanks a lot, regardless of the insomnia! 💪
@mashedpotatoes7068 · a month ago
No offense, but on principle I hate subscriptions and finding out there is a paywall at the end!
@OlivioSarikas · a month ago
It's a reward for people who support me. I show the full workflow for free in the video.
@chilldowninaninstant · a month ago
All of the nodes are visible and explained; there is no paywall or secret. Don't be lazy: create your own workflows with what you have learned and expand upon them. It's up to you.
@azmodel · a month ago
"No offense, but even though I found your work very useful, and I would definitely benefit from it, I don't see why I should recognize you in any way or form."
@mashedpotatoes7068 · a month ago
@@OlivioSarikas The workflow is much appreciated, but I still hate the system with subscriptions and paywalls! Also, it's not laziness! If I had to manually create every workflow I encounter, it would be a real headache! :)
@devnull_ · a month ago
@@chilldowninaninstant Lol, yes, that is super lazy. With gen AI one doesn't even have to learn to draw or paint for years; simply learn to operate software and understand some concepts to get nice-looking images. And here a YouTuber spoon-feeds people how to do some specific thing, and still some folks complain.
@douchymcdouche169 · a month ago
I give this video a Mmuah! out of 10.
@RandyLittleStudios · a month ago
0-10 in computer calculations is 11 steps, as is 10-20: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 is 11 numbers. So your KSampler should be set to 11; otherwise, you never reach the final step. Unless you don't want to reach the final step. Also, isn't this exactly how the SDXL refiner works?
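Whether there is an off-by-one depends on how the step ranges index the sigma schedule. To my understanding of ComfyUI's advanced sampler, an N-step schedule has N+1 sigma values and each stage denoises the intervals between its start and end indices, so 0-10, 10-20, 20-30 tile 30 steps exactly, with no skipped or doubled step. A fencepost sketch:

```python
# Fencepost check: a 30-step schedule has 31 sigma values; chained start/end
# ranges slice that array. The boundary sigma is shared between consecutive
# stages, so each step interval is denoised exactly once.
total_steps = 30
sigmas = list(range(total_steps + 1))          # stand-in for 31 sigma values

stages = [(0, 10), (10, 20), (20, 30)]
covered = []
for start, end in stages:
    stage_sigmas = sigmas[start:end + 1]       # end - start intervals per stage
    assert len(stage_sigmas) - 1 == end - start
    covered.extend(range(start, end))          # the step indices this stage denoises

print(len(sigmas))                             # 31 fenceposts
print(covered == list(range(total_steps)))     # True: all 30 steps, once each
```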
@2008spoonman · 28 days ago
Gonna try your theory tomorrow. Interesting!
@MichauxJHyatt · a month ago
Love your work and this dope workflow. I'm calling it Flux Cascade in my build. Thx for sharing 😃🤙🏾
@mariocano7263 · a month ago
Could this work with img2img?
@OlivioSarikas · a month ago
Technically yes, but it might change the details because of the 10-step first render. But give it a try.
@ricperry1 · a month ago
@@OlivioSarikas If you start with the 3rd stage (or something similar), maybe this can be used like Magnific? Just a thought. I'm thinking: upscale, inject noise, and denoise from a late stage?
@Sedokun · a month ago
8:22 Thank our sponsors, Rionlard and Toribor
@WallyMahar · a month ago
Okay, you are now the next PatreonTuber I can afford to support! Well done!! Btw, the next time you want to say that, it's called a dirtypull windowslide.
@theh1ve · a month ago
This is actually an impressive workflow solution. Great job!
@thiagomucci9860 · a month ago
@mashedpotatoes7068 · a month ago
Sorry for that, but you really need to check it out!
@Shingo_AI_Art · a month ago
Still slow if you don't have a 4090; they need to make a more accessible model.
@weeliano · a month ago
Amazing workflow! Very easy to follow, and thank you for walking through each node step by step. I managed to replicate your results!
@havemoney · a month ago
They are promising a new SANA model soon. What do you know?
@Dron008 · a month ago
Normal face, as for witches )
@ImmacHn · a month ago
So, iterative upscaling?
@tripleheadedmonkey6613 · a month ago
More like iterative resampling. The upscale is optional.
@timothywells8589 · a month ago
This is insane! I had the same idea and was working on a workflow when this popped up. This gives way better results in a tenth of the time my solution was taking! Thank you so much for sharing this, Olivio.
@ruslanagapasa · a month ago
first!!!
@halfof333 · a month ago
Second lol
@leadlayer · a month ago
Why did you choose to decode the image and use an upscaler model on that, rather than upscale the latent, inject a small amount of noise, and then use that for your 3rd sampling stage?
@mariokotlar303 · a month ago
Pixel upscalers are more powerful than latent upscalers.
@AB-wf8ek · 10 days ago
I'm getting into Flux kind of late, but this is a super helpful trick. Getting fast previews is key. I was using turbo and doing smaller renders to test different settings, but this method is much better. I don't know if people appreciate the algebra on the upscales: 30 steps + upscale vs. 20 steps + upscale + 10 steps. It's the same amount of processing, but this method puts 10 generative steps after the upscale, which is the trick to better upscales in general. Thanks for sharing!
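The step algebra works out, though under a naive steps-times-pixels cost model the 10 post-upscale steps cost more than 10 base-resolution steps. A toy comparison, assuming a net 2x linear upscale (4x pixels); real timings will differ:

```python
def cost(steps, megapixels):
    """Toy cost proxy: sampling cost ~ steps * pixels (ignores attention overhead)."""
    return steps * megapixels

base_mp, upscaled_mp = 1.0, 4.0   # ~1 MP base; 2x linear upscale = 4x the pixels

plain = cost(30, base_mp)                            # 30 steps, then a pure model upscale
split = cost(20, base_mp) + cost(10, upscaled_mp)    # 20 steps, upscale, 10 more steps
print(plain, split)  # 30.0 60.0: the post-upscale steps dominate, but they are
                     # also what adds real detail at the final resolution
```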
@Artazar777 · a month ago
Someone share the workflow; I don't want to spend money on a subscription for the sake of one file )
@EH21UTB · a month ago
Just watch it and build it; it's not hard at all. When you do this, you will learn how to make stuff yourself instead of begging for handouts.
@generichuman_ · a month ago
Ugh, just watch it and build it. Stop being lazy. Or spend 3 dollars... you don't have 3 dollars?
@EH21UTB · a month ago
@@the_one_and_carpool He literally shows you the workflow. By building it yourself, you learn. I can't believe you need someone to give it to you; it's so simple. And you're wrong, he's asking a really fair price for his Patreon; most ask much more. The really wrong thing is that you expect people to give you stuff for free when you don't offer anything except complaints.
@deadlyrobot5179 · a month ago
Nobody can fix Flux with its same-face and CG issue. Back to SDXL.
@KDawg5000 · a month ago
If you use the "Amateur Photography" LoRA, it will create more normal/unique people.
@archael18 · a month ago
Lol, see ya
@cowlevelcrypto2346 · a month ago
@AntonioSorrentini · a month ago
This is pure genius. Thank you very much, Olivio.
@SeryphCherubThrone · a month ago
This works incredibly well. Combined with adjusting the early block weights in a LoRA, one can also achieve some very fine detail at a distance. Thanks, Olivio.
@gammingtoch259 · a month ago
How can I replicate it using the "SamplerCustom" or "SamplerCustomAdvanced"? Thank you very much, bro!
@EH21UTB · a month ago
You need the advanced sampler to be able to start at a middle step. The denoise percentage lets you stop at a position but doesn't let you start in the middle, so you have to use a sampler node that allows you to set the start step. Just use the advanced sampler and add the Flux Guidance node after your text encoder.
@anigroove · a month ago
Love it!
@AlistairKarim · a month ago
Dude, you do deliver. Really impressive, neat trick.
@OlavAlexanderMjelde · a month ago
Cool, I will have to try this!
@NotThatOlivia · a month ago
Nice method (to your madness, as you said yourself a couple of times)!
@springheeledjackofthegurdi2117 · a month ago
How taxing is this on hardware?
@OlivioSarikas · a month ago
Not more than Flux usually. But because you can cancel after the first KSampler if you don't like the result, you actually save a lot of time and power.
@OlivioSarikas · a month ago
@@pingwuTKD Sorry, I don't have a Mac, but you can ask in my Discord.
@chipulaja · a month ago
The workflow shown in the video can be reproduced manually; I've tried it myself. So if you want to learn from scratch, you can follow the workflow as demonstrated in the video. However, if you prefer a simpler option and want to show support, you can check out the provided link. By the way, thank you, Olivio Sarikas.
@TheGanntak · a month ago
Insane detail.
@avant_la_Fin · a month ago
there are 50 versions; their stuff must be for Star Trek or Linux fans