Is SUPER FLUX the Secret to Insane Details?

35,678 views

Olivio Sarikas

1 day ago

SUPER FLUX is an easy method to greatly improve Flux detail. It will also save you a lot of time by making test renders much faster, at only 10 steps.
#### Get my Workflow here: / is-super-flux-to-11432...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoff...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorial...
Support me on Patreon: / sarikas

Comments: 222
@OlivioSarikas • 3 months ago
Get my Workflow here: www.patreon.com/posts/is-super-flux-to-114327248
@Gmlt3000 • 3 months ago
Thanks, but can you post the workflow outside of Patreon? It's banned in some countries...
@LouisGedo • 3 months ago
👋
@SteelRoo • 3 months ago
Feels like just greed if you promote something in a free community then want us to pay. :(
@HiProfileAI • 3 months ago
Nice. The image of the woman still got flux chin and flux plastic skin though. Lol.
@OlivioSarikas • 3 months ago
@@SteelRoo Feels more like you are too lazy to build the stuff I show you for free and want everything handed to you on a silver platter. This is my job.
@ChadGauthier • 3 months ago
okay the zoom into the eye with the human standing there was actually insane.
@ricperry1 • 3 months ago
This is the most useful video you've ever made. And I generally find all of your videos useful. Thanks, Olivio!!
@mikaelsvenson • 3 months ago
Love the flow and super happy to finally give you a small token of support per month!
@skycladsquirrel • 3 months ago
Nice! Great job Olivio!
@IthrielAA • 3 months ago
I found with testing just now that having all three set to BETA gives my final result a fake/waxy skin appearance, but switching the middle step to KARRAS kept the realistic skin look throughout.
@gjewell99 • 3 months ago
Excellent find. For me, the final result almost resembled an illustration. Your recommendation fixed this perfectly.
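For anyone wanting to replicate this thread's fix, here is the reported scheduler combination as a minimal config sketch in Python. The scheduler names match ComfyUI's options, but the dict layout itself is just illustrative, not ComfyUI's API:

```python
# Per-stage schedulers as reported above: all three stages on "beta" can
# give waxy/fake skin; switching the middle stage to "karras" reportedly
# keeps the skin realistic throughout. Illustrative config, not a real API.
STAGE_SCHEDULERS = [
    {"stage": 1, "step_window": (0, 10),  "scheduler": "beta"},
    {"stage": 2, "step_window": (10, 20), "scheduler": "karras"},  # the reported fix
    {"stage": 3, "step_window": (20, 30), "scheduler": "beta"},
]
```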
@alpaykasal2902 • 3 months ago
GENIUS!
@TheMadManTV • 3 months ago
This is crazy. I tried your method and the results came out very well. Really amazing. I think there is almost no need to use additional nodes to add details to the face and hands; it's almost finished as-is. Thank you very much for this secret... Always loved by your Thai fans.
@tripleheadedmonkey420 • 3 months ago
Great video! Thanks for the shoutout. Always happy to help test and improve on things! And the workflow results are looking clean too :D
@baheth3elmy16 • 3 months ago
Thank you very much! I recreated the workflow on my RTX 3050 8GB VRAM, 32GB RAM, and the result was WOW. The whole generation process took 10.40 minutes. I repeated the generation using the same nodes but added a LoRA and changed the model to FP8, and the process took only 8 minutes. FP8 is much faster on my system than the Q8 GGUF.
3 months ago
I really need to test FLUX lol !!! But I'll try your trick with SDXL too. Great video as usual !!!
@bentontramell • 3 months ago
Every time there is a generation of a Caucasian woman in Flux, the same one shows up. She made her appearance in this video as well.😅
@devnull_ • 3 months ago
Not every time, but if you put any keywords similar to beautiful / very beautiful, you are going to get that same generic look. Flux does similar things with lighting and perspectives too.
@devnull_ • 3 months ago
And if you do get that same face even without asking for beautiful face, you can try lowering your Flux guidance value, or use a LoRA.
@MilesBellas • 3 months ago
Overfitted?
@kasberkhof7958 • 3 months ago
Meet Mrs. Flux
@MilesBellas • 3 months ago
@@bentontramell I tried to post a paragraph about overfitting but it was censored!
@runebinder • 3 months ago
Interesting idea. I followed the video along to build it; it seems to give backgrounds a lot more detail and less of the blurred bokeh effect, which I really like. I did get the faint grid pattern I've found with Flux and was surprised by it for such a small upscale, but I added an SDXL pass at the end with a denoise of 0.1, and that fixed the issue and results in better skin detail :)
@victormustin2547 • 3 months ago
Why do all girls generated by Flux have that same chin?
@Elwaves2925 • 3 months ago
I can't say for definite in regards to the chin, but I suspect the females (in particular) were trained off professional models and stock-photo-style models. They all have that cleaned-up, professionally airbrushed look with the base models. You really have to prompt and/or use LoRAs and dedicated checkpoints to get away from that. It may also explain the chin in some way.
@devnull_ • 3 months ago
No, only if you don't know how to prompt properly / tune your generation parameters / don't know how to train a LoRA.
@TomGlenny • 3 months ago
@@Elwaves2925 helpful answer
@victormustin2547 • 3 months ago
@@devnull_ I know how to do all of this, but I wonder why the default always has this; it's just very specific.
@aandarcom • 3 months ago
I hate that default girl's face with Peter Griffin's chin too... :) There are two LoRAs that fix that bony-face problem: "Chin Fixer 2000" and "AntiFlux - Chin". Or you can use any Asian face LoRA, because Asian women rarely have that defect. For example, "Flux - Asian Beauties - By Devildonia".
@FutonGama • 3 months ago
nice, looks way better
@equilibrium964 • 3 months ago
The method works extremely well when it comes to details, but for some reason I get fine horizontal stripes in my image after the last KSampler (Upscaling). Does anyone have any idea what is causing this?
@kivisedo • 3 months ago
I'm getting the same striping, something that hasn't been a problem with other workflows
@equilibrium964 • 3 months ago
@@kivisedo It's the upscaling; Flux gets wacky when you go over 2 megapixels. I solved it by running the image through Ultimate SD Upscale (tiled) for 4 steps with 0.15 denoising.
@tomaslindholm9780 • 3 months ago
@@equilibrium964 Exactly what I saw, and I did the same to fix it. Works fine with a little extra mask blur (16), tile padding (48), and low denoise.
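The fix from this thread, collected into one settings sketch. Parameter names follow the Ultimate SD Upscale custom node; the values are the ones reported above, so treat them as starting points rather than canon:

```python
# Low-denoise tiled pass after the final upscale to remove the
# striping/grid, per the two comments above.
ULTIMATE_SD_UPSCALE_CLEANUP = {
    "steps": 4,          # just enough sampling to blend the pattern away
    "denoise": 0.15,     # low, so existing content is preserved
    "mask_blur": 16,     # extra blur across tile seams
    "tile_padding": 48,  # more context per tile to hide the seams
}
```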
@timothywells8589 • 3 months ago
This is insane! I had the same idea and was working on a workflow when this popped up. This gives way better results in a tenth of the time my solution was taking! Thank you so much for sharing this, Olivio.
@AncientShinrinYoku • 3 months ago
Genius!🎖🎖
@jibcot8541 • 3 months ago
It doesn't seem like this would be very fast, as it has 3/4 samplers, but I do like that the workflow focuses on highest quality. It is similar to Nerdy Rodent's latest, but he also used custom scheduler sigmas to give more control over the generation (like helping deal with the turkey skin).
@tripleheadedmonkey420 • 3 months ago
Because the model doesn't load and unload between samplers, it is equivalent in time to doing 30 steps on one sampler, but it finalizes the overall composition early, so you can see the result of the composition sooner and cancel the render process if you don't like it. This is mainly how it is faster. Plus it just gives better render results comparatively. I'll have to check out Nerdy Rodent's latest workflow too, though.
@OlivioSarikas • 3 months ago
The fast part is having a full preview of the composition after 10 steps that stays the same. So (depending on GPU) after 10 seconds you know whether you will go on or cancel. And then you get much better details if you keep going.
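To make the staging concrete, here is a minimal Python sketch of the pipeline as described in the video and the replies in this thread. The sample(), decode(), encode(), upscale_4x() and rescale() helpers are hypothetical stand-ins for the ComfyUI nodes (KSampler Advanced, VAE Decode/Encode, the 4x upscale model, Image Resize), not a real API:

```python
def super_flux(model, cond, latent):  # latent: e.g. an empty 1304x768 latent
    """Three KSampler Advanced passes sharing one growing step schedule.

    Each sample() call mirrors one KSampler Advanced node: add_noise on,
    its own seed, and a start/end window into the stated total steps.
    """
    # Stage 1: a complete 10-step render (steps 0-10 of 10).
    # This is the fast preview: cancel here if the composition is bad.
    x = sample(model, cond, latent, seed=1,
               total_steps=10, start_at_step=0, end_at_step=10)

    # Stage 2: treat that image as step 10 of a 20-step schedule and
    # continue (10-20 of 20). The noise re-added at the step-10 sigma
    # level is what brings in the extra detail.
    x = sample(model, cond, x, seed=2,
               total_steps=20, start_at_step=10, end_at_step=20)

    # Between stages 2 and 3: decode, 4x model upscale, rescale by 0.5
    # (net 2x), then encode back to a latent.
    x = encode(rescale(upscale_4x(decode(x)), factor=0.5))

    # Stage 3: the final detail pass on the upscaled latent (20-30 of 30),
    # again with its own seed.
    x = sample(model, cond, x, seed=3,
               total_steps=30, start_at_step=20, end_at_step=30)
    return decode(x)
```

The time saving Olivio describes falls out of stage 1 being a complete 10-step image: you see the final composition almost immediately and only pay for stages 2 and 3 on the keepers.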
@davidwootton7355 • 3 months ago
Once I tracked down all the pieces, this works well. One suggestion to make the workflow clearer: double-click on the node titles and change them to something descriptive, like Old Style Advanced KSampler, Stage 1 Advanced KSampler, etc.
@davidwootton7355 • 3 months ago
Second suggestion: once I got this working, I simplified the workflow using the Anything Everywhere and Anything Everywhere3 nodes, along with some filtering by node coloring, to get rid of all the lines in the graph. A matter of opinion, though; to some it might obscure the logic of the workflow.
@archael18 • 3 months ago
I'm about to go to bed but now I'll have trouble sleeping since I want to try that first thing in the morning lol 😆 Thanks a lot, regardless of the insomnia! 💪
@someniac5364 • 3 months ago
Love the Breaking Bad reference!!!
@RoguishlyHandsome • 3 months ago
How does it handle text generation? Obviously when you don't ask for text, it can be forgiven that any text generated is gibberish, but what happens if you prompt for text generation?
@runebinder • 3 months ago
I followed the video to build it, as I'm not subscribed to his Patreon, so I can't say 100% that I've done it all correctly, but text generation on a T-shirt worked fine in my version.
@krebbichris818 • 3 months ago
injecting latent noise gives much better results
@archael18 • 3 months ago
He had all of them enabled for adding noise
@tetsuooshima832 • 3 months ago
Wait, how do you do that? I tried enabling leftover noise, noise injection... It has zero effect x)
@krebbichris818 • 3 months ago
@@tetsuooshima832 Latent Vision on YouTube (creator of IPAdapter, InstantID, PuLID, and many more).
@dermeisterschmidt6367 • 3 months ago
In general, very nice details. But how do you get rid of the banding artifacts?
@AltoidDealer • 3 months ago
Honest question: could this accurately be described as a "2x HR Fix"? Instead of a 30-step gen, it's a 10-step gen, followed by a 10-step "HR fix", followed by another 10-step "HR fix"?
@OlivioSarikas • 3 months ago
Maybe, but keep in mind that this does steps 10-20 out of 20, not just 10 extra steps, and that's important for how Flux works. Because if you did a second 0-10 out of 10, you would simply get the same image again.
@AltoidDealer • 3 months ago
@@OlivioSarikas Thanks for the reply! I've been contributing to Forge, and your video has me thinking that HR Fix has untapped potential; perhaps a "loops" parameter would yield these results. One thing that doesn't make sense to me in this workflow is how much noise is being added to the latent outputs in the next sampling. It's just a true/false value... I would think this should be similar to "denoising strength" in WebUIs, where a lower value adds less noise to the latent output and a higher value adds more. In regards to your reply: if each of those 3 KSampler nodes generated the total steps (from 0 to X) without feeding in a latent input, would the resulting images be drastically different from each other?
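On the add_noise true/false question above, one plausible reading, sketched with hypothetical sigma_at() and randn_like() helpers (this is an assumption about the mechanism, not confirmed ComfyUI internals): in this staged setup the amount of injected noise is implied by the schedule position rather than by a denoise slider.

```python
# How a stage's starting noise can be derived from its schedule position
# (a sketch under the assumption above, not ComfyUI internals):
def stage_start_noise(latent, total_steps, start_at_step, seed):
    sigma = sigma_at(total_steps, start_at_step)  # hypothetical schedule lookup
    return latent + sigma * randn_like(latent, seed=seed)

# Stage 2 starts at step 10 of 20, so sigma is smallish: mild noise on a
# finished image, much like a low "denoising strength" in a WebUI.
# Per Olivio's reply above, a second full 0-10-of-10 pass would instead
# restart from full noise and just regenerate rather than refine.
```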
@tetsuooshima832 • 3 months ago
Hey, it's very interesting, but how do I add a denoise strength to that KSampler Advanced?
@jaysire • 3 months ago
How does the final image resemble the prompt so well, even though CFG is set to 1.0 for each step?
@SeryphCherubThrone • 3 months ago
This works incredibly well. Also, combined with adjusting the early block weights in a LoRA, one can achieve some very fine detail at distance. Thanks Olivio
@sinayagubi8805 • 3 months ago
Wow! awesome!
@quipquick1 • 3 months ago
Hey Oldie, follower from Nepal... Dhantaran
@UnlimitedGRenemy • 3 months ago
The upscaler model you use is not marked as safe on Hugging Face.
@2008spoonman • 3 months ago
Just download the safetensors variant.
@theh1ve • 3 months ago
This is actually an impressive workflow solution. Great job!
@krakenunbound • 3 months ago
Have you tried this with SD 3.5 yet? I've been trying to and having zero luck.
@MichauxJHyatt • 3 months ago
Love your work and this dope workflow. I'm calling it Flux Cascade in my build. Thx for sharing 😃🤙🏾
@WallyMahar • 3 months ago
Okay, you are now the next Patreon YouTuber I can afford to support! Well done!! Btw, the next time you want to say that, it's called a dirtypull windowslide.
@WillFalcon • 3 months ago
The 4x upscaler makes a weird squared pattern in the final image. Do you have any idea how to fix it?
@WillFalcon • 3 months ago
I found out it was a resizer issue, so which resizer do you use? Which node, I mean, not which model.
@hidalgoserra • 3 months ago
Great workflow! It works perfectly. One question: I see in the video that the LoRA is not connected. In case I'd like to use one, where does the LoRA node need to be plugged in? On the clip input of the positive prompt node?
@AntonioSorrentini • 3 months ago
This is pure genius, thank you very much Olivio.
@weeliano • 3 months ago
Amazing workflow! Very easy to follow and thank you for walking through each node step by step. I managed to replicate your results!
@SlickSonicTitan • 28 days ago
Interesting, I'm about to delve into FLUX and will try this. It seems to work well from your examples, except the close-up portrait of the woman. For me, the lips of the SUPER FLUX version are way worse, like she's been in the desert for 48 hours or had a lip swap with a Botox-loving waxworks model.
@freakguitargod • 3 months ago
Hello, thanks for the video. I wanted to ask where you got your upscale model from; I cannot find it in the Comfy model manager. Thanks.
@2008spoonman • 3 months ago
Same here, cannot find it. Used search and Google. Nothing.
@armauploads1034 • 3 months ago
And why exactly is the upscaler inserted in between? Unfortunately, I don't understand that. It would be very nice if someone could explain this to me. 🙂
@skistenl6566 • 3 months ago
Every subsequent image after the first one has a thicker and thicker outline. They look like they were drawn with a thick Sharpie 😅. Do you have any idea how to fix it?
@OlivioSarikas • 3 months ago
Use different seeds per ksampler
@skistenl6566 • 3 months ago
@@OlivioSarikas 😮 I'm surprised you knew the cause from just 2 sentences and even had a solution. Thank you so much for the quick reply 🫰. I'll check it out.
@RDUBTutorial • 29 days ago
Could this have been built using the Flux sampler instead of the sampler you used in this? I couldn't figure out how to make that work.
@cgvfx.ai2CJ • 15 days ago
In my case, I tested it with humans, but the waxy skin issue persists (I switched the intermediate step to Karras, as mentioned by someone below). Are there any other solutions or reference videos you could recommend?
@xibeon • 3 months ago
That's kind of an awesome workflow, thanks. Have you tried this method with SDXL or even SD 1.5? I wonder if the quality would also be improved on older txt2img generators.
@pingwuTKD • 3 months ago
@OlivioSarikas Thank you so much for this!! Any tips on how to speed this up on an M1 MacBook, bro? I followed this example, with the exception of using the safetensors version instead of the GGUF version. It's going rather slowly, though.
@nesimatbab • 3 months ago
I tried this and ran into "Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding" on the third KSampler. It's progressing, but it's taking 30 minutes for 30 steps. I'm on a 3090 24GB, btw.
@lexmirnov • 3 months ago
Did you skip the Image Resize 0.5 node? I had the same on a 3080 Ti 16GB.
@tomaslindholm9780 • 3 months ago
@@lexmirnov @nesimatbab That's probably it. It should be less than 160 seconds, and I am using FP16 on my 3090. Still, there's no doubt there are a lot of pixels to push, considering it's still quite a large final image.
@davidwootton7355 • 3 months ago
One question where I'm not fully understanding something: the original empty latent is 1304x768, but in the Image Resize node the resize width and height are 1024x1536. It seems this would switch the image from landscape to portrait and distort it because of the different aspect ratio, but all images come out about the same, following the aspect ratio of the first image. Why does this work?
@OlivioSarikas • 3 months ago
No, it uses the scale factor of 0.5, not the pixel values, because it is in "rescale" mode.
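Worked through with the numbers from this thread (a quick sanity check, not workflow code):

```python
w, h = 1304, 768                    # empty latent (landscape)
w, h = 4 * w, 4 * h                 # 4x upscale model -> 5216 x 3072
w, h = int(0.5 * w), int(0.5 * h)   # Image Resize, "rescale" mode, factor 0.5
print(w, h)                         # 2608 x 1536: net 2x, aspect ratio kept
```

The pixel fields (1024x1536) are ignored in rescale mode, which is why the landscape aspect ratio survives.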
@MannyGonzalez • 3 months ago
Thanks, Olivio. What about img2img? How should I handle the denoise of the first pass? I typically use 0.65 denoise in a regular single-pass workflow... Cheers!
@DarioToledo • 3 months ago
Basically a 3-pass with noise injection after the first KSampler and an upscale after the second. It gave me a gridded image, because on the first KSampler I set the finish step to 10 but the total steps to 20 (where you set a total of 10), and the second KSampler couldn't converge starting at 10 and finishing at 20 out of a total of 20. This means you are obligated to use a KSampler Advanced and not a KSampler Custom with a SplitSigmas node, because that one only does the first thing I described. How unfortunate. Gonna try this other approach with turbo.
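One way to see why mismatched windows can produce the grid: the stages only chain cleanly if each one's starting sigma matches the state the previous one left behind. A small runnable sketch with a standard Karras-style schedule (illustrative sigma range, not Flux's actual values):

```python
import numpy as np

def karras_sigmas(n, s_min=0.03, s_max=14.6, rho=7.0):
    """Karras et al. (2022) noise schedule; n steps -> n+1 sigmas."""
    ramp = np.linspace(0.0, 1.0, n + 1)
    return (s_max**(1/rho) + ramp * (s_min**(1/rho) - s_max**(1/rho)))**rho

s10 = karras_sigmas(10)  # stage 1 in the video runs steps 0-10 of THIS schedule
s20 = karras_sigmas(20)  # stage 2 runs steps 10-20 of this one

print(s10[-1])  # ~s_min: stage 1 ends on a finished, nearly noise-free image
print(s20[10])  # the sigma level stage 2 re-noises up to before continuing

# In Dario's variant, stage 1 instead stops at step 10 of a 20-step
# schedule, so its output still carries s20[10]-level leftover noise;
# adding noise again on top of that in stage 2 overshoots the schedule,
# which is one plausible reading of why it couldn't converge cleanly.
```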
@KDawg5000 • 3 months ago
Did I miss this, or was there talk of time savings? If so, what is the time comparison of this method vs the normal method? Side note: if you want to do fast iterations in Flux, you can render at 512x512. When you get something you like, just hi-res fix it by 2x to make it 1024x1024. If you set the denoise to 0.35 and the hi-res steps to ~15, it looks almost identical to the 512x512 version. (Note: I'm talking about using it in Forge, but you could just activate an upscale if you did it in ComfyUI.)
@OlivioSarikas • 3 months ago
The time saving is having a full image after 10 steps and then cancelling if you don't like it. 512x512 gives a different composition and fewer details, so you will get a worse image in the end.
@KDawg5000 • 3 months ago
@@OlivioSarikas Ah, ok, gotcha; you just cancel if you don't like it. In my 512x512 testing, I'm not seeing fewer details or composition changes when using hi-res fix in Forge. I can't post examples here, unfortunately.
@97BuckeyeGuy • 3 months ago
So weird. My output looks like absolute garbage, and my workflow is running about 4 times slower than usual. Did Comfy update something today? 😢
@vannoo67 • 3 months ago
Are you running on Windows? I recently discovered that Nvidia drivers for Windows (since Oct 2023) allow system RAM to be used to supplement GPU VRAM. I have found that it then runs about 4 times slower. (But on the flip side, it lets me do things I wouldn't have been able to with only 16GB VRAM.)
@alexshaw5952 • 3 months ago
Hi, I got your workflow from Patreon, but I don't have all the same files (LoRAs, upscalers, etc.). Do you have links for them?
@davidmanas95 • 3 months ago
@OlivioSarikas, can you do a version with a LoRA for Flux? Please!
@OlivioSarikas • 3 months ago
This is for Flux.
@davidmanas95 • 3 months ago
@@OlivioSarikas Yes, I mean your workflow; can you do a version with a LoRA? I don't know if I have to put a LoRA on each KSampler.
@OlivioSarikas • 3 months ago
@@davidmanas95 you mean this? kzbin.info/www/bejne/oJfFop-Jlrd8hqs
@paleopteryx • 3 months ago
The workflow seems quite ingenious. I tried it, but I keep getting bands/stripes on the final render, after the upscale. They are not so obvious until the last step, but after the upscale they are quite annoying. No matter what I did, I couldn't get rid of them completely.
@adriatana • 1 month ago
Would that be the split sigmas technique?
@olternaut • 1 month ago
I've given up on trying to use negative prompts with Flux.
@tomaslindholm9780 • 3 months ago
Seed? Do you run the same seed for all 3 samplers (setting the seed widget to input), or generate them separately, like you do in this workflow? And the way you run it, is there any point in using separate nodes for seed generation?
@OlivioSarikas • 3 months ago
Separate, because the same seed will introduce problems in the image.
@oneanother1 • 3 months ago
How long does this take? Also, can Auto1111 do this as well, with other models? How much VRAM does it use? What about using the Flux ControlNet upscaler? Or SUPIR?
@helveticafreezes5010 • 3 months ago
Where would you recommend inserting the LoRA? Beginning, middle, end, all of the above?
@OlivioSarikas • 3 months ago
Try different ways, but I used it at the first KSampler, so it doesn't interfere with the rest. It might get to be too much if you use it on all three.
@helveticafreezes5010 • 3 months ago
@@OlivioSarikas thank you, I'll try it out!
@OlavAlexanderMjelde • 3 months ago
Cool, I will have to try this!
@midgard9552 • 3 months ago
Is this also available for Forge? :)
@KalleLaski-p8d • 3 months ago
What are the two nodes called after Unet Loader and DualCLIPLoader?
@KonoShunkan • 3 months ago
Do you mean the small blank ones? They are called Reroute. They are a passive node used to extend the output of a node closer to where it is needed, especially where there are a lot of connections to the output. The connection passes through them and they have no effect; their use is optional.
@henrywang4010 • 3 months ago
Would using "Scale to Megapixels" 2.0 be more efficient than going to 4x and then back down to 2x?
@GregorioMuraca • 3 months ago
Do you get better results with 3 different seeds, or can you use the same seed in the three steps?
@OlivioSarikas • 3 months ago
Three different seeds seem better, because the same seed will enhance errors over time.
@GregorioMuraca • 3 months ago
@@OlivioSarikas I really like your channel; I always learn something new from your videos. Thanks for sharing. 🙏
@AlistairKarim • 3 months ago
Dude, you do deliver. Real impressive neat trick.
@ToxicPeli • 3 months ago
Shouldn't sampler 2 start at 11 steps and the 3rd at 21?
@2008spoonman • 3 months ago
Interesting..! 😎
@EH21UTB • 3 months ago
Interesting idea. How long does it take to run with a 4090? Have you tried skipping a step, i.e. starting at 11 instead of 10? Or injecting noise?
@ernstaugust6428 • 3 months ago
RTX 3060 / 12GB: the first two steps take 1 minute each. The last step takes 4.5 minutes, with minor improvements compared to step 2.
@EH21UTB • 3 months ago
@@ernstaugust6428 Thanks for the info. I built a similar WF and it's running in about 2 minutes total on my box with a 4090. I added more steps and a 4th stage, so it gets sampled twice after the upscale.
@lurker668 • 3 months ago
I use NF4: fast and highly detailed. It's not LoRA compatible, but I never use them anyway.
@bgtubber • 3 months ago
Very interesting and creative workflow. I don't use GGUF models, though. Is this trick useful for someone like me who uses FP8 models? I did a couple of quick tests with a fine-tuned model (STOIQO New Reality FLUX) and I didn't see any perceivable difference in the amount of detail or quality of textures doing this in 3 stages instead of doing all steps in 1 stage.
@OlivioSarikas • 3 months ago
You can also use it with the other models, but you need to change the model loader.
@bgtubber • 3 months ago
@@OlivioSarikas I'm afraid you misunderstood my question. Also, I already used the appropriate loader when I did my test with the FP8 model. My point was: if I use a "normal" FP8/FP16 model, is there any benefit to this 3-stage workflow instead of using just 1 KSampler? As I mentioned, I did not notice a difference in image quality doing it in 3 stages vs 1 stage with the FP8 model STOIQO New Reality FLUX.
@Freeak6 • 3 months ago
I'm not sure why, but I don't get the same results as you. After my first pass (10 steps), the image is already very realistic. After the 2nd pass (20 steps) the image has more details, but it is overcooked (too much contrast, weird colors); it starts looking like a painting. After the 3rd pass it's basically the same, so the end result after 30 steps is worse than after 10 steps. I used the same models as you (for the GGUF and the upscale). I'm not sure why that is.
@Freeak6 • 3 months ago
To answer my own comment (maybe it can be helpful to others): I had initially used the same noise_seed for every sampler (which produces this overcooked effect). With a different noise seed for each sampler, it's much better :)
@OlivioSarikas • 3 months ago
Make sure you use a different seed on each image
@Freeak6 • 3 months ago
@@OlivioSarikas Yes, I fixed that, and it works well for characters, but I realized that for scenery, the second pass tends to make the image look 'fake' (compared to the first pass). I'm losing lots of details (textures); the image looks too 'clean', with strong contrasts and saturated colors. I'm trying to add some extra conditioning to the 2nd pass to keep it realistic, and testing different parameters, but no success so far.
@MrCreativewax • 3 months ago
I'm a bit gutted that you have just shown what I had figured out with SDXL and Flux. I do very similar workflows, with 3 passes and uncontrolled image-back-to-latent passes, to do just this, and I consistently get better images for it too.
@Elwaves2925 • 3 months ago
Is your SDXL workflow available anywhere? I'd be curious to try it out if it is.
@theyreatinthecatsndogs • 2 months ago
Why are you a bit gutted? I don't get it... Is it because someone had the same idea as you? But why would that be bad? 🤷‍♂️ It probably happens more than you'd think.
@mashedpotatoes7068 • 3 months ago
No offense, but on principle I hate subscriptions and finding out there is a paywall at the end!
@OlivioSarikas • 3 months ago
It's a reward for people who support me. I show the full workflow for free in the video
@chilldowninaninstant • 3 months ago
All of the nodes are visible and explained; there is no paywall or secret. Don't be lazy: create your own workflows with what you have learned and expand upon them. It's up to you.
@azmodel • 3 months ago
"No offense, but even though I found your work very useful, and I would definitely benefit from it, I don't see why I should recognize you in any way or form."
@mashedpotatoes7068 • 3 months ago
@@OlivioSarikas The workflow is much appreciated, but I still hate the system of subscriptions and paywalls! Also, it's not laziness! If I had to manually create every workflow I encounter, it would be a real headache! :)
@devnull_ • 3 months ago
@@chilldowninaninstant Lol, yes, that is super lazy. With gen AI one doesn't even have to learn to draw or paint for years; you simply learn to operate software and understand some concepts to get nice-looking images. And here a YouTuber spoon-feeds people how to do some specific thing, and still some folks complain.
@alienrenders • 3 months ago
AI image generation is basically a glorified denoiser. I'm wondering if too much noise was removed in the first sampler. I'd be interested to see the results if you did steps 1-10 of a max of 12 (or up to 15), for example, for the first sampler. This way you have an overlap, but you're still not letting the second sampler go to waste as much. The way you have it now, the second sampler is nothing more than a full-image inpaint with a very low denoising strength.
@titerote71 • 3 months ago
With the caveat that the image the entire process is based on was generated in an accelerated manner in 10 steps, which increases the chance of alterations and malformations in the hands, eyes, anatomy, etc., which the later refinement passes will not be able to correct.
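The overlap suggestion above is easy to state in the same stage-window form used elsewhere in this thread; a hypothetical, untested variant:

```python
# alienrenders' overlap idea as stage windows: stage 1 stops short of its
# own schedule's end (10 of 12), so the handoff latent still carries real
# noise and stage 2 has meaningful denoising work left to do.
OVERLAP_STAGES = [
    {"total_steps": 12, "start_at_step": 0,  "end_at_step": 10},
    {"total_steps": 20, "start_at_step": 10, "end_at_step": 20},
    {"total_steps": 30, "start_at_step": 20, "end_at_step": 30},
]
```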
@heckyes • 1 month ago
So simple. I wonder WHY it works, though?
@anigroove • 3 months ago
Love it!
@starblaiz1986 • 3 months ago
At first I was like, "Wait, WHY does this work?" But then I noticed each KSampler has a different seed, and it all clicked: by changing the seed, it triggers the sampler to do different things in each part of the image than it otherwise would have, and that's what introduces the extra detail. That's actually kinda genius! 🤯 I wonder if adding noise between KSamplers would help too? 🤔 Come to that, I wonder what would happen if you had a different KSampler for every single individual step? 🤯
@MilesBellas • 3 months ago
"In German, the word for "windscreen wiper" is "Scheibenwischer." It's a compound word made up of "Scheibe," which means "windscreen" or "windshield," and "Wischer," which means "wiper." So, literally translated, "Scheibenwischer" means "windscreen wiper" or "windshield wiper.""
@douchymcdouche169 • 3 months ago
I give this video a Mmuah! out of 10.
@mariocano7263 • 3 months ago
Could this work with img2img?
@OlivioSarikas • 3 months ago
Technically yes, but it might change the details because of the 10-step first render. But give it a try.
@ricperry1 • 3 months ago
@@OlivioSarikas If you start with the 3rd stage (or something similar), maybe this could be used like Magnific? Just a thought. I'm thinking: upscale, inject noise, and denoise from a late stage?
@SwampySi • 3 months ago
Are you making the workflow available to non-patrons at some point?
@ernstaugust6428 • 3 months ago
I recreated it. It's simple.
@livinagoodlife • 3 months ago
I joined your Patreon, but you don't reply to questions there, it seems.
@ArtiomRomanov • 1 month ago
Great idea, but it doesn't work with consistent characters, unfortunately.
@Sedokun • 3 months ago
8:22 Thank our sponsors, Rionlard and Toribor
@Dron008 • 3 months ago
Normal face as for witches )
@AirwolfPL • 2 months ago
Unfortunately it doesn't work well for everything: taking just 10 steps for the first-stage render with a regular model will frequently result in messed-up results (people with additional limbs and so on). So yeah, while this method really improves details for the same number of steps, it breaks things a lot as well :(
@mashedpotatoes7068 • 3 months ago
Sorry for that but you really need to check it out!
@RandyLittleStudios • 3 months ago
0-10 in computer calculations is 11 steps, as is 10-20: 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 is 11 numbers. So your KSampler should be set to 11; otherwise, you never reach the final step. Unless you don't want to reach the final step. Also, isn't this exactly how the SDXL refiner works?
@2008spoonman • 3 months ago
Gonna try your theory tomorrow. Interesting!
@Shingo_707 • 3 months ago
Still slow if you don't have a 4090; they need to make a more accessible model.
@ImmacHn • 3 months ago
So, iterative upscaling?
@tripleheadedmonkey420 • 3 months ago
More like iterative resampling. Upscale is optional.
@TheGanntak • 3 months ago
Insane detail.
@deadlyrobot5179 • 3 months ago
Nobody can fix Flux with its same-face and CG issues. Back to SDXL.
@KDawg5000 • 3 months ago
If you use the "Amateur Photography" LoRA, it will create more normal/unique people.
@archael18 • 3 months ago
Lol see ya
@AB-wf8ek • 2 months ago
I'm getting into Flux kind of late, but this is a super helpful trick. Getting fast previews is key. I was using turbo and doing smaller renders to test different settings, but this method is much better. I don't know if people appreciate the algebra on the upscales: 30 steps + upscale vs. 20 steps + upscale + 10 steps. It's the same amount of processing, but this method puts 10 generative steps after the upscale, which is the trick to better upscales in general. Thanks for sharing!
@havemoney • 3 months ago
They are promising a new SANA model soon; what do you know about it?
@ruslanagapasa • 3 months ago
first!!!
@halfof333 • 3 months ago
Second lol
@Artazar777 • 3 months ago
Someone share the workflow; I don't want to spend money on a subscription for the sake of one file )
@EH21UTB • 3 months ago
Just watch it and build it; it's not hard at all. When you do, you will learn how to make stuff yourself instead of begging for handouts.
@generichuman_ • 3 months ago
Ugh, just watch it and build it; stop being lazy. Or spend 3 dollars... you don't have 3 dollars?
@EH21UTB • 3 months ago
@@the_one_and_carpool He literally shows you the workflow. By building it yourself, you learn. I can't believe you need someone to give it to you; it's so simple. And you're wrong: he's asking a really fair price for his Patreon; most ask much more. The really wrong thing is that you expect people to give you stuff for free when you don't offer anything except complaints.
@JarppaGuru • 1 month ago
Nope. Yet again a new model comes.