ComfyUI Infinite Upscale - Add details as you upscale your images using the iterative upscale node

93,174 views

Scott Detweiler

10 months ago

Today we will use ComfyUI to upscale Stable Diffusion images to any resolution we want, and even add details along the way using an iterative workflow! This is the first of two upscaling workflows I tend to use, and it is the more flexible of the two. We will be using a custom node pack called "Impact" that contains a ton of useful nodes, so get used to seeing this one! We will also use the Manager node, which we installed in a previous video (note I used "git fetch" versus "git clone", so keep that in mind); I have linked that below as well. We will use these to scale up images we love while adding details and even injecting additional prompts along the way!
This video also goes into some depth on using the provider node and pipes, both of which are quite useful but have some confusing parameters like pk_hook and so on.
If you are confused, check out the SDXL graph basics here: • SDXL ComfyUI Stability...
#stablediffusion #sdxl #comfyui #img2img
Grab some of the custom nodes from civit.ai: civitai.com/tag/comfyui
Here is the manager: civitai.com/models/71980/comf...
Here is the Impact Node Suite: github.com/ltdrdata/ComfyUI-I...
Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)
huggingface.co/stabilityai/st...
The refiner is also available here (OFFICIAL):
huggingface.co/stabilityai/st...
Additional VAE (only needed if you do not plan to use the built-in version)
huggingface.co/stabilityai/sd...
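The core idea of the iterative approach is to reach the target size in several smaller hops, sampling at each intermediate resolution so details can be added along the way. A rough sketch of that size schedule (illustrative only; the Impact pack's Iterative Upscale node computes this internally):

```python
def upscale_schedule(start_w, start_h, factor_per_step, steps):
    """Resolution plan for an iterative upscale: each pass multiplies the
    previous size by the same factor, so new detail can be sampled in at
    every intermediate resolution. Illustrative only; the Impact pack's
    Iterative Upscale node handles the scheduling internally."""
    sizes = [(start_w, start_h)]
    for _ in range(steps):
        w, h = sizes[-1]
        # snap to multiples of 8, as latent dimensions usually require
        sizes.append((round(w * factor_per_step / 8) * 8,
                      round(h * factor_per_step / 8) * 8))
    return sizes

# Three 1.5x hops take a 1024x1024 SDXL image to 3456x3456
print(upscale_schedule(1024, 1024, 1.5, 3))
# → [(1024, 1024), (1536, 1536), (2304, 2304), (3456, 3456)]
```

Each hop is a small enough jump that the sampler can refine the image at its new size instead of inventing artifacts, which is why this beats a single large upscale.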

Comments: 252
@JosephKuligowski 10 months ago
How do you disable use_tiled_vae? EDIT: The reason it appears is that you have to install "BlenderNeko: Tiled Sampling for ComfyUI"; that's what's behind this issue.
@sedetweiler 10 months ago
Oh, good find! I will pin this comment.
@kaziahmed 10 months ago
@@sedetweiler Where do I put the BlenderNeko: Tiled Sampling in my ComfyUI directory?
@sedetweiler 10 months ago
All of those go under custom_nodes. You can use the Manager to install it, and that makes life easier.
@kaziahmed 10 months ago
Thank you! @@sedetweiler
@kaziahmed 10 months ago
Btw, great video! Thank you for such an informative tutorial. @@sedetweiler
@TailspinMedia 5 months ago
this is awesome, i love that you walk through the workflow nodes to explain what is happening.
@AbstraktKardman 8 months ago
Great tutorial! Thanks for taking the time to clear these things up. I have to mention that I happened to be watching your tutorial right before my daily workout routine, which added a whole new unexpected layer of entertainment, mixing academia with athleticism. Thank you again for sharing your knowledge!
@sedetweiler 8 months ago
Great to hear! I have a few more coming that will be mind blowing as well.
@LatentLiminality 10 months ago
Thanks for these helpful tutorials Scott. Comfy isn't the main UI I use but you've made it so much more usable, and definitely something I'm keen to use more!
@sedetweiler 10 months ago
Great to hear! I also use other UIs, but this is the one I use when I want to test out some wild idea.
@AIMusicExperiment 9 months ago
You are a hero! Every time I watch one of your videos, I learn things I would never have guessed. The Impact pack is a huge go-to for me; I also LOVE the efficiency nodes.
@sedetweiler 7 months ago
Thanks for watching!
@tsutsen1412 10 months ago
The best videos on Comfy! Love it, thank you very much!
@sedetweiler 10 months ago
Glad you like them!
@margotpaon 7 months ago
Amazing tutorial, Scott. Thank you very much! I'm learning about Stable Diffusion and ComfyUI, and this class helped me a lot with upscalers. I hope everyone realized that, in addition to adding the purple hair, we can remove some detail with a negative prompt.
@sedetweiler 7 months ago
Glad you enjoyed it!
@Sim00n 9 months ago
You are SIMPLY THE BEST !!! fluent, effortless, snappy, concise, to the point, crystal clear,... you name it, man you are a Godsend !!!!! ⭐🌟 and love the recap at the end of the video, excellent !!!🤩🌟
@sedetweiler 9 months ago
Wow, thank you! Glad you enjoyed it!
@justinwhite2725 9 months ago
I like that the pipe takes inputs rather than just loading the model. I've gotten great results using the CLIP from a different model.
@piersyfy4148 7 months ago
The best ComfyUI tutorial I've come across. Thank you so much mate!
@sedetweiler 7 months ago
Glad it helped!
@AIAngelGallery 10 months ago
Just wow! Thanks for introducing these cool nodes!
@sedetweiler 10 months ago
Glad you like them!
@CyberthonTV 8 months ago
I really like how you step through your tutes, step by step and clear as a bell!
@sedetweiler 8 months ago
Thank you!
@nepobedivititanik 7 months ago
@@sedetweiler Can you please upload the workflow file?
@musicandhappinessbyjo795 10 months ago
This was such an awesome video, and it really shows the power of ComfyUI. Please bring more videos like these; they are very rare on YouTube, and only a few people are actually uploading ComfyUI videos.
@sedetweiler 10 months ago
I will keep them coming!
@Marcus-si7su 5 months ago
Very nice and slow, showing how everything fits together. Really liked watching this.
@sedetweiler 5 months ago
Awesome, thank you!
@reapicus557 2 months ago
Excellent video! I need to get in the habit of using the pipes more often. Also, I had no clue about the iterative upscalers, nor have I really been able to figure out hooks before now. This has helped me a bunch. :)
@hakandurgut 10 months ago
This channel is the only one I have all notifications on, and the only channel I don't fast-forward :) I enjoy every moment of the videos.
@sedetweiler 10 months ago
Wow, thanks! You made my day! Cheers!
@hakandurgut 10 months ago
Hope you get to find more time for more videos.
@sedetweiler 10 months ago
Yup! More are on the way soon!
@Padybu 10 months ago
Just what I needed, Thank you!
@sedetweiler 10 months ago
Glad it helped!
@brandonflores4 5 months ago
Things certainly escalated in this video. Thank you so much; I could not have understood it without you.
@AILifeHacks 10 months ago
great video - very concise explanation and easy to follow
@Puckerization 10 months ago
Excellent tutorial Scott, thank you. I've had the Impact Nodes installed for a week or so but it's really hard to find tutorials on their various functions. I've learned a lot from this video. Please add more Impact Nodes tutorials when you get the chance.
@sedetweiler 10 months ago
Thank you! Yes, there are going to be a lot more coming as I think this is a wonderful pack of custom nodes.
@DurzoBlunts 10 months ago
The Impact pack creator has a YouTube channel where they're uploading examples and eventually tutorials. The channel is 'Dr Lt Data', I believe.
@Puckerization 10 months ago
@@DurzoBlunts Yes, I've seen them. They are all silent movies of someone who knows what they are doing but can't communicate it very well to the rest of us.
@sedetweiler 10 months ago
Yes, I found him a few days ago and it helped with some of the new stuff. I have been using it for a few weeks now, but hopefully my videos will get his pack more noticed.
@nerdaxic 2 months ago
Thank you, this was a super helpful tutorial ✌🏻
@mikerhinos 10 months ago
Didn't know this technique, thanks! It changes the base image quite a lot though, compared to a traditional tiled upscale.
@sedetweiler 10 months ago
I did have my noise pretty high, so I could have controlled that. However, I always like to see what details it adds, so I sort of enjoy this process of exploration.
@hippotizer 4 months ago
extremely useful things to learn from this video!
@sedetweiler 4 months ago
Glad to hear that!
@user-vn8wr9rr2x 6 months ago
Fantastic demo, thank you!
@sedetweiler 6 months ago
Glad you liked it!
@Enricii 10 months ago
Thanks for sharing, there is a huge need to explain custom nodes! Sometimes they don't even have the "automatic" input node to choose from, so it's quite difficult to understand their usage (not speaking about impact pack here). Regarding the topic of the video, I've been experimenting with different upscale methods and nodes, this one included. The outcome, in my opinion, is that Ultimate SD upscale with controlnet tile is the best method (like it is in A1111) :D
@sedetweiler 10 months ago
Yup! I agree, and that video is probably next. However, I wanted to cover some of the concepts in here that might be useful when dealing with the node driven process. It isn't the best upscaler by far, but it does give us another tool in our pocket. Cheers!
@DurzoBlunts 10 months ago
For those with low VRAM: this node eats up VRAM! A usable alternative is the Ultimate SD Upscale custom node, which is not as VRAM-hungry. This iterative node limits me to about 7 steps and a 1.5x upscale, whereas I can do 2.25x or even 2.5x with Ultimate SD Upscale.
@sedetweiler 10 months ago
That is actually the other method I use for upscaling, but I wanted to cover this one as well since there are other strategies I show in here that are not exactly related to the upscaler but are helpful to know overall. Cheers! That video is also coming soon!
@DurzoBlunts 10 months ago
@@sedetweiler I completely agree with your documenting everything and making sure the viewer knows how it works. You're doing a great job for newcomers to SD node-based generation.
@sedetweiler 10 months ago
Thank you!
@ferniclestix 10 months ago
Great tutorial, thanks!
@sedetweiler 10 months ago
Glad you enjoyed it!
@ChielScape 10 months ago
Darn young'uns 'n' their wayfoo models
@sedetweiler 10 months ago
Kids these days. Geez. ;-)
@taakefyrste 7 months ago
I highly appreciate these Comfy walkthrough videos, Scott. Great content! I wonder if the Midjourney engine (model?) will ever be accessible in Stable Diffusion; I find it better than SDXL for the time being. Keep up the great work!
@sedetweiler 7 months ago
No, unfortunately their business model means they cannot release their model the way Stability AI can. However, there are some models that are close, and you can also use Midjourney images you have downloaded in your pipelines to do alterations.
@cclarkk 9 months ago
Thanks for these fantastic videos! They've been incredibly helpful. Can I use this workflow/ComfyUI to generate videos too, be it from a single frame of an existing video or from latent noise?
@Nyarlatha 8 months ago
thank you so much! you guys are amazing.
@sedetweiler 8 months ago
My pleasure!
@sunshineo23 1 month ago
I'm just shocked that after you corrected the starting denoise to 0.3, the change to the image is almost like editing the image by prompt. This is going to change the world for a lot of people.
@PaulFidika 9 months ago
Holy shit this man is a master of ComfyUI. I feel like 'master of ComfyUI' could be a full college course.
@sedetweiler 9 months ago
Hehe, thank you, sir!
@rolarocka 10 months ago
Wow I'ma try this soon 🎉😍, thx 🙏
@sedetweiler 10 months ago
Hope you enjoy
@luiswebdev8292 6 months ago
great tutorial!!
@sedetweiler 6 months ago
Thank you!
@FusionDeveloper 6 months ago
That's cool, I didn't know you could do stuff like this to have it choose a random one: {sunrise|sunset|raining|morning|night|foggy|snowing}
@sedetweiler 6 months ago
Yup! Lots of other prompt tricks in there as well.
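The `{sunrise|sunset|raining|...}` syntax mentioned above picks one option per generation. A minimal sketch of how such wildcard expansion works (a hypothetical helper for illustration; not the actual ComfyUI/Impact implementation):

```python
import random
import re

def expand_wildcards(prompt, rng=None):
    """Replace each {a|b|c} group in a prompt with one randomly chosen
    option. Hypothetical helper illustrating the syntax above, not the
    real node's code."""
    rng = rng or random.Random()
    pattern = re.compile(r"\{([^{}]+)\}")
    # resolve groups one at a time until none remain
    while True:
        m = pattern.search(prompt)
        if m is None:
            return prompt
        choice = rng.choice(m.group(1).split("|"))
        prompt = prompt[:m.start()] + choice + prompt[m.end():]

print(expand_wildcards("a city street at {sunrise|sunset|night}, {foggy|raining}"))
```

Each queued prompt then gets a different combination, which is handy for exploring variations without editing the graph.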
@EranMahalu 5 months ago
Amazing tutorial, thanks! Question: do you think it can work off an input image rather than a prompt?
@adrient104 5 months ago
"Pretty simple graph"… I’m like 😵🍝
@ChielScape 10 months ago
I've been using an iterative upscale method where I basically do what A1111 does with img2img, and I'm getting good results. Rather than upscaling the latent, I upscale the image and then re-encode it to latent between every step. As you mentioned, upscaling latent images does weird things. The first 2x upscale step uses 0.40 denoise, while the second uses 0.20. The Impact nodes do seem useful; I've been looking for a way to concat prompts.
@sedetweiler 10 months ago
I think there are so many ways to bend this, and I love that you are coming at it from another angle but finding some of the same things. Keep the ideas coming!
@nuppanuppa 8 months ago
How do you do it at every step?
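The image-space loop described in this thread (upscale the decoded image, re-encode to latent, then denoise lightly, with the denoise dropping each pass) can be sketched like this. All the callables are stand-ins for the corresponding ComfyUI nodes (image upscale, VAE encode, KSampler, VAE decode), not a real API:

```python
def iterative_upscale(image, scale_per_step, denoise_schedule,
                      upscale_image, encode, sample, decode):
    """Image-space iterative upscale: enlarge the decoded image,
    re-encode it to latent, then run a low-denoise sampling pass.
    One pass per entry in denoise_schedule (e.g. [0.40, 0.20])."""
    for denoise in denoise_schedule:
        image = upscale_image(image, scale_per_step)  # pixel-space resize
        latent = encode(image)                        # VAE encode
        latent = sample(latent, denoise=denoise)      # light img2img pass
        image = decode(latent)                        # VAE decode
    return image
```

Lowering the denoise on later passes keeps the composition stable while still letting the sampler sharpen detail at the larger size.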
@michaelbayes802 10 months ago
Hi Scott, thanks for your great videos! Keep 'em coming. One question though: what is the main advantage of using this upscale process? Is it quality, or is it quicker? I'm not sure I understood, after watching the video, why I should use this. Thanks.
@sedetweiler 10 months ago
This was just an example of an upscaler workflow, and there are many. I did this one first to show some of the more interesting aspects you can use, like late prompt injection, provider nodes, and other little aspects. It probably isn't the best upscaler, but much better than the default node. Another one is coming soon that is my favorite one but isn't anywhere near as interesting to setup.
@abdelhakkhalil7684 1 month ago
Nice Workflow. So, basically, this is the HiRes Fix in Automatic1111, but more advanced and customizable.
@JamesPound 10 months ago
Thanks for sharing your process; it's great to see innovation. I'm not sure ComfyUI is the best choice for this workflow, though: each step is getting less detailed overall. It might be better to have more control over what happens between each step (à la Auto1111).
@sedetweiler 10 months ago
You can do that on here as well, I just wanted to show one method. Another method video is coming soon that you might prefer, or even mix them together!
@samwalker4442 2 months ago
It does make me laugh when your OCD is triggered... you set mine off as well!!
@RiiahTV 9 months ago
Thank you so much, man!
@sedetweiler 9 months ago
No problem!
@Shirakawa2007 10 months ago
Great video! I'm slowly learning ComfyUI, coming from Automatic1111 (for easy use of SDXL with my 6 GB GPU). One thing I'd like to ask: what would be the equivalent of the upscaling you get in the "Extras" tab in Automatic1111? Whenever I try to upscale to something bigger than 2048x2048 I get VRAM issues, when in A1111 I can go to 4x that value in the Extras tab. Any help will be appreciated!
@sedetweiler 10 months ago
Yes, there are methods for that and I will be covering another upscale method soon.
@Lorentz_Factor 5 months ago
With some SDXL models I had this working pretty well, but with other models it seems to fade to a gray, hazy look. Do you have any idea why this might be happening? I've tried adjusting the CFG, but that doesn't seem to have much effect; it still comes out faded and fried. Also, did you mean to title the video iterative or infinite?
@user-qv1in8hq5f 4 months ago
Hello, thanks so much for your tutorial. I'm wondering if I can add a 4th upscaler, or even a 5th? I can't figure out how to get past the 3rd. Do you have any tips, please? Thanks again, Scott.
@ThedjAwesome 5 months ago
Your videos are really helpful, thanks for making them. After I ran the process, I received a 3rd image without purple hair. The upscaler I fed to PixelKSampleUpscalerProvider, 4x_NMKD-Superscale-SP_178000_G.pth, gave me a result without purple hair that is blurry. I may try to track down the upscaler you used. Anyway, how do I re-run the whole process? If I click Queue Prompt to redo the whole process, it does nothing.
@justinwhite2725 9 months ago
Average is pretty straightforward (it literally just takes the matrices and averages all the numbers), but what is the difference between concat and combine? And how do they interact with ControlNets before them? I've gotten strange results with either, depending on which connection I add things to, and I have yet to see any documentation that really clarifies the difference. My understanding is that combine is basically the :: operator in Midjourney, which makes me wonder what concat does. It can't be adding words to the end of the prompt, because it's post-encode. It probably appends the matrix to the end of the previous one, but what does that actually do in terms of how it's processed?
@TR-707 6 months ago
after i hooked up my own upscale models WHEWWWW this is insane
@sedetweiler 6 months ago
Woot!
@TR-707 6 months ago
@@sedetweiler It's very funky to edit the image with sharpening and higher contrast to crisp it up before the upscaling, which usually blands them out.
@johnmcaleer6917 10 months ago
I've downloaded some 'monster workflows' from some very clever users but can't see much value in them compared to your lovely simple workflows. I'm not sure Comfy needs to be as complicated as some graphs make it; your vids are so accessible, keep 'em coming. A nice simple inpainting one would be good if you are in need of suggestions 😉
@sedetweiler 10 months ago
Glad you like them! Inpainting is coming soon! I am actually doing that live on Discord today at the official Stability.ai Thursday broadcast.
@user-ot6mg1tu3e 8 months ago
Very well explained, I like it. Question: do you think it's possible to preserve the pink t-shirt when you change the hair color? I wonder if there is a way to preserve an element's color (I tried cutoff, but the result wasn't perfect).
@sedetweiler 8 months ago
There are a lot of ways, but the graph would get complicated. However, I think we are going there soon as a lot of the basics are covered now.
@shallowandpedantic2320 10 months ago
Thank you 👍
@sedetweiler 10 months ago
You are welcome
@RickHenderson 6 months ago
Great work, Scott. Do you have a workflow for taking an image that's already generated and then upscaling it? Thanks.
@sedetweiler 6 months ago
Yes, I have done that often (even in the live stream today). I am not sure I have a video on that specifically, but I do it all the time.
@andresz1606 6 months ago
Could you explain why you have set such a high CFG in the HookProvider and a low CFG in the UpscalerProvider? The default values are the other way round. I can't believe how those values worked fine in your case because they failed miserably in my workflow.
@uk3dcom 10 months ago
So many nuggets here. 🙂
@sedetweiler 10 months ago
Thank you!
@PatrickIsbendjian 8 months ago
@sedetweiler Thanks for a great tutorial! I tried your workflow step by step and had no issues. However, I found that the quality of the latent-upscale result was not up to my expectations, with some jaggy lines. I decided to experiment a bit and tried an Iterative Upscale (Image). It turns out that it needs the same Provider and basically produces the same results as the other upscaler; it seems the only difference is that it takes the VAE as input and outputs an image, thus saving the VAEDecode node. Now the interesting part: if I plug 4x_UltraSharp into the upscaler_model input, I get much better results (but it slows down the generation). As far as I know, the model is supposed to work only on images, not on latents, yet everything goes smoothly whether the Iterative Upscale is Latent or Image. It seems that the Provider does Decode/Encode as necessary. Am I correct, or am I missing something?
@--signald 10 months ago
Hey Scott, another good tutorial. I did a test at the mid-point of this tut, and my first upscale from 704x448 to 1880x1200 (close enough to 1920x1080 to work with) took 19 minutes! Apples to oranges, but using ControlNet Tile in A1111 took 2.5 minutes. I'm working on a series of Deforum animations that mean upscaling over 40,000 frames. I turned to this tut in the hope that Comfy would come to the 2.5-minute rescue. Any chance you've got a trick in your pocket for us animators? Because this won't cut it. (Oh, and I haven't seen a Load Batch node. Is there one?)
@sedetweiler 10 months ago
I am sure once we have controlnet we can get the times closer together.
@ysy69 7 months ago
Hi Scott, would you say that the iterative upscaling made possible by ComfyUI is now part of "best practices" for upscaling (SD1.5 and SDXL)?
@sedetweiler 7 months ago
I sure think so. It takes any image and adds those details everyone seems to want.
@user-yu4ix4qs9q 9 months ago
Thanks for the clear explanations. I tried to insert a ControlNet (the latest Canny SDXL 256) into the conditioning pipeline, but image generation fails after the first sampler. It seems this workflow is not compatible with ControlNet. Is there a solution to avoid this?
@sedetweiler 9 months ago
Hmmm, it should work. I will have to give it a try.
@patagonia4kvideodrone91 9 months ago
It would be very useful if you could share some of those images with us so we can obtain the blueprint; that would simplify putting it into practice, since the Manager lets us install the missing nodes. The video is very good and the process is very clear. I had been testing several upscalers (x4, x8) and was able to generate photos of up to 16000x16000, 500 MB each. The good thing about this technique is that better details can be applied as the image is enlarged.
@godorox 1 month ago
Can I use '4x-UltraSharp' instead of the SwinIR upscaler model?
@MrMorvar 4 months ago
Is this workflow still valid? Compared to Automatic1111's img2img tab upscaling with just Euler a and a denoise of 0.2-0.3, I'm clearly losing detailed lines when working on anime pictures, even if I go up really slowly over 5 steps.
@TransformXRED 10 months ago
Hey Scott... do you have a video about best practices for managing all the workflow config files? I always end up with a proper node workflow, but I test so many new things that it gets messy; then I have 10 versions of something and always start over in the end, lol. I'm sure there are ways to stay organized.
@sedetweiler 10 months ago
I tend to keep my favorite ones in a folder on my desktop. I also put the one from today in the Posts area of the channel for Sponsors, and I would probably rename that one to something like "Upscaler Base" and remove a few of the testing nodes. I do have another video coming soon that might actually help you a ton in this area, so perhaps I will push that one up near the top. Cheers!
@TransformXRED 10 months ago
@@sedetweiler Thanks for your reply! I just recreated what you did in the video and tested with the upscale model "Siax". It's pretty interesting: since the upscale model is super sharp, the 3x added some grain to the final image :D Thanks for these videos btw, they are great.
@sedetweiler 10 months ago
I would keep playing with the noise, sampler, scheduler, and all that until you get something you love. It can change a ton by just tweaking values.
@Potts2k8 2 days ago
Sorry, I'm a noob, but is there a way to use this to upscale already existing images? Everything I've tried either gives me errors or takes ages with no change to the original image; hell, it even makes it worse most times.
@ownimage 9 months ago
Thanks for these, just what I was looking for... could you share the JSON for the final flow?
@sedetweiler 9 months ago
It is in the Posts area of the page and is visible to channel sponsors.
@pn4960 10 months ago
I can use SDXL with my 6 GB graphics card in ComfyUI! Isn't it amazing?
@sedetweiler 10 months ago
I have a 4 GB laptop that can also run it... slow for sure, but the fact that it works is pretty amazing! Cheers!
@CBDuRietz 7 months ago
Pretty new to ComfyUI and working through the tutorials right now. One question I have: in the "KSampler (pipe)", is the VAE output channel the same VAE that is also passed out in the BASIC_PIPE output channel, or is it a VAE modified by the KSampler node?
@sedetweiler 7 months ago
It's the same all the way across the graph. Typically we don't mess with the VAE; sometimes we will use another one, but we would specify that and it would be obvious. No steps should be hidden.
@CBDuRietz 7 months ago
Thanks. I kind of suspected that, but was a little confused by the uppercase/lowercase naming convention and was trying to understand it. Do you know the rationale? The input side is mostly lowercase, while the output side is more mixed, usually uppercase but sometimes lowercase. Perhaps I'm just overthinking it, being a software developer by trade. 🙂
@craiggrella 4 months ago
How do you show the steps through the upscaler? Is that a setting in the Manager or something else?
@sedetweiler 4 months ago
Yes, you can enable TAESD slow previews and they will show up.
@erdbeerbus 9 months ago
Crazy! Is it possible to load a stack of images, like a batch task, into a ComfyUI workflow to change a sequence of images this way? Thanks in advance.
@___x__x_r___xa__x_____f______ 7 months ago
Hi Scott, I've got a use-case question. I need to upscale an image that I generated in SDXL using my own trained LoRA for a highly detailed photorealistic portrait of a person. I am noticing that the skin, which is what I mostly want latent-generated, is very good with my LoRA. Is there a way to inject those LoRA weights through the model in the UpscalerProvider pipe? The only problem is I generated the image in Auto1111, so the weight interpretation is a little different. But in principle, do you think there is a workflow to enhance via iterative upscaling while piping a LoRA into it, maybe using weight blocks? Any thoughts? Would love to get this right.
@sedetweiler 7 months ago
I might have to give you an example for this, but you can easily do it. Don't get into the mindset that you can only use one checkpoint. You can always load others and use them in different places in the workflows as long as they are compatible.
@___x__x_r___xa__x_____f______ 7 months ago
OK, I will pursue this further. I managed to finish my job at very high resolution, but I wasn't easily able to control that LoRA skin I wanted; I got stuck testing without seeing anything really effective come from it. Anyway, a small use case, nothing to make a big fuss about. Thanks.
@uk3dcom 10 months ago
Hi Scott, I'm following along with your tutorial, but the PixelKSampleUpscalerProviderPipe node is asking for a use_tiled_vae input that doesn't show on your version of the node. What to do?
@sedetweiler 10 months ago
See the pinned comment. You are probably missing a component. Cheers!
@random11 6 months ago
Is there a similar workflow for Automatic1111?
@carlajimenez1482 10 months ago
How can I run multiple prompts at the same time, like the prompt matrix in Automatic1111?
@sedetweiler 10 months ago
You can add any number of connections to the model and it will just complete them. It does the same thing, if I understand your requirement.
@dominikstolfa4579 1 month ago
I would like to use this method to add details to an already existing non-AI picture. Is that possible?
@adisatrio3871 7 months ago
How do you deal with that color bleeding? The purple is not only on the hair but also on a lot of other things.
@Artem-ch5bh 6 months ago
For me, under use_tiled_vae there is a tile_size input and I don't know what to put in it; as a result, my second image is completely zoomed in and you can't see the actual character. Can someone help?
@GlassHexagonalColumbus 9 months ago
Whenever I paste with the Shift key, it actually doubles the pasted object. Edited: checked on my second device with a different OS; same problem.
@monkeymediapl 6 months ago
Hi. Awesome tutorial, but in my case something goes wrong. If I put the denoise in PixelKSampleUpscalerProviderPipe under 1.0 (e.g. 0.3 like you), I get low-quality output. If I leave it at 1.0, everything is super crisp, but a lot different from the first generated image. Do you have any clue how to make this work?
@sedetweiler 6 months ago
Are you sure it was the denoise and not another setting? I did that in the video and caught myself later.
@monkeymediapl 6 months ago
@@sedetweiler I'm afraid it is the denoise...
@greypsyche5255 2 months ago
You can use a pixel upscale model in latent space? How is that possible?
@hdkr4ik 3 months ago
Could you share the .json settings for this case?
@leonardhinkelmann5629 6 months ago
I tried this, but for some reason the PixelKSampleUpscalerProviderPipe has a tile_size even when use_tiled_vae is disabled, and it returns nothing useful. Did they make a mistake in an update of the custom node, or what am I missing?
@sedetweiler 6 months ago
This is a bit dated, so things might have changed. All of these nodes get updated several times a day, so be 100% sure both Comfy and all of the custom nodes are updated.
@StargateMax 6 months ago
Is it outdated by now? The workflow setup does not work. I installed all the required stuff, but it keeps throwing this error when loading the graph: "The following node types were not found: MilehighStyler". There doesn't seem to be any fix for this as of 12/9/2023.
@ThedjAwesome 5 months ago
Worked fine for me today.
@chrisfreilich 10 months ago
Great, if a bit overwhelming, tutorial! One thing is different for me: the PixelKSampleUpscalerProviderPipe has an input called use_tiled_vae that is required in order to work. I couldn't find a simple BOOLEAN node, so I had to kludge together a few other nodes to create a FALSE for that input. Any idea why the difference, and maybe an easy way to input a BOOL?
@sedetweiler 10 months ago
I think you might have the wrong node, as some are quite similar in name.
@drltdata 10 months ago
Update your ComfyUI to the latest version.
@sedetweiler 10 months ago
You might also use the Manager and install "BlenderNeko: Tiled Sampling".
@dcpuzzles2990 5 months ago
I know it's late, but this may be useful if someone else has the same issue: if you right-click on the node, you should get the option to convert any input to a widget, which puts it into the properties list. In this case it would add the input as a switch that is disabled by default, but you can enable it in the properties section.
@user-mp2rz5ro1e 7 months ago
How do you add custom models if running Comfy through Pinokio?
@sedetweiler 7 months ago
I have not used Pinokio at any depth to be able to help you here. Sorry.
@kryless7775 9 months ago
It does not work for me, even with BlenderNeko; there is this use_tiled_vae option and I don't know what to do with it...
@artplaenan445 6 months ago
Hello, and thanks for sharing this! For some reason, at each Iterative Upscale node my generation becomes brighter and brighter. Do you have any idea why, please?
@explorer945
@explorer945 10 ай бұрын
Awesome video. Can you do a video where we can do image to image masking and inpainting with just prompts and nodes (no manual masking). Is that possible? Similar to stability AI API
@sedetweiler
@sedetweiler 10 ай бұрын
Yes I can. I like workflows that don't tend to make assumptions on locations of things.
@explorer945
@explorer945 10 ай бұрын
@@sedetweiler can you do a video on it plz🙏
@Skettalee
@Skettalee 7 ай бұрын
I was hoping you would get to the point where you change the starting image to an upload dialog (or whatever you call it). I have my own pictures I took as a kid and would love to see how it would try to upscale them. Could you show us how to add your own image at the beginning to upscale?
@sedetweiler
@sedetweiler 7 ай бұрын
Yes, I can do this in a video. I have done things like that in live-streams but not in an official video yet.
@Skettalee
@Skettalee 7 ай бұрын
@@sedetweiler I would love to see it and learn it. ComfyUI is still so confusing to me; I feel like I'm just learning it with trial and error and a little search-and-find. The thing is, with all the fresh technology in generative AI it changes so fast, and some of the tutorials I'm finding are out of date. I'll subscribe and hope to see you live soon! You know what I'm going to ask you!
@jamesclow108
@jamesclow108 9 ай бұрын
Darn, my PixelKSampleUpscalerProviderPipe has another pin called use_tiled_vae above the 'basic pipe' pin. Not sure where I went wrong there? Anyone know where I should plug this into? Just seen that pinned comment about BlenderNeko, will give that a go. Updated Comfy, updated Impact, restarted Comfy, removed the node, added the node, same issue. Hmm. I found the issue to be the SDXL VAE that I had fed in at the beginning. I just connected the VAE from Load Checkpoint instead and the problem is gone!
@sedetweiler
@sedetweiler 9 ай бұрын
See the pinned comment. There is a component it needed but it wasn't documented.
@nirsarkar
@nirsarkar 3 ай бұрын
You would have been a great professor, Scott. Thank god you are not! :) Thanks for this series.
@DealingWithAB
@DealingWithAB 9 ай бұрын
It seems like way too many steps for something that should be simple. Is there any way to use img2img like in A1111 to make this easier/faster? I've stayed away from ComfyUI since, to me, it complicates everything instead of making it easier on the person.
@sedetweiler
@sedetweiler 9 ай бұрын
The goal of comfy is to really let you modify and understand the process. It won't be for everyone. Some people just want to drive a car, while others like to get in there and understand how it works and change it to perhaps make something better. It's not going to be easier, but it will actually teach you how it works. So, if you want to understand the process, stick with it. But, if you just want to make pretty pictures fast, this probably isn't going to be your thing. Either one is a good choice.
@GuitarWithMe100
@GuitarWithMe100 10 ай бұрын
On my PixelKSampleUpscalerProviderPipe there is a boolean option use_tiled_vae; how do I check this?
@sedetweiler
@sedetweiler 10 ай бұрын
Just click it and it will enable.
@drltdata
@drltdata 10 ай бұрын
Update your ComfyUI to latest version.
@sedetweiler
@sedetweiler 10 ай бұрын
See the pinned comment. You are probably missing the tiling node like I was.
@DrysimpleTon995
@DrysimpleTon995 9 ай бұрын
My brain is exploding!
@vilainm99
@vilainm99 9 ай бұрын
A bit late to the show, but... Impact nodes do seem to install, but when I do Add Node, it is not in the dropdown list after several restarts of ComfyUI (I successfully installed the custom nodes from the other video tutorials). Anybody have any idea what's going on? How can I debug the installation?
@vilainm99
@vilainm99 9 ай бұрын
This happens via the Manager and via git clone....
@PatrickIsbendjian
@PatrickIsbendjian 8 ай бұрын
I suggest you look at what is displayed in the console when ComfyUI starts up. It displays a message for each of the custom_nodes packages and will certainly throw some error message if something is wrong.
@8klofi
@8klofi 7 ай бұрын
It would be a great help if you could provide links to the models, as I think many of us here try to duplicate what you have, and at least for me it's a bit difficult to see the model name in the nodes, as it's quite small.
@sedetweiler
@sedetweiler 7 ай бұрын
This should work with any model, that part really isn't that important.
@brandonyork9924
@brandonyork9924 5 ай бұрын
Why is comfyui so slow on my computer?
@Arewethereyet69
@Arewethereyet69 10 ай бұрын
Is it just me, or is SDXL worse at hands? Is there any way to solve this? Would you mind doing a video?
@sedetweiler
@sedetweiler 10 ай бұрын
They are better by far than 1.5, but still not perfect. However, they are always getting better!
@blacktilebluewall
@blacktilebluewall 3 ай бұрын
Hey! Can you provide me the soapmix model, if you still have it? I can't find it anywhere on Civitai.
@sedetweiler
@sedetweiler 2 ай бұрын
It isn't that great. I don't even have it any longer. Sorry.
@teodosiytanev5762
@teodosiytanev5762 10 ай бұрын
Nice tutorial, but how do I upscale a pre-existing image that isn't already ai generated?
@sedetweiler
@sedetweiler 10 ай бұрын
Just use the image loader and VAEencode it to a latent and keep the workflow the same. There is nothing special about the AI image compared to any other. Just getting it into the workflow using the loader is the only extra step. Cheers!
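For anyone wiring this up, the swap Scott describes (replacing the text-to-image front end with a Load Image node feeding a VAE Encode node) can be sketched in ComfyUI's API-format JSON. The node class names below match ComfyUI's built-in nodes; the node IDs and filenames are placeholders you would adapt to your own graph:

```python
# Minimal ComfyUI API-format graph fragment: load an existing image and
# VAE-encode it to a latent that can feed the iterative upscale chain.
# Node IDs ("1", "2", "3") and file names are placeholders.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "my_photo.png"}},   # file placed in ComfyUI/input
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0],          # IMAGE output of LoadImage
                     "vae": ["1", 2]}},           # VAE output of the checkpoint
}
# The LATENT output ["3", 0] then takes the place of the Empty Latent Image
# (or first KSampler latent) in the upscale workflow.
```

Everything downstream of the latent stays exactly as shown in the video.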
@teodosiytanev5762
@teodosiytanev5762 9 ай бұрын
@@sedetweiler Thanks, but I'm getting "Error occurred when executing PixelTiledKSampleUpscalerProviderPipe: object of type 'NoneType' has no len()" regardless of which upscaler provider I'm using.