The Corridor guys used the i2i Alternative test to make their first RPS animation. It's a really powerful tool. Glad it made it to comfy in a better implementation.
@mich_elle_x11 ай бұрын
They used EbSynth as well.
@wikidude11 ай бұрын
@@mich_elle_x not for the first one iirc. They did use davinci deflicker though
@AlistairKarim11 ай бұрын
Dude, you're awesome. Somehow, I learn about all the most exciting stuff from your channel first.
@NerdyRodent11 ай бұрын
Glad you enjoy the stuffs 😀
@ttul10 ай бұрын
Give the Iterative Mixing Sampler a go too. It’s a more faithful unsampler, using the actual LDM algorithm to generate the noised sequence (see the Batch Unsampler node).
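For anyone curious what a Batch Unsampler-style node computes: the DDPM forward process has a closed form, so the whole noised sequence can be generated directly from the clean latent without iterating the model. A minimal numpy sketch of that idea (function name and schedule values are illustrative, not the node's actual code):

```python
import numpy as np

def noised_sequence(x0, num_steps=10, beta_start=1e-4, beta_end=0.02, seed=0):
    """Closed-form DDPM forward process:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    Returns the latent at each of num_steps noise levels."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, num_steps)   # linear beta schedule
    alpha_bars = np.cumprod(1.0 - betas)                   # cumulative product abar_t
    eps = rng.standard_normal(x0.shape)                    # one shared Gaussian draw
    return [np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps for ab in alpha_bars]

# Toy latent the size of a small SD latent tensor
latents = noised_sequence(np.zeros((4, 8, 8)))
```

Later entries in the list are progressively noisier versions of the same latent, which is the "noised sequence" an iterative-mixing approach blends against.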
@swannschilling47411 ай бұрын
Sweet!! I completely forgot that there had been something like that already a while ago...this one seems to be a lot more powerful! 😊
@HasanAslan11 ай бұрын
Worked with LCM at 4 steps, 8 steps in the final one. Good stuff.
@NerdyRodent11 ай бұрын
Nice! Was going to test that too 😀
@sneedtube11 ай бұрын
Another masterpiece dropped boys 💥
@Copperpot510 ай бұрын
As shit as I am w/ Comfy, and as resistant as I've been to using it, your vids are the only ones I actively go looking for if I need to figure something out with it. Sooo even though I'll likely not find myself comfy w/ Comfy, I figure you earned my $5 a month (adding you on Patreon after commenting)... you're always positive in the comments and break these things down pretty well w/o making unnecessarily long videos. Keep at it. -J
@NerdyRodent10 ай бұрын
Thanks! I was resistant to comfy to start with as well, but now it does seem comfy 😆
@bzikarius10 ай бұрын
Okay, got this, thanks - it works pretty well, though the model impacts the style.
@AIWarper10 ай бұрын
Pro tip: you can create this node yourself using the SamplerCustom node that is native to Comfy. It also allows for more customization.
@KINGLIFERISM11 ай бұрын
Was working on exactly this. This is why this rodent... is the man. Now about the work... a lot of contrast needs to be removed.
@Afr0man4peace11 ай бұрын
Hi, thanks for this video. I will test this with my new realism SDXL models
@ДиДи-м3ю11 ай бұрын
It would be interesting to see a similar workflow for SDXL models
@NerdyRodent11 ай бұрын
You can change the models to SDXL ones and you'll be good to go :)
@jeffbull878111 ай бұрын
@@NerdyRodent Weird - when I run this with SDXL it generates total garbage, but it works fine with 1.5... I wonder why??
@djivanoff1311 ай бұрын
Why do I get a smaller image at the output? How can I increase it?
@bobbyboe10 ай бұрын
Excellent, thanks! Finally someone who uses notes to document information inside the workflow - very helpful! I didn't know about these notes, although I was already looking around for something like that - perfect! I am using SDXL models in a similar workflow, and it can help to "melt" a cut-out figure into a new environment... unsampling the "rough" composite automatically made by blending 2 images, then resampling - using "collage of a..." as the initial prompt and "photo of a..." when resampling. If you have any alternative ideas for my "melting figure into new background" process, I am always interested, as I try to optimize it. The idea is to change the figure while maintaining the same environment, but have AI integrate the figure seamlessly into the background.
@dkamhaji11 ай бұрын
Love it! Can this be applied to AnimDiff/ip adapter workflows?
@geoffphillips5293Ай бұрын
Just to save people from spending time on this: I couldn't get any decent results with SDXL. I'd always get completely different images out to what went in. But that's with the Unsampler; with other methods that are out there, things aren't so bad, and the ControlNet stuff is helpful.
@niroknox11 күн бұрын
Dude, I learn so much from your videos - I have already created 2 music videos with the stuff I learned from you. Thank you so much for doing this! I gotta know - is this really your voice and accent?
@NerdyRodent10 күн бұрын
No, I’m not actually British but am in fact a space rodent from Alpha Centauri!
@niroknox10 күн бұрын
@ if this is AI, I have to know which model you used for this voice/accent
@niroknox10 күн бұрын
Also, is there a way to commission you (budget is there)? Let me know if you do consulting - I would love to chat.
@ruuuuudooooolph11 ай бұрын
I don't understand why I am getting weird colors and the image looks incomplete. I am not seeing any errors either. Can someone share the original workflow on GitHub?
@FrancisHerding11 ай бұрын
How does the 1st step work, where you are not using the Output Control section? What are the pos & neg inputs for the KSampler if the Output Control section has been bypassed? It still works in your video even though it seems like it shouldn't.
@attashemk898511 ай бұрын
Thanks a lot for the reminder - it's hard to remember all the SD possibilities :)
@NerdyRodent11 ай бұрын
I know right… so many things to test and try!
@dkamhaji11 ай бұрын
Also - where do the CN CLIPs get their inputs from? I'm trying to recreate this without the Everywhere nodes, which cause conflicts on my setup :) I'm getting an error at the KSampler stage when the ControlNet modules are turned on, and I have both pos/neg prompts receiving from the model node's Clip_Out. Is that wrong? I'm not sure what else could be erroring. If I connect the first prompts to the KSampler conditioning instead, it does work. Something about the ControlNet prompts..
@elmyohipohia93611 ай бұрын
same
@HestoySeghuro11 ай бұрын
Very cool. Will use this for the enhancing workflow I'm developing... does it work with XL?
@ceegeevibes13356 ай бұрын
cool. love unconventional things like this! thanks
@JanKowalski-ie6nw11 ай бұрын
Hello, could you make a video about DreamCraft3D, an image-to-3D method that came out a few days ago?
@AC-zv3fx11 ай бұрын
Wow! Since when does it exist? Is it something new? It seems so effective! Great video!
@NerdyRodent11 ай бұрын
It’s been out a while, but I had the pack installed for a different node and only just started playing with this specific one as I’ve been playing with noise a lot recently
@boricuapabaiartist11 ай бұрын
I laughed uncontrollably after the last image generation at the end of the video. Was that still using epic realism, or one of your custom models? That smile had some Grinch vibes too
@NerdyRodent11 ай бұрын
That’s my girlfriend! Also yes, epic realism there 😉
@JavierGarcia-td8ut11 ай бұрын
I can't find the workflow on your GitHub - not uploaded yet?
@hatuey632611 ай бұрын
just awesome thanks !!!
@contrarian88707 ай бұрын
@NerdyRodent Is this specific workflow on your Github? I can't identify it by name...
@NerdyRodent7 ай бұрын
Yup, the unsampler one is there
@polystormstudio11 ай бұрын
I don't think you posted the workflow. The last one in the list is from 3 days ago "SDXL_Reposer_Basic.png"
@Gh0sty.1411 ай бұрын
It's there. ctrl+f and search Unsampler and you'll find it.
@ceiridge11 ай бұрын
It's the Renoiser.png
@hleet11 ай бұрын
Impressive. Does this replace the IPAdapter nodes? I'm not fond of IPAdapter - way too many nodes to use it, you never know which model to load, and it happens to crash a lot :/
@danilsi643111 ай бұрын
This channel is always fantastically entertaining😌 And thanks for putting together the cool stuff on git
@NerdyRodent11 ай бұрын
Glad you enjoy it!
@ДиДи-м3ю11 ай бұрын
Try using the "reference" model + "Canny" for SDXL models. This will give you much more interesting results than older models (with high "cfg"). Try taking a reference image from the street and writing "snow" in the prompt text...
@NerdyRodent11 ай бұрын
Thanks for the tip!
@ДиДи-м3ю11 ай бұрын
@@NerdyRodent In principle, you don't need to redo anything in your workflow (for SDXL), just replace the old models with the new T2I SDXL Line & Depth ones (I checked - everything works well)
@SixFt1210 ай бұрын
Zoe-DepthMapPreprocessor and LineArtPreprocessor failed to load and failed to import when using the Manager to install missing custom nodes. Are there alternatives for these nodes? How do I download them if so? Thanks for any help on this.
@NerdyRodent10 ай бұрын
You can drop me a dm on www.patreon.com/NerdyRodent 😀
@eucharistenjoyer11 ай бұрын
Amazing stuff. Have you by any chance developed this :P? It seems to have flown under the radar of most channels. Not related, but since you're extremely knowledgeable: I'm not sure if you have done any video showing the "CFG Rescale" node, but do you know how it works?
@NerdyRodent11 ай бұрын
Dynamic thresholding I covered a while back for A1111 in - AMAZING A1111 Stable Diffusion Extensions You Might Have Missed! kzbin.info/www/bejne/qoGYqqxsdpl6gNk - it’s basically that
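On the CFG Rescale question above: the commonly cited formulation pulls the standard deviation of the guided prediction back toward that of the conditional prediction, then blends the two. A minimal numpy sketch of that idea - not the ComfyUI node's actual source, and `phi` (the rescale strength) is per the common formulation:

```python
import numpy as np

def cfg_rescale(cond, uncond, guidance=7.5, phi=0.7):
    """Classifier-free guidance with rescaling, to tame the
    over-saturated look high guidance scales can produce."""
    x_cfg = uncond + guidance * (cond - uncond)            # standard CFG combine
    # Rescale the guided result so its std matches the conditional
    # prediction's, then blend with the un-rescaled result by phi.
    x_rescaled = x_cfg * (cond.std() / x_cfg.std())
    return phi * x_rescaled + (1.0 - phi) * x_cfg
```

With `phi=1.0` the output's overall std matches the conditional prediction exactly; `phi=0.0` is plain CFG.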
@ddiva197311 ай бұрын
Can you do the Animorph book cover transformation?
@vindyyt11 ай бұрын
Am I dumb, or is the only .json workflow in your link the one for the QR monster, with the rest just .png files? I can't figure out where to download the ComfyUI workflows
@NerdyRodent11 ай бұрын
you can scroll down to find the workflows 😀
@patheticcoder408111 ай бұрын
What's the advantage of adding noise with the KSampler and giving it another prompt?
@MeMine-zu1mg10 ай бұрын
I get a huge error at the ksampler Advanced node that starts off "mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)"
@summerdesire14108 ай бұрын
That's an SDXL / SD 1.5 mix - you're combining components from both.
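That "77x2048 and 768x320" error is the classic symptom of SDXL conditioning (77 tokens x 2048-dim embeddings) being fed into an SD 1.5 UNet, whose cross-attention expects 768-dim context. A hypothetical sanity-check sketch - the dimensions are real architecture constants, but this helper is illustrative, not a ComfyUI API:

```python
# Cross-attention context dims for the two model families.
CONTEXT_DIM = {"sd15": 768, "sdxl": 2048}

def check_conditioning(cond_dim: int, unet_family: str) -> None:
    """Raise early, with a readable message, instead of letting the
    matmul inside the UNet fail with a cryptic shape error."""
    expected = CONTEXT_DIM[unet_family]
    if cond_dim != expected:
        raise ValueError(
            f"conditioning dim {cond_dim} does not match {unet_family} "
            f"cross-attention dim {expected} - mixed SDXL/SD1.5 components?"
        )
```

The practical fix is to make sure the checkpoint, CLIP text encoders, ControlNets, and LoRAs in a workflow all come from the same family.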
@JKG-77711 ай бұрын
Thanks for the video. Where do I get the control net loras from?
@NerdyRodent11 ай бұрын
The resources section has links to the stabilityai control loras, sd models and more!
@JKG-77711 ай бұрын
@@NerdyRodent I found them. Thanks.
@ffq-ym4wr2 ай бұрын
Do you have a model download address for control lora?
@AvizStudio11 ай бұрын
What's the difference from regular image-to-image?
@JustFeral11 ай бұрын
Did you watch the full video?
@vintagegenious11 ай бұрын
@AvizStudio From their GitHub: "This node does the reverse of a sampler. It calculates the noise that would generate the image given the model and the prompt." Img2img would just take the original image to generate from, while here you get a noise latent that would generate that image; the point is to be able to do variations of that image
@AvizStudio11 ай бұрын
@@vintagegenious Hmm OK interesting
@AvizStudio11 ай бұрын
@vintagegenious Is that equivalent to "guessing the seed number of a given picture"? Pretending the picture was generated?
@vintagegenious11 ай бұрын
@@AvizStudio The seed decides the noise you add to the input image latent (with 0.0 denoise you keep only the input image, and with 1.0 you have only noise, so it's txt2img). Here it gives you the latent, not the seed, so you can think of it as finding the best input image latent, denoise and seed. I'm not sure whether for every latent there exists a seed that would give that latent; if there does, then you are right.
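To make the "it finds the latent, not the seed" point concrete: deterministic DDIM-style steps can be run backwards, which is what inversion-style unsampling exploits. A toy numpy sketch of one such step, with a fixed noise array standing in for the UNet's prediction (real inversion quality depends on the model predicting consistent noise in both directions):

```python
import numpy as np

def ddim_step(x, t_from, t_to, eps_fn, alpha_bars):
    """One deterministic DDIM step between noise levels.
    Run with t_from < t_to to invert (add noise), t_from > t_to to denoise."""
    a_from, a_to = alpha_bars[t_from], alpha_bars[t_to]
    eps = eps_fn(x, t_from)                                  # predicted noise
    x0_pred = (x - np.sqrt(1 - a_from) * eps) / np.sqrt(a_from)  # implied clean latent
    return np.sqrt(a_to) * x0_pred + np.sqrt(1 - a_to) * eps     # re-noise to target level
```

With a consistent noise predictor, inverting to a high noise level and stepping back recovers the original latent, which is why unsampling then resampling with a new prompt gives controlled variations rather than a brand-new image.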
@Kikoking-y9b5 ай бұрын
I have a question: was it a mistake that you connected the input of the Unsampler from CLIPTextEncode instead of from the ControlNet output? I tried working with your workflow to make a face look angry. The output always had high contrast, which made it look burned, until I changed the positive and negative inputs of the Unsampler. You had them connected from CLIPTextEncode to the Unsampler directly. Once I connected the Apply ControlNet output to the Unsampler instead, it came out very good and normal, without any contrast.
@NerdyRodent5 ай бұрын
Good spot 😉
@Kikoking-y9b5 ай бұрын
@@NerdyRodent Thanks for your reply. Can I ask you another question? I want to learn more things like the Unsampler - changing face details or making changes in the latent space - instead of things like inpainting. Do you know what I should learn?
@NerdyRodent5 ай бұрын
Sounds as if you’d probably like refacer then! Refacer - Painting to Realistic (and Vice-Versa) in ComfyUI kzbin.info/www/bejne/qGisq2uGqJyFaNU
@Kikoking-y9b5 ай бұрын
@@NerdyRodent you are great 😃 thanks 🙏
@jcboisvert144611 ай бұрын
Thanks
@NerdyRodent11 ай бұрын
And thank you too!
@RonnieMirands11 ай бұрын
Another great workflow for free. Amazing! I am getting an error on Zoe Depth Map. Just bypassing this node makes it work. Maybe it's not installed?
@NeOBRINFO11 ай бұрын
I can't find this workflow at the link you provided?
@jimdelsol194111 ай бұрын
Thanks !
@NerdyRodent11 ай бұрын
Welcome!
@hatuey632611 ай бұрын
It's strange: I don't have the resolution in the Zoe node, so I get an error! I've downloaded the model and still have the error
@NerdyRodent11 ай бұрын
Check the troubleshooting section for info on how to fix your local installation. 90% of the time you’ll need to update all 😊
@AmirBechouch11 ай бұрын
Is there any way to make ComfyUI run on Linux using ROCm 4.0? I believe that's the latest supported version for my RX 580.
@NerdyRodent11 ай бұрын
I’ve not got an AMD card, but my guess is that should work just fine! How to Install ComfyUI in 2023 - Ideal for SDXL! kzbin.info/www/bejne/aKOWpoCVl5itd5o
@Herman_HMS11 ай бұрын
My images with ControlNet are coming out extremely polarized and overexposed, even with low strength and negative prompts on a standard 1.5 model. Any advice on how to fix it?
@NerdyRodent11 ай бұрын
Prompts certainly help for me! Things like “dark” or “high contrast” in +ve, or like I show in the video with -ve prompting
@Herman_HMS11 ай бұрын
@@NerdyRodent will try, thanks for reply!
@betortas3311 ай бұрын
Does this works with sdxl?
@Moony_ultimate11 ай бұрын
Is there a way to use the Unsampler in Automatic1111??
@bgtubber4 ай бұрын
I think "Noise Inversion" in "Tiled Diffusion" does something similar. Look it up. It's in the img2img tab.
@kallamamran6 ай бұрын
My Unsampler only generates a black image
@kallamamran6 ай бұрын
Changed the sampler. Now it works
@bizarreadventurejojos537911 ай бұрын
It's a great video, but my result image was 512 x 768 without any error - it wasn't upscaled to a higher resolution when I input a 512 x 768 image using your workflow. I don't know how you can input a lower resolution and get a higher resolution output. You said your image was automatically upscaled to 1136 x 1440; I don't know why I can't do that 😅 thanks
@generalawareness10111 ай бұрын
Works, but the mtp notes thing - after it downloaded the missing node, ComfyUI was unresponsive until I deleted it.
@LouisGedo11 ай бұрын
👋
@NerdyRodent11 ай бұрын
👋
@CoreyJohnson19311 ай бұрын
I think you're a great teacher... sort of. I like to build these myself, so the .json or a better explanation of the nodes is necessary. It's frustrating getting the abridged version when I would like more in-depth instructions. Please find time to break these down like other ComfyUI YouTubers.
@NerdyRodent11 ай бұрын
You can indeed save the workflow image provided as a .json file if you like! What is it specifically about the Unsampler node that you'd like to know? It basically does just what I show in the video (and as its name suggests!). As for building your own, check out my ComfyUI Essentials video - kzbin.info/www/bejne/jH6cpKGpqtSkeMU
@MrSporf11 ай бұрын
I think you're an amazing teacher. please keep doing them exactly like you are now as those long-winded ones are frustrating. keeping them focused and clear like you do is much better and thank you for the workflow
@CoreyJohnson19311 ай бұрын
@@MrSporf Some people need additional support. Luckily, I crafted a better workflow after I realized there were too many nodes on screen. It can do everything and requires fewer connections between nodes, using "Efficient" nodes instead of the typical ones.
@MrSporf11 ай бұрын
@@CoreyJohnson193 good for you, well done. I guess you didn't need that extra support after all. Also, the workflow images are much better than the json files because you can actually see what is going on in the workflow.
@CoreyJohnson19311 ай бұрын
@@MrSporf Why not just have both?
@TickleMeTimbers11 ай бұрын
Honestly she looks completely different, it's not even the same face at all. Everything is very disproportionate from her mouth to her nose to her eyes and basically everything else. It doesn't look like the same person except at a distance if you squint.
@tonikunec11 ай бұрын
You gotta be blind mate... I've got the workflow and it does an excellent job.
@TickleMeTimbers11 ай бұрын
@@tonikunec does it do a better job than the girl shown in the video thumbnail? Because if not, then I can safely say you sir are the blind one.