Comments
@ambientvibes2118 a day ago
I'm sorry I couldn't find the upscaler and the link didn't work. Please can somebody tell me how I can download an upscaler for this workflow? Thank you 😁.
@mayraperkins6361 a day ago
Thank you for solidifying my thoughts on not using a catalog; it's OK, since I can't find a real reason to use one.
@dxnxz53 2 days ago
it blew my mind that you can load an entire workflow from the image! thanks for the great content.
@albertr-dev 3 days ago
A question: how do I control the resolution impact of ControlNet images (mostly non-square) on a 1024x1024 generation? It causes a crop effect that usually crops the control image to a square. I tried converting the image to a square by extending it with blur or a solid color, but that affects the maps.
@andu896 3 days ago
You mentioned you use Revision, but nowhere in the video do I see Revision.
@JackTorcello 3 days ago
I am trying this technique. For some reason - and not in all cases - there are green smudges and weird artifacts on the faces?!
@JackTorcello 4 days ago
Is it like ADetailer in A1111, in that it fixes ALL detected faces? If not, which ComfyUI node would act like ADetailer?
@n3bie 4 days ago
When I followed this tutorial, after I created the Primitive node and went to connect it to the CLIPTextEncodeSDXL node as you did @0:54, I had a text_l connector but not the text_g connector you have in the video. What am I doing wrong? I'm building this workflow underneath (and disconnected from) the default workflow. Could that be causing it?
@Nakasasama 5 days ago
For some reason the Manager doesn't show up.
@britebay 5 days ago
Excellent tutorial. Thank you!
@MR.LTN_STUDIO 6 days ago
Thanks for the recommendation!!!!
@TyHoudinifx 6 days ago
This is great. Question though: what if I already have a pipeline set up in Comfy that generates a number of images, and I want to combine all those images with custom text, like which model each one came from? I basically need a way to set up the graph manually without using this method, since everything is pre-defined. Is there a way to build an image grid after a whole bunch of images have already been created, like from a folder? The problem is that the samplers are already configured, so I can't just plug and play the way you're showing here. One uses a LoRA, another is SDXL Turbo so it uses a custom sampler, but most use a regular KSampler set to the same values. I'm just looking to compare the models themselves and compare the outputs, not sampler settings.
@orcsheep 7 days ago
Really high quality tutorial, logical and clear!
@lastlight05 7 days ago
LOL how do you install this LCM?
@jameswilkinson150 8 days ago
🎨🖌️I’m an artist and I’d love to use this to create variants of my work and also generate animations. Is this possible using this? Sorry if it’s a dumb question but I’m totally new to this. 🖌️🎨
@DaniloFreguglia 9 days ago
I have an NVIDIA T1000 with 4GB VRAM (I know that's a big limitation), testing a workflow with a ControlNet MiDaS depth map. I actually do not see any improvement from adding this LoRA to my workflow 🥲
@ethanhorizon 9 days ago
Thanks for the tutorial! Is the "noise seed" in KSampler Advanced the same as the "seed" in KSampler? You set the noise seed to 4; what's the meaning of that number? What if I left it at zero?
@Xavi-Tenis 10 days ago
Wow, you really explain just like the pros; well, you are a pro.
@Xavi-Tenis 10 days ago
perfect explanation
@alexmehler6765 11 days ago
Does it also work on hands that don't wave directly at the camera, or on cartoon models? I don't think so.
@opensourceradionics 12 days ago
Why is it not possible to use ControlNet with models other than SD?
@BlackDragonBE 12 days ago
Great videos, but those live streams in the playlist exclusive to members are really annoying. I love the way you explain things, but those really kill the vibe for me. More than half of the playlist are those live streams and almost every video includes "if you watched the last live stream...". I wish I did, but I don't have the money. Thanks for what you do, but I'm going to look elsewhere.
@dmarcogalleries254 13 days ago
Can you go more into the SD3 Creative Upscaler next time? I don't find much info on it. So you don't use it with a 2K image? It says 1000 or less? I'm trying to figure out if it is worth it at 25 cents per upscale. Thanks!
@RajeshJustaguy 16 days ago
Reminds me a bit of Blender.
@ursooperduper 16 days ago
What a fantastic introduction, thank you!!! I'm just getting started with AI exploration and am still familiarizing myself with terms (like KSampler, VAE, Lora, checkpoints, etc.) Can you recommend a resource to better understand these things in greater detail? Books, videos, articles... I'm a sponge! Help me :)
@byxlettera1452 17 days ago
The best video I have seen so far. Very clear and it gets to the point. Nothing to add. Thanks
@sunshineo23 17 days ago
I'm just shocked: after you corrected the starting denoise to 0.3, the change to the image is almost like editing the image by prompt. This is going to change the world for a lot of people.
@bigglyguy8429 18 days ago
Is this running entirely offline? Because when I tried it, it seemed to be trying to connect online? I created one image, and when I refused to give it online access it just froze up?
@franknichols2221 18 days ago
How do I become a member?
@franknichols2221 18 days ago
Thanks!
@monkeywunky4565 20 days ago
Hey guys, I had this issue where nothing would happen (a black box would appear in the face preview and no change would be made to the face when using the FaceDetailer). Turns out the image was just too big for it to handle, so I downscaled my image and it worked. Just in case anyone is having the same issue.
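A minimal sketch of that workaround, shrinking the image outside ComfyUI with Pillow before loading it into the face-detailing workflow; the 2048-pixel limit, function name, and file names are illustrative assumptions, not values from the video or the comment:

```python
from PIL import Image

# Hypothetical helper: shrink an image whose longest side exceeds max_side
# before feeding it to a face-detailing workflow. The 2048-pixel default is
# an arbitrary example, not a value from the video.
def downscale_for_detailer(src_path: str, dst_path: str, max_side: int = 2048) -> None:
    img = Image.open(src_path)
    scale = max_side / max(img.size)
    if scale < 1.0:  # only shrink, never enlarge
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    img.save(dst_path)

if __name__ == "__main__":
    downscale_for_detailer("portrait.png", "portrait_small.png")
```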
@arvinsim 20 days ago
What GPU is being used? That was fast!
@naimurrahman1406 21 days ago
ControlNet, IPAdapter
@sedetweiler 20 days ago
Pretty fun to comment on an old post with tech that didn't exist back then. It is a lot easier now for sure.
@BiancaMatsuo 21 days ago
Is it possible to do this with other WebUIs, like Forge WebUI?
@BrainSlugs83 21 days ago
Turns out I'm a lot more face blind than I previously thought I was! 😅
@guillaumericardsavard 21 days ago
Really great video! I recently discovered what you can do with AI image generators and I'm obsessed. I'm spending a lot of time these days on civitai and whatnot just to look at what people do with different models, because all I have is a CPU. Can't afford a GPU. I've installed ComfyUI and played with the nodes already, but generating a 512x512 image takes about 10 minutes. That is, with default settings. Will watch the other episodes and wait for the day when a CPU can run models just like GPUs do.
@dominikstolfa4579 21 days ago
I would like to use this method to add details to an already existing non-AI picture. Is that possible?
@dasmatthews 21 days ago
Hey Scott and community! First up, thank you so much for the very well and not too quickly explained tutorial!! I'm having a little problem though, and I don't know exactly what to change or do to correct it... At 8:19 you show the long list of ControlNet models and say that we'll see the list if we did everything correctly. I followed your video step by step, but I still don't get that list... any idea or hint on what might've gone wrong? Thanks so much in advance!
@dominikstolfa4579 21 days ago
The link for upscaler models changed and now I can’t find them.
@radiantraptor 22 days ago
I can't figure out how to make this work. Even if the MeshGraphormer produces good results and the hands look nice in the depth map, the hands in the final image often look worse than in the image before MeshGraphormer. It seems that the second KSampler messes up the hands again. Is there any way to avoid this?
@sedetweiler 21 days ago
You can always use a different model for the second sampler. Be sure you use a different seed! That was one I tripped over.
@chucklesb 14 days ago
@sedetweiler Wish this helped. I'm using the same model you are in the video and it just makes it worse.
@dominikstolfa4579 22 days ago
Wow. This actually looks simpler to me than using ControlNet on Automatic1111.
@larryross9380 22 days ago
Perhaps things have changed since this was published nine months ago, because this workflow just gave me dark, abstract images. But I learned a lot about how to build out a workflow! Thanks!
@lanoi3d 23 days ago
Thanks, this video made me realize SD isn't for me. This is WAY too complicated. It's no wonder now why most AI art at high res looks like crap if you look closely at the details.
@merttkn 23 days ago
When I try to install sd_xl_1.0.safetensors I get an error that says something like "wrong json extension" after waiting a long time for it to install. Can anyone help with this? Thanks.
@merttkn 24 days ago
I get an (IMPORT FAILED) error while trying to install the ReActor Node for ComfyUI. Can anyone help with this problem? Thanks. It says "install failed: ReActor Node for ComfyUI".
@sparkilla 24 days ago
Thanks for the information. I want extremely detailed, hyper-realistic images, and adding a sampler before my main sampler helps out a lot. Doing face swapping and SUPIR upscaling in the same workflow, the results are terrific, and it's also about 10-15 seconds faster per pic now as well.
@PekaCheeki 24 days ago
this shit is so schizo