Comments
@afrosymphony8207 · 3 days ago
dude you're god!!! thank you so much! instant sub!!
@EndangeredAI · 3 days ago
Glad it helped!
@JefHarrisnation · 4 days ago
This is great, thanks!
@EndangeredAI · 3 days ago
Glad it was helpful!
@afrosymphony8207 · 6 days ago
Thank you for this. I think you're the only one on here who did an in-depth Comfy tutorial for AYS. It made me realize I absolutely do not need it in my workflow looool. The difference is not that huge compared to what they showcased in their paper.
@EndangeredAI · 6 days ago
I personally do find it quite noticeable. I have stopped using the vanilla KSampler almost entirely in favor of the AYS workflow.
@afrosymphony8207 · 6 days ago
@EndangeredAI I gave it a try just to see if I'm missing something. Yeahh no, I still stand on bin'ness loool
@jason54953 · 8 days ago
This doesn't work. I keep getting the message "No module named 'insightface'".
@user-ug4ss9hr8l · 9 days ago
Thanks!
@EndangeredAI · 3 days ago
You're welcome!
@joeblow2286 · 9 days ago
Error occurred when executing IPAdapterTiled: "insightface model is required for FaceID models". I followed every step, including the requirements.txt.
@EndangeredAI · 6 days ago
There's a Load InsightFace node; you likely need to add it and link it.
@cchance · 12 days ago
Feels like you should have done some masking on the IPAdapters, especially the dress one, instead of trying to work around it picking up the woman.
@EndangeredAI · 11 days ago
You're already one step ahead. I'm going to cover masking options in the next video. I purposely didn't in this one, to have a baseline.
@simcules9498 · 12 days ago
What all of these workflows, including this one, have in common is that they are totally irrelevant (as of now) for e-commerce because of these "minor details" that AI always gets wrong. How would this be interesting for e-commerce if AI does not depict the garment correctly?
@EndangeredAI · 12 days ago
That's one of the objectives of the series. Some use cases are fine with an approximation, whereas others need an exact representation. I have a solution in mind which I may cover in the next video.
@GES1985 · 6 days ago
@EndangeredAI Manually draw with real-time inpainting on top of the image?
@EndangeredAI · 6 days ago
That can come with its own challenges; I'm just writing out the next video, covering that. Ultimately, most of the tech will give an approximation, which can be good enough for customers looking to get an idea of how something will look. To get the details just right, though, more work must be done. I still think it could be worth it.
@ismgroov4094 · 12 days ago
Workflow plz, sir :)
@EndangeredAI · 12 days ago
Updated in the description.
@petey514 · 13 days ago
Cool! Thanks so much.
@EndangeredAI · 3 days ago
My pleasure!
@binbash4940 · 15 days ago
Thank you, I've been looking for an alternative to the Lightning LoRAs without sacrificing the negative prompt; this fits the bill.
@EndangeredAI · 3 days ago
Glad you found value in it!
@Mranshumansinghr · 16 days ago
Exactly what I was looking for. Thank you!
@EndangeredAI · 3 days ago
Great to hear!
@petey514 · 16 days ago
Where exactly is the workflow on your website?
@EndangeredAI · 13 days ago
@pete, sorry, there was a delay in getting this up due to travel, and the workflow is sitting on my home computer, which is refusing remote access. It should be up tomorrow.
@yiluwididreaming6732 · 16 days ago
SO far SO GREAT!! More detail in faces. Render speed ×3. Played very nicely indeed with the IPAdapter and ControlNet workflow. LoRA not used, as not needed. Nice one!!
@Because_Reasons · 16 days ago
I still can't get over how terrible eyes look in SDXL.
@mikrodizels · 16 days ago
Yeah, the further the face is from the camera, the exponentially worse it seems to get, especially when generating images with many faces. Thankfully, the FaceDetailer node completely saves and takes care of every single face in that image, if you decide to keep it and work with the image further.
@EndangeredAI · 12 days ago
Agreed, I almost never run a picture of a person without FaceDetailer.
@michaelbayes802 · 16 days ago
so la la
@MilesBellas · 17 days ago
Via Pi, on the best use of AYS with Cascade:
"When working with Stable Diffusion Cascade and ComfyUI on an Nvidia GPU, a recommended workflow would involve setting up the AYS (Align Your Steps) node to double the number of steps you'd typically use for img2img generation. Then, split the sigmas at the halfway point using the SplitSigmas node, and feed only the second half to the KSampler. Here's a brief summary of the workflow steps:
1. Set the AYS node steps to double the usual amount (e.g., 20 steps instead of 10).
2. Use a SplitSigmas node to divide the AYS output at the halfway point (e.g., at step 10 for 20 steps).
3. Connect the second half of the SplitSigmas output to the KSampler input.
This workflow takes advantage of the capabilities of Nvidia GPUs and optimizes the use of Stable Diffusion Cascade within ComfyUI. Keep in mind that some adjustments might be needed depending on your specific hardware configuration and desired results."
@EndangeredAI · 6 days ago
I'm going to explore this! Thanks!
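The "double the steps, keep the second half" idea described above can be sketched in plain Python. This is an illustrative sketch only, not ComfyUI node code; the sigma values are made up, and `split_sigmas` only mimics what a SplitSigmas node does to a schedule list.

```python
# Illustrative sketch: build a sigma schedule with 2N values, split it at the
# midpoint, and hand only the second (lower-noise) half to the sampler, as
# the SplitSigmas node would in ComfyUI. Values below are made up.
def split_sigmas(sigmas, step):
    """Split a sigma schedule at `step`, like ComfyUI's SplitSigmas node."""
    return sigmas[:step], sigmas[step:]

schedule = [14.6, 6.5, 3.8, 2.4, 1.6, 1.0, 0.6, 0.35, 0.2, 0.0]  # 2N = 10 values
first_half, second_half = split_sigmas(schedule, len(schedule) // 2)
print(second_half)  # the half that would feed the KSampler
```

The first half is simply discarded; starting the sampler partway down the schedule is what gives the img2img-style behavior the comment describes.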
@MilesBellas · 17 days ago
via Pi "After a thorough search, I can confirm that Stable Diffusion Cascade does indeed use the DPM++ sampler, along with other samplers like DDIM, Euler, Heun, and LMS. DPM++ is considered a high-quality sampler that produces detailed images, although it can be slower than some other options. It's part of the family of DPM solvers designed specifically for diffusion models, and it's known for its accuracy and image quality. So, to answer your question, yes, Stable Diffusion Cascade does use the DPM++ sampler!"
@HTRO-EG · 18 days ago
You are amazing, but: you are presenting a podcast on a video that contains an image that does not move.
@EndangeredAI · 3 days ago
Yes, this is from early on in creating content! I'm trying to improve on it! Thanks for the input, though! I'm trying to learn from NetworkChuck!
@laurencechen1111 · 18 days ago
The workflow link provided doesn't work.
@ChaoLi-ou1pd · 20 days ago
How do I stop the server?
@EndangeredAI · 19 days ago
Ctrl+C in the terminal.
@user-rj7ks6ik2s · 20 days ago
I'm sorry, but your video is kind of strange; it feels like there's a filter applied that blurs the image and makes your eyes hurt. If possible, don't use it again.
@GifCoDigital · 19 days ago
It's called 360p.. great filter! Lol
@xxab-yg5zs · 18 days ago
@GifCoDigital No, it is not 360p; there is some strange glow that you don't see? Lol
@GifCoDigital · 18 days ago
@xxab-yg5zs No man, but I'd love to know where you get your weed so I can experience that too!
@uk3dcom · 21 days ago
I will stick with Turbo and Lightning models for now.
@EndangeredAI · 3 days ago
Each has its own issues, and this is a great alternative if the model you want doesn't have a Turbo/Lightning version.
@pfifo_fast · 21 days ago
I've never seen Comfy in action before; I've been using A1111. That is a very interesting way to do it. Too bad the final images are still total garbage. I really hope image generation gets to the point of realism soon; it just has so much potential.
@EndangeredAI · 3 days ago
What are you looking to achieve? For the purpose of keeping the videos brief, I don't get too much into perfecting images, but you can get amazing results with more fine-tuning. It's also important to understand the limitations of the tech to know where to push.
@TentationAI · 21 days ago
I get awful results with the exact same workflow (different SDXL checkpoint, btw).
@EndangeredAI · 3 days ago
I don't disagree, but I did shoot myself in the foot with my prompt choice 🫠. If you combine it with other things like IPAdapter and ControlNet you can get great results.
@Freeak6 · 21 days ago
Does it work with refiners? Should I also use AYS for the refiner? Should I divide the steps between both AYS nodes then (e.g., 10/5)? Thank you.
@hleet · 21 days ago
Nice video tutorial, well paced. Really enjoyed it.
@Andro-Meta · 21 days ago
I gave this a go with IP-Adapters and threw some LoRAs at it too. It did not play nicely with those, unfortunately.
@EndangeredAI · 21 days ago
Really? That's sad. I'm travelling now and was planning to test it more thoroughly on more workflows. Did you test it with other DPM sampling methods?
@EndangeredAI · 21 days ago
@Andro-Meta Would love it if you dropped by the Discord and posted your results. I'd like to know more about the issues you ran into.
@Andro-Meta · 21 days ago
@EndangeredAI After playing around with it some more, I found the issue was my CFG scale!! Ooops, nope. This is rad. I included a workflow on your Discord for anyone who wants to play around with this mixed with IPAdapters, PAG upscaling, and LoRAs.
@EndangeredAI · 21 days ago
@Andro-Meta Glad to hear you fixed it! I was afraid I'd made a video on a dud feature! Thanks for sharing the workflow! Much appreciated.
@thomasgoodwin2648 · 22 days ago
Lightning also comes as LoRAs, which work with any XL base model (even... 'ugh'... Pony). Available as 2, 4, or 8 step. You don't need special 'Lightning models' at all. 8 steps is a lot less than 20-something. Better results out of all models. ✌🥳👍
@EndangeredAI · 22 days ago
To a certain extent I agree. However, the issue with Lightning models is that they have a very distinct "style" and tend to trend very aggressively towards portrait styles, whereas Align Your Steps enhances an existing model's tendencies. It also doesn't require any additional setup or models beyond updating Comfy.
@thomasgoodwin2648 · 21 days ago
@EndangeredAI 😎 Always looking for new kit to improve results, so I'll check it out. I tend to use a lot of hand-tuned bells and whistles already (SAG, FreeU, RescaleCFG, etc., all with custom settings). I can't fully disagree that Lightning seems to have a 'style'. I use the 8-step because the 2- and 4-step simply had way too much moiré pattern artifacting. It still shows up in the 8-step at finer scales, and to some degree can be controlled with lowered KSampler denoise settings (provided a suitable noise source is used as the empty latent). I've only recently noticed some of the compositional effects it seems to have. (I use One Button Prompt set on 11 most of the time. A stochastic approach to studying stochastic models, if you will: fling mud at the walls for a while and see what sticks. Change a setting. Wash, rinse, repeat if necessary.) Appreciate the heads up and have a great day!
@rosteliokovalchuks215 · 25 days ago
👍
@mr.entezaee · 27 days ago
The link to download the workflows does not work for me. Can you check it?
@EndangeredAI · 21 days ago
Let me double-check that; thanks for letting me know.
@mr.entezaee · 21 days ago
@EndangeredAI I hope it will be fixed as soon as possible, because I really need it. Thankful!
@EndangeredAI · 15 days ago
@mr.entezaee Sorry for the late response; I was away over the last week. The link is in the description, and you can find it on my website here: endangeredai.com/transferring-facial-expressions-using-ipadapter-faceid-controlnet/ You might just need to click the download button twice, as the ad platform I'm currently using is a little annoying when used with WordPress. I'm planning to change it soon.
@mr.entezaee · 15 days ago
@EndangeredAI It finally downloaded and I am very happy about it. Really, thank you!
@Klokinator · 28 days ago
What are you typing, clicking, etc. to get the popup that lets you type in a word? An example is at the 15:00 mark, when you somehow bring up 'Search' to find the advanced KSampler. How did you pop that window up?
@EndangeredAI · 19 days ago
Double-click :)
@francescodetommaso-yc4il · 28 days ago
Awesome tutorials! But why do I see all black in the OpenPose preview image?
@EndangeredAI · 28 days ago
That means it hasn’t picked up a pose from your image.
@francescodetommaso-yc4il · 28 days ago
@EndangeredAI Thank you!
@opensourceradionics · 28 days ago
So I already gave you one thumbs up, but I would give you 10 thumbs ups, because you explain everything in detail without hiding very important information, unlike all the other tutorials on IPAdapter.
@EndangeredAI · 28 days ago
Glad it was helpful!
@zerox9646 · a month ago
You show up so abruptly and unexpectedly lol
@EndangeredAI · 28 days ago
Lol, it's about impact!
@mrrohitjadhav470 · a month ago
Hi, amazing tutorial. Please make a video on how to transfer style, color grading, and tone from one portrait to another without changing the subject (person).
@EndangeredAI · a month ago
This sounds like an interesting direction for a video, but I'm not sure I get what you mean. Are you saying you have image A, which has a certain style and tone, and image B, which has a subject, and you want to apply the style and tone from image A to the subject in image B?
@mrrohitjadhav470 · 29 days ago
@EndangeredAI Image A is an edited image and image B is a raw, unedited image. The two images have different subjects, but then how can we transfer style, color grading, and skin tone?
@jonhodges6572 · a month ago
At 6:46, what command terminal is that? It doesn't work from a cmd terminal in the Comfy folder.
@EndangeredAI · a month ago
I'm a Linux user, so that's the Linux command terminal. If you're on Mac or Windows your regular terminal should work; just make sure you cd into your ComfyUI folder, the same one where the requirements.txt file is.
@jonhodges6572 · a month ago
@EndangeredAI Thanks for the quick response. "install -r" won't work; does it need to be <pip -r install requirements.txt>? I can't get it to work :( It seemed to install OK using this method: kzbin.info/www/bejne/rHSmh6t9qaisntEsi=QIXBL8kUT1chjppr, but it comes up with "insightface required" if I try to use a FaceID IPAdapter model. I have uninstalled and reinstalled the IPAdapter nodes. On day 2 of trying to get this one module to work now :(
@jonhodges6572 · a month ago
@EndangeredAI Thanks, neither <install -r requirements.txt> nor <pip install -r requirements.txt> works for me; this one module is giving me a lot of trouble :(
@EndangeredAI · a month ago
The second one should have worked: pip install -r requirements.txt. What error are you getting?
@jonhodges6572 · a month ago
@EndangeredAI "No such command"; probably user error. I think I need to run the virtual environment first, but I can't remember how. I'm a bit out of my wheelhouse, tbh; I'm a 3D artist, not an IT guy lol :D I have had partial success with this method: kzbin.info/www/bejne/rHSmh6t9qaisntEsi=uYbRQwSqFdGbi07t. It seemed to install OK, but using any FaceID IPAdapter models throws up errors about missing insightface. The nodes module was uninstalled and reinstalled.
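For anyone stuck in the same place: a quick way to check whether the packages the FaceID nodes need are actually visible to the Python that launches ComfyUI is a small import probe. This is a hedged sketch; run it with ComfyUI's own Python (activate its venv first if your install uses one).

```python
# Probe the current Python environment for the packages the FaceID
# IPAdapter nodes depend on. A non-empty result means the environment
# that runs ComfyUI cannot see them, and "pip install -r requirements.txt"
# (or "pip install insightface onnxruntime") was run against a different
# Python than the one launching ComfyUI.
import importlib.util

def missing(packages):
    """Return the package names that cannot be imported from this environment."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

print(missing(["insightface", "onnxruntime"]))
```

If this prints `['insightface', ...]` even after installing, the usual culprit is having two Pythons: install into the right one with `python -m pip install -r requirements.txt`, using the same `python` that starts ComfyUI.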
@wellshotproductions6541 · a month ago
I have been loving your videos! Keep up the great work. One note: I don't know what is going on with your mic setup for recording, but those "P" sounds really blow out the mic sometimes. You might want to look into that as you grow your channel.
@EndangeredAI · a month ago
Thanks! Yeah, my mic setup is not ideal; I'm saving up for a mic upgrade and a pop filter! That should help things along!
@ManishKumar-885 · a month ago
Does IPAdapter composition crop the input image to a square ratio?
@EndangeredAI · a month ago
The prepare-image node does the cropping.
@WiLDeveD · a month ago
Extremely useful tutorial. Thanks, and please make more. ❤💯
@EndangeredAI · a month ago
Glad it’s helpful!
@WiLDeveD · a month ago
Fine content, thanks. 👍👍👍 I'm wondering: could we use sunglasses on a face as a style? I mean a picture of glasses.
@EndangeredAI · a month ago
Yes, I don't see any reason why not.
@giotto_4503 · a month ago
Isn't this just image-to-image? And I bet img2img can probably give better and closer results to the original; just play around with the denoise value. I don't get it; enlighten me.
@EndangeredAI · a month ago
To a certain extent you're not wrong. However, there are several ways in which it's different. Img2img provides less fine-grained control and likely requires more tinkering to achieve the same results.

Using the style + composition example specifically: besides the fact that we can take two inputs, one for style and one for composition, with img2img you would run the risk of elements from the source image spilling over into the final image that you may not want, depending on the noise level. Vice versa, to avoid spillover the image may be too noised to produce a significant result. Specifically with the image of the woman sitting at the cafe, not only would you need to find the balance point to get the composition and not the girl's elements, such as face and clothing, but there is also the time and effort of getting the denoising right.

In my opinion, this at the very least streamlines the process, besides any additional fine-grained control it provides. Having said that, I think you bring up a valid question. It's worth experimenting with to compare, and I may do a video on it if I have the time, as ultimately this remains a hypothesis.
@ankethajare9176 · a month ago
Does keeping my steps up at 200 make sense?
@EndangeredAI · a month ago
Oh goodness, no, it does not. At most you shouldn't need more than 50, and only for certain samplers. I usually work with 20-40, including refiner steps.
@adydeejay · a month ago
Oh man! You're my hero! Minute 6:27, where you say to add insightface and onnx to ComfyUI's requirements.txt, gave me back the 3 hours I spent trying to fix the IPAdapter nodes after the last update. THANK YOU! 👍
@EndangeredAI · a month ago
Glad to be helpful! I found other guides had this long, complicated process, but... this is much simpler and handles 90% of situations haha
@yiluwididreaming6732 · a month ago
You were going to give an example of using a negative image??? Thank you.
@EndangeredAI · a month ago
It's in the next video: IPAdapter -> facial expressions.
@Make_a_Splash · a month ago
Very interesting. Thanks for the workflow. Unfortunately it gives me an error when running it with SD1.5: "ClipVision model not found". Any ideas how to fix this? I have updated everything.
@EndangeredAI · a month ago
Are you using the unified loader, and do you have the CLIP Vision files downloaded and renamed correctly? If there are any spacing issues it might cause problems. Regardless, you can always grab the IPAdapter Advanced node + model loader, and the CLIP Vision loader, to select the CLIP Vision model manually.
@fstre1214 · a month ago
Fun!
@riggitywrckd4325 · a month ago
I installed the Krita AI add-on by Acly. You can load an OpenPose skeleton that you can move around in real time using Lightning or LCM. It very much helps get the face angled the way you want. It also has all the IPAdapter functionality. Worth checking out if you want even more power and direct access to manipulating the image. It still uses Comfy, btw.
@mr.entezaee · a month ago
When will it be released for free?
@EndangeredAI · a month ago
No date, but Stability usually releases the open-source stuff shortly after the commercial one. Now, as for "free", it's a bit more complicated. To get the models you will need a subscription. Whether fine-tunes and checkpoints will still be bound by that license, who knows. Then, if they do require it, there's the question of how, and whether, they will enforce it. Simply put, there will very likely be "free" models soon, but depending on how you get them they may not be "legal". Hope that answers it.
@mr.entezaee · a month ago
@EndangeredAI Yes, I hope it is free. Otherwise, this may be the end of the job.
@starg47 · a month ago
If I were Stability AI I would charge a one-time fee of $20; they could do a lot better if they were funded properly.
@mr.entezaee · a month ago
@starg47 But all the popularity, and the community making all those better models, would not have been possible if they charged money. And this could cause them to fail. We will see.
@Darkwing8707 · a month ago
As soon as I saw that bus, I KNEW you were a Factorio player 😂 Btw, ComfyUI Essentials has a node called "Remove Latent Mask", so you don't need to do a VAE decode and encode.
@EndangeredAI · a month ago
No one can ever say that video games don't have real-life applications 🤣🤣🤣 Join my Discord! I usually ping when I'm playing a game. If you're a Factorio player I'm sure our tastes intersect. I'll check out that node! Thanks!