LATENT Tricks - Amazing ways to use ComfyUI

123,384 views

Olivio Sarikas

Comments: 169
@jorgeantao28
@jorgeantao28 Жыл бұрын
This is an amazing tool for professional artists. The level of detail you can achieve reminds me of Photoshop... AI art is not a threat to artists, but rather a complement to their work.
@DJVARAO
@DJVARAO Жыл бұрын
Man, you are a wizard. This is a very advanced use of SD.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you very much :)
@JonathanScruggs
@JonathanScruggs Жыл бұрын
The more I play with it, the more I'm convinced that this is the most powerful UI to Stable Diffusion there is.
@dejansoskic-m2p
@dejansoskic-m2p Жыл бұрын
The moment Houdini MLOPs is updated to be able to use LoRAs and LyCORIS, it's going to be the most powerful.
@andresz1606
@andresz1606 Жыл бұрын
This video is now #1 in my ComfyUI playlist. Your explanation at 17:50 on the LatentComposite node inputs (samples_to, samples_from) is priceless, as is the rest of the video. Looking forward to asking some questions in your Discord channel.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
#### Links from the Video ####
Join my Discord: discord.gg/XKAk7GUzAW
ComfyUI Projects ZIP: drive.google.com/file/d/1MnLnP9-a0Pif7CZHXrFo-pAettc7KAM3/view?usp=share_link
ComfyUI Install Guide: kzbin.info/www/bejne/rIa3h2treZpkr80
@MrGTAmodsgerman
@MrGTAmodsgerman Жыл бұрын
The node system can make things complicated, but it really unlocks the potential of a lot of stuff. Seeing it applied to AI images gives the process more meaning and control, so it could be considered artistic again, since with ComfyUI the human takes a huge amount of control.
@KyleandPrieteni
@KyleandPrieteni Жыл бұрын
YES! Have you seen the custom nodes on Civitai? They are nuts, and you get even more control.
@MrGTAmodsgerman
@MrGTAmodsgerman Жыл бұрын
​@@KyleandPrieteni Actually no, i haven't. Thanks for the info.
@bjornskivids
@bjornskivids Жыл бұрын
Ok, this is awesome. You inspired me to make a 4-sampler comparison-bench which lets me get 4 example pics from one prompt when exploring different engines. It makes sampler/settings comparisons simple and I can crank out sample pics at a blistering pace now. Thank you :)
@mrjonlor
@mrjonlor Жыл бұрын
Very cool! I’ve been playing with latent composition in ComfyUI for the past couple days. It gets really fun when you start mixing different art styles within the same image. You start getting some really wild effects!
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you. That's a great idea too. I was thinking about using different models in the same image, but then thought that might be too complex for this video
@workflowinmind
@workflowinmind Жыл бұрын
Great examples. In the first one you should pipe the primary latent into the subsequent ones, because you are over-stepping at each image (in your example, the last image has accumulated all the previous steps).
@bonecast6294
@bonecast6294 Жыл бұрын
Could you possibly explain it in more detail or provide a node setup? Is his node setup not correct?
@AllYouWantAndMore
@AllYouWantAndMore Жыл бұрын
I asked for examples, and you delivered. Thank you.
@___x__x_r___xa__x_____f______
@___x__x_r___xa__x_____f______ Жыл бұрын
Found this particular course super inspiring. Makes me keen to experiment
@lovol2
@lovol2 Жыл бұрын
Okay I'm convinced, I will be trying this out, fantastic demo
@LICHTVII
@LICHTVII Жыл бұрын
Thank you! It's hard to find a no-BS explanation of what does what. Helps a lot!
@JimmyGunawan
@JimmyGunawan Жыл бұрын
Great tutorial on ComfyUI! Thanks Olivio~ I just started using this today; reloading the "workflow" really helps with efficiency.
@CrimsonDX
@CrimsonDX Жыл бұрын
That last example was insane O_O
@Kyvarus
@Kyvarus Жыл бұрын
The only thing I wish Comfy had is the ability to sequentially take frames from a video and use them as an OpenPose mask for each generation over time. Video generation would be amazing.
@Dizek
@Dizek Жыл бұрын
I'm new, but can you select a folder of images? You could pre-split the images and use them.
@Kyvarus
@Kyvarus Жыл бұрын
@@Dizek There is no way within ComfyUI to control the selection of images in sequential order, which means you can only have a static reference image; no one has written a way to load multiple images from a folder in order yet. Honestly, if I get the time this week I'll throw the script together. The add-ons for ComfyUI are very powerful, so it's likely not a big issue. The main problem is that we need the end-of-generation event to trigger loading the next image, which will require someone to learn the software's API. So even if you have pre-split images in a folder, there is currently no way to call the next image in the folder by index.
@anuroop345
@anuroop345 Жыл бұрын
@@Kyvarus We can save the workflow in API format, then use a Python script to feed in images in sequence, save the output images, and combine them later.
@Kyvarus
@Kyvarus Жыл бұрын
@@anuroop345 I'd never heard of using the saved workflow files as an API format for Python scripts, but that sounds really nice. Something along the lines of: break the loaded video down into input frames, standardize the frame size, decide the FPS of the final render and pick an appropriate number of frames, load the workflow API, set the input picture, model selection, LoRAs, prompt, etc., and run it per image in a for loop over the number of images, then recompile an MP4 from the folder's image sequence. Done? I guess this could also be used to compile OpenPose videos from standardized characters acting in natural video, which would be great, allowing more natural posing without the artifacts other ControlNet types produce over video.
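A rough sketch of the approach described in the two comments above, assuming ComfyUI is running locally on port 8188, the workflow was exported with "Save (API Format)", and the pre-split frames have already been copied into ComfyUI's input folder. The node id "10" and the file names are placeholders you would look up in your own exported JSON.

```python
# Sketch: queue a ComfyUI workflow (API format) once per video frame.
import json
import urllib.request
from pathlib import Path

COMFY_URL = "http://127.0.0.1:8188/prompt"
LOAD_IMAGE_NODE_ID = "10"       # placeholder: find the LoadImage node id in your JSON
FRAME_DIR = Path("frames")      # pre-split frames, e.g. frame_0001.png, frame_0002.png, ...

workflow = json.loads(Path("workflow_api.json").read_text())

for frame in sorted(FRAME_DIR.glob("frame_*.png")):
    # Point the LoadImage node at the next frame; everything else stays fixed.
    workflow[LOAD_IMAGE_NODE_ID]["inputs"]["image"] = frame.name

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The saved output frames can then be recombined into a video with any external tool, as suggested above.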
@Dizek
@Dizek Жыл бұрын
Wow, I discovered Comfy just recently, but it's more than it looks. You can even run the same prompt through all the available samplers to test which ones work best with the style you are going for.
@petec737
@petec737 Жыл бұрын
The latent upscaler is not adding more details as you mentioned; it's using the nearest pixels to double the size (as you picked), similar to how you'd resize an image in Photoshop. The KSampler is the one that adds more details. That's a confusion I see many people making. For best quality you don't upscale the latent; you upscale the image with the UpscaleModelLoader and then pass it through the KSampler.
@bobbyboe
@bobbyboe 11 ай бұрын
I wonder, then, what latent upscaling is useful for?
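To illustrate the exchange above: a latent upscale step is conceptually just an interpolation of the 4-channel latent tensor, so by itself it adds no detail; the KSampler that re-denoises the enlarged latent afterwards is what fills detail in. A minimal PyTorch sketch of that idea, assuming the [batch, 4, height/8, width/8] latent shape Stable Diffusion uses (not ComfyUI's actual node code):

```python
# What an "upscale latent by 2x, nearest" step amounts to, conceptually.
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)      # stand-in for a 512x512 image's latent
upscaled = F.interpolate(latent, scale_factor=2, mode="nearest")
print(upscaled.shape)                   # torch.Size([1, 4, 128, 128])
# A KSampler pass over `upscaled` (at moderate denoise) is what adds the detail.
```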
@stephancam91
@stephancam91 Жыл бұрын
Awesome video - very educational - thank you! I've been meaning to get ComfyUI installed - just have to find the time. (I swear, I'm having to update my AI skills weekly - it's nearly as time consuming as keeping up with Unreal Engine, lol).
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you very much. ComfyUI is a blast to play with. This will suck up your hours like nothing 😅
@jeremykothe2847
@jeremykothe2847 Жыл бұрын
The good news is it's easy to install. The "bad" news is that it really needs more functionality to be useful, but it has a lot of promise if it's extended. If they manage to get the community to write nodes for them....
@Mimeniia
@Mimeniia Жыл бұрын
Waaaaaaay easier and quicker than Auto1111 to install...but a bit more intimidating to use on an advanced level.
@stephancam91
@stephancam91 Жыл бұрын
@@Mimeniia Thanks so much. I'm used to using node based programs (DaVinci Resolve + Red Shift). Hopefully, I'll be able to pick it up quickly! Just a matter of finding the time.
@andrewstraeker4020
@andrewstraeker4020 Жыл бұрын
Thank you for your excellent explanations. I especially appreciate your excellent English, which is understandable even for non-native speakers.😸 Every time I watch your videos, I want to run and experiment. New ideas and possibilities every time. 😺👍👍👍
@TSCspeedruns
@TSCspeedruns Жыл бұрын
ComfyUI is amazing, I love it
@OlivioSarikas
@OlivioSarikas Жыл бұрын
thank you :)
@pkrozie
@pkrozie Ай бұрын
Thank you for this video, Amazing work.
@panzerswineflu
@panzerswineflu Жыл бұрын
In a sea of AI videos I started skimming through, this one got my subscribe. Now if only I had a rig to play with this stuff.
@remzouzz
@remzouzz Жыл бұрын
Amazing video! Could you also make a video where you go more in depth on how to install and use ControlNets in ComfyUI?
@caiubymenezesdacosta5711
@caiubymenezesdacosta5711 Жыл бұрын
Amazing, I will try it this weekend. As always, thanks for sharing with us.
@silentwindyou
@silentwindyou Жыл бұрын
This method seems similar to a sequence of [from:to:when] prompts in the WebUI, with the steps added up and an image output after each prompt's custom steps finish. Nice process!
@TesIaOfficial.on24
@TesIaOfficial.on24 Жыл бұрын
Never heard about that in the WebUI. Is this possible?o.o
@silentwindyou
@silentwindyou Жыл бұрын
@@TesIaOfficial.on24 Because [from:to:when] is also applied in latent space, the same logic applies, but the WebUI outputs the result from the last step by default, not after each [from:to:when] prompt.
@rakly347
@rakly347 Жыл бұрын
This UI needs some Factorio influence, it's so chaotic!
@METALSKINMETAL
@METALSKINMETAL Жыл бұрын
Excellent, thanks so much for this video!
@Vestu
@Vestu Жыл бұрын
I love how your ComfyUI setup is not overly OCD but a "controlled noodle chaos" like mine are :)
@DezorianGuy
@DezorianGuy Жыл бұрын
I appreciate your work, but can you make a video in which you share the basic working process - I literally mean a step by step guide. In your 2 released videos about ComfyUI I barely understood what you were talking about or what nodes are connected to which (looks like spaghetti world to me). If you could just create single projects from the start.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Hm... that could be an interesting idea. In the meantime, the best way to go about this is to look at A1111 and compare the individual parts to the nodes in ComfyUI, because they are often similar or the same. For example, the Empty Latent Image is simply the size setting you have in A1111, and the KSampler is just the render settings in A1111, but with some more options.
@DezorianGuy
@DezorianGuy Жыл бұрын
@@OlivioSarikas I finally managed to replicate your project; it was a bit confusing at first. Do those checkpoint files one can choose from provide different art styles?
@lovol2
@lovol2 Жыл бұрын
I think if you've not used Automatic1111 before looking at this, your head will explode! It will be worth the time and effort to install Automatic1111 first; then you will be familiar with all of the terms he is using here, and also see the power in all the mess and chaos of the little lines flying all over the place.
@wolfganggriewatz3522
@wolfganggriewatz3522 Жыл бұрын
I love it. Do you have plans for more of this?
@alexlindgren1
@alexlindgren1 Жыл бұрын
Nice one! I'm wondering if it's possible to use ComfyUI to change the tint of an image. Let's say I have an image of a living room, and I want to change the tint of the floor based on an image I have of another floor. How would you do that?
@benjamininkorea7016
@benjamininkorea7016 Жыл бұрын
Having a lineup of beautiful girls of different races like this is going to make me fall in love about 10 times per hour I think. Fantastic work as always!
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you very much. Yes this is great to show the beauty of different ethnicities :)
@rsunghun
@rsunghun Жыл бұрын
you are so smart and amazing!
@kennylex
@kennylex Жыл бұрын
I see that you use prompts like "RAW Photo", "8k uhd DSLR" and "High quality", which I often say are useless prompts that don't do what people think they do. RAW is just uncompressed data that gets converted later, so you don't actually want that look in an image, since it gives flat colors; what people usually want is a style like "Portrait Photo", which is often a color setting in cameras. BUT! My idea is: could you use the nodes to make side-by-side images where "RAW photo" is compared against an image without that prompt, or with it replaced by prompts like "Portrait photo", "warm colors" or "natural color range"? With nodes you can make sure you get the same seed and that the results are generated at almost the same time. And when you write "high quality", what do you actually want? The AI can't produce higher graphical quality than it is capable of, but I guess it changes something, since so many people use that prompt. So could you run some tests to see what the most popular prompts do, like whether "Trending on Artstation" is better than "Trending on Flickr" or "Trending on e621"? Edit: A tip for everyone: rather than writing "African woman", use a nationality like "Kenyan woman" to get that nice skin tone and great-looking women. Nations further south give a rounder face on males that can look rather cool, while nations in North Africa have a lighter skin tone and often an Arabic or ancient Roman look.
@Spartan117KC
@Spartan117KC Жыл бұрын
Great video as always, Olivio. I have one question - you say at 9:28 that 'you can do all of this in just one go'. Were you referring to the 4x upscale with less detail that you had already mentioned, or to another way to do the latent upscale workflow with better results and fewer steps?
@roroororo7088
@roroororo7088 Жыл бұрын
I like videos about this UI. Can you do examples of clothes changing, please? (It's harder and inpaint-like, but more friendly to use.)
@enriqueicm7341
@enriqueicm7341 11 ай бұрын
It was useful!
@digitalfly73
@digitalfly73 Жыл бұрын
Amazing!
@shareeftaylor3680
@shareeftaylor3680 4 ай бұрын
Can you please show us how to use the TAESD VAE decoder node? I can't find any video on this anywhere 😢
@darmok072
@darmok072 Жыл бұрын
How did you keep the image consistent when you did the latent upscale? When I try your wiring, the face of the upscaled image comes out quite different.
@lisavento7474
@lisavento7474 11 ай бұрын
Anything yet to fix wonky faces in DALL-E crowds? I have groups of monsters! I've tried prompts like "asymmetrical, detailed faces" and it did a little better, but I have perfect images except for the crowds in the background that I need to fix.
@dxnxz53
@dxnxz53 6 ай бұрын
You're the best!
@im5341
@im5341 Жыл бұрын
5:30 I used the same flow, but instead of KSampler I put KSampler Advanced at the second and third stages. 1st KSampler: steps 12 | 2nd KSampler Advanced: start_at_step 12, steps 20 | 3rd KSampler Advanced: start_at_step 20, steps 30.
@PaulFidika
@PaulFidika Жыл бұрын
Olivio woke up this morning and chose violence lol
@arnaudcaplier7909
@arnaudcaplier7909 Жыл бұрын
Hi @OlivioSarikas, let me share what I think: I have been working in the domain of creative intelligence (originally CNN based) since 2017, and your insights are solving problems that I have been facing for years... just crazy stuff ❤‍🔥, you are an absolute genius! Great respect for your work. Thank you for the insane value you share with us 🙏
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you very much :)
@toixco1798
@toixco1798 Жыл бұрын
it's the best UI, but I don't think its creator is the kind of person to seriously maintain it, and I think he did it more for fun or curiosity before surely moving on
@MishaJAX_TADC5
@MishaJAX_TADC5 Жыл бұрын
@OlivioSarikas Hi, can you explain: when I use Latent Upscale, my smaller image turns into a different image. Do you have any idea how to fix it, or is there something wrong with what I'm doing?
@jeffg4686
@jeffg4686 9 ай бұрын
Trying to understand what a latent consists of for a previous image. I can see that somehow it's still using the seed or something; is the seed itself stored in the latent? Any thoughts? Update: never mind on this, actually. I see that it likely just holds that as part of the "graph", and the next stage has access to it because it's part of the branch that led up to it (guessing).
@Ibian666
@Ibian666 Жыл бұрын
How is this different than just rendering the same image with a single word changed? What's the benefit?
@MAKdotCZ
@MAKdotCZ Жыл бұрын
Hi Olivio, I wanted to ask you if you could give me some advice. I have been using SD AUTOMATIC1111 so far and now I am trying ComfyUI. And my question: is there any possibility to push a prompt and settings to ComfyUI from the images generated by SD A1111? In SD A1111 I use PNG INFO and then send TXT2IMG. Is there any similar way to do this in ComfyUI but from the image I generated in SD A1111 ? Thank you very much, MAK
@pratikkalani6289
@pratikkalani6289 Жыл бұрын
I love ComfyUI; it has so many use cases. I'm a VFX compositor by profession, so I'm very comfortable with node-based UIs (I work in Nuke). I wanted to know: if we want to use ComfyUI as a backend for a website, can I run it on a serverless GPU?
@HN-br1ud
@HN-br1ud Жыл бұрын
I enjoyed watching this. Thank you! ^^
@MaximusProxi
@MaximusProxi Жыл бұрын
Hey Olivio, hope your new PC is up and running now!
@OlivioSarikas
@OlivioSarikas Жыл бұрын
yes, it is. It really was the USB-Stick that was needed. Didn't connect the front RGB yet though ;)
@MaximusProxi
@MaximusProxi Жыл бұрын
Glad to hear! Enjoy the faster rendering :)
@LeKhang98
@LeKhang98 Жыл бұрын
Awesome channel. I have 2 questions, please help:
- Is there any way to import real-life images of some objects (such as clothes, a watch, a hat, a knife, etc.) into SD?
- Do you know how to keep these objects consistent? I know about making consistent characters, but that works for the face and hair only, while I want to know how to apply it to objects. (Example: one knife with multiple different girls and different poses.)
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you :) - Yes, you can do that in ComfyUI with the image loader. If you want a model that is trained on an object, you would need to create a LoRA or Dreambooth model.
@krz9000
@krz9000 Жыл бұрын
Create a LoRA of the thing you want to bring into your shot.
@LeKhang98
@LeKhang98 Жыл бұрын
@@OlivioSarikas ​ @Chris Hofmann Thank you. I'm not sure if it can work with clothes, though. I have some t-shirts and pants with logos, letters, or images on the front. Depending on the pose of different characters, the t-shirt, pants, and their images will change accordingly. That's why I'm hesitant to learn how to use AI tools since I don't know if I could do it or if I should just hire a professional photographer and model to do it the traditional way. Anyway, I do believe that in the near future, everyone should be able to do it easily. This is so scary & exciting.
@DemonPlasma
@DemonPlasma Жыл бұрын
Where do I get the RealESRGAN upscaler models?
@amva3455
@amva3455 Жыл бұрын
With ComfyUI, is it possible to train my own custom models, like Dreambooth? Or is it just for generating images?
@maadmaat
@maadmaat Жыл бұрын
I love this UI. Can you also do batch processing and use scripts with this already? Creating animations with this workflow would be really convenient.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Thank you. Not yet, unless you build a series of nodes. I really hope batch processing and looping are coming soon.
@miasik1000
@miasik1000 Жыл бұрын
Is there a way to set the upscale factor? 1.5, 2, ...
@void2258
@void2258 Жыл бұрын
Any way to make this use variables? I ask because a well-known issue with this kind of repetition is accidental breakage from a forgotten or mistaken edit. When you have to edit in a bunch of different places, you can forget one or more of them, or make mistakes between them and break the symmetry. Being able to feed it "raw portrait... of an X-year-old Y woman..." and write the rest of the prompt once would make this easier to handle. Also, in theory, you could produce the latent WITHOUT X and Y filled in and add those on at each stage, feeding them all from a single latent instead of chaining, though I'm not sure that would work. Similar to the second thing you did, but more automatic. I am speaking from a coder's perspective and am not sure if any of this is sensible or not.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
ComfyUI is still in early development. Most nodes need more inputs/outputs, and more nodes are needed. So for now things are rather complex, and you need duplicate nodes for every new step you want to do, instead of being able to route things through the same node several times. I'm not sure how you imagine combining different latent images without the x/y setting. If the latent image you provide is smaller, it will stick to the top-left corner. If it is not smaller, it will simply replace the latent image you put it on top of. So it needs to be smaller, as there is no latent image weight that can be used to mix the strength, and no mask to mask it out - that would be a different process (the one I showed before).
@mickeytjr3067
@mickeytjr3067 Жыл бұрын
One of the things I read in the tutorial is that "bad hands" doesn't work, while (hands) in the negative will remove bad hands.
@matthewjmiller07
@matthewjmiller07 Жыл бұрын
How can I set up these same flows?
@teslainvestah5003
@teslainvestah5003 Жыл бұрын
Pixel upscale: the upscaler knows that it's upscaling white rounded rectangles.
Latent upscale: the upscaler knows that it's upscaling teeth.
@beardedbhais4637
@beardedbhais4637 Жыл бұрын
Is there a way to add Face restoration to it?
@ryanhowell4492
@ryanhowell4492 Жыл бұрын
Cool Tools
@OlivioSarikas
@OlivioSarikas Жыл бұрын
I love comfyui :)
@paulopma
@paulopma Жыл бұрын
How do you resize the SaveImage nodes?
@Avalon19511
@Avalon19511 Жыл бұрын
Olivio, a question: how would I go about putting my face on an image without training (besides Photoshop, of course), or is training the only way?
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Why not do the LoRA training? It's very easy and fast.
@Avalon19511
@Avalon19511 Жыл бұрын
@@OlivioSarikas Does A1111 recognize image links like midjourney?
@dax_prime1053
@dax_prime1053 Жыл бұрын
this looks ridiculously complex and intimidating.
@chinico68
@chinico68 Жыл бұрын
Will it run on Mac??
@linhsdfsdfsdfds4947
@linhsdfsdfsdfds4947 Жыл бұрын
Can you share this workflow?
@digwillhachi
@digwillhachi Жыл бұрын
Not sure what I'm doing wrong, as I can only generate 1 image; the others don't generate 🤷🏻‍♂
@EchoBycz
@EchoBycz Жыл бұрын
:) looks familiar :D
@pmlstk
@pmlstk Жыл бұрын
Have you figured out how to redirect the models folder to your existing Automatic1111 model folder? That's way too many GB for duplicate files.
@benjaminmiddaugh2729
@benjaminmiddaugh2729 Жыл бұрын
I don't remember what Windows calls it, but the Linux term you want is "symlink." You can make a virtual file or folder that points to an existing one (a "soft" link) or you can make it so the same file/folder is in multiple places at once (a "hard" link - soft links are usually what you want, though).
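A minimal sketch of the symlink idea, written in Python so the same call works on Linux and Windows (on Windows, creating symlinks may require admin rights or Developer Mode). The paths here are examples; adjust them to your own installs.

```python
# Point ComfyUI's checkpoints folder at an existing A1111 model folder via a
# symbolic link, so multi-GB checkpoint files aren't duplicated.
import os
from pathlib import Path

a1111_models = Path("C:/stable-diffusion-webui/models/Stable-diffusion")
comfy_checkpoints = Path("C:/ComfyUI/models/checkpoints")

# Move the existing folder out of the way first.
if comfy_checkpoints.exists() and not comfy_checkpoints.is_symlink():
    comfy_checkpoints.rename(comfy_checkpoints.with_name("checkpoints_backup"))

# target_is_directory=True is required for directory links on Windows.
os.symlink(a1111_models, comfy_checkpoints, target_is_directory=True)
```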
@ajartaslife
@ajartaslife Жыл бұрын
Can ComfyUI batch img2img for animation?
@animelover5093
@animelover5093 Жыл бұрын
sigh .. not available on Mac at the moment : ((
@NoPr0gress
@NoPr0gress Жыл бұрын
thx
@arturabizgeldin9890
@arturabizgeldin9890 Жыл бұрын
I'll tell you what: you're a natural born tutor!
@Silversith
@Silversith Жыл бұрын
The latent upscale randomised the output too much from the original for me, especially if it's a full-body picture. I've output the latent upscale before sending it through the model again, and it basically just reduces the quality more before reprocessing it. I ended up just passing it through the model twice to upscale it.
@Silversith
@Silversith Жыл бұрын
Tomorrow I'm gonna try tweaking the code a bit or including some custom nodes to pass the seed from one to the next so it stays consistent and does a proper resize fix
@OlivioSarikas
@OlivioSarikas Жыл бұрын
In latent upscale you have different upscale methods. Give them a try and see if that changes your result to what you need.
@Silversith
@Silversith Жыл бұрын
@@OlivioSarikas I submitted a pull request that passes the seed value through to the next sampler. Seems to work well 🙂
@Dizek
@Dizek Жыл бұрын
@@OlivioSarikas Or better, create different nodes with all the available upscale methods and try them all at once.
@blacksage81
@blacksage81 Жыл бұрын
Yeah, it isn't easy to get Black people by calling it that way. I've found that using "chocolate" or "mocha" colored skin, and other brown colors, will get the skin tone; in my limited testing, the darker colors help the characters gain more African features.
@GiggaVega
@GiggaVega Жыл бұрын
Hey Olivio, this was an interesting tool, but I really don't like the layout, it's too all over the place. Sorry to spam you but I tagged you in a video I just uploaded to youtube about: Why I don't Feel Real Artists have anything to worry about regarding Ai Art Replacing them. Feel free to leave your thoughts on that topic. Maybe a future video? Cheers from Canada bro.
@redregar2522
@redregar2522 Жыл бұрын
For the 4 girls example, I have the issue that the face of the first image is always messed up (the rest of the images are fine). Anyone have an idea, or the same issue?
@OlivioSarikas
@OlivioSarikas Жыл бұрын
might be because you render it low res. If you upscale it, it should be fine. Or try more steps on the first image, or try a loop on the first image to render it twice
@benjamininkorea7016
@benjamininkorea7016 Жыл бұрын
I have a question: in A1111, I can inpaint "only masked". I like this because I can inpaint on a huge image (4K) and get a small detail added without it exploding my GPU. Can you think of any way to do this in ComfyUI?
@OlivioSarikas
@OlivioSarikas Жыл бұрын
I'm not sure if comfyui has "mask-only" inpainting yet.
@Max-sq4li
@Max-sq4li Жыл бұрын
You can do it in Auto1111 with the "only masked" feature.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
@@Max-sq4li That's what he said, but the question was how to do it in ComfyUI.
@benjamininkorea7016
@benjamininkorea7016 Жыл бұрын
@@OlivioSarikas Since I watched this video and started using ComfyUI more, I figured you'd have to make the mask in Photoshop (or something) anyway, so it probably wouldn't be worth it until they integrate a UI mask painter. So I tried working with a 4K image and using the slice tool in Photoshop instead of a mask, just exporting the exact section I want to work on. Then I can inpaint what I want, but with the full benefit of the entire render area. Working on just a face at 1024x1024 makes things look amazing, and the output image snaps perfectly back into place in Photoshop. At that resolution, I can redo each eye, or even parts of the eye, with very high accuracy.
@blisterfingers8169
@blisterfingers8169 Жыл бұрын
So there are no tools for organizing the nodes yet, I take it? xD
@OlivioSarikas
@OlivioSarikas Жыл бұрын
not sure what you mean by that. you can move them and group them if you want
@jurandfantom
@jurandfantom Жыл бұрын
Even a simple spread-out would help. I think that's what distinguishes someone who has worked with a node-based system from those without experience. But I see you have a midpoint to spread the connector, so it's not that bad.
@blisterfingers8169
@blisterfingers8169 Жыл бұрын
@@OlivioSarikas I use node systems like this a ton and I've never seen a messier example. No big deal, just makes me assume the organisation tools aren't quite there yet.
@dsamh
@dsamh Жыл бұрын
Olivio, try Bantu, or Somali, or another specific culture or people rather than referring to races by color. It gives much better results.
@MisakaMikotoLuv
@MisakaMikotoLuv Жыл бұрын
tfw you accidentally put the bad tags into the positive input
@LouisGedo
@LouisGedo Жыл бұрын
👋
@jasonl2860
@jasonl2860 Жыл бұрын
Seems like img2img; what is the difference? Thanks.
@str84wardAction
@str84wardAction Жыл бұрын
This is way too advanced to process what's going on here.
@zengrath
@zengrath Жыл бұрын
Ugh, another app that works on Nvidia or CPU only. My 7900 XTX would really like to try some of these new things.
@scriptingkata6923
@scriptingkata6923 Жыл бұрын
why new stuff should be using amd lol
@jeremykothe2847
@jeremykothe2847 Жыл бұрын
When you bought your 7900 xtx, were you aware that nvidia cards were the only ones supported by ML?
@zengrath
@zengrath Жыл бұрын
@@jeremykothe2847 Everything I read when doing my research indicated it also worked with AMD, at least on Linux, with support coming to Windows. Even if that weren't the case, I still wouldn't support Nvidia, with how they are treating their business partners the same way Apple does these days: forcing them to say only good things or withholding review samples, which they have already done over and over, not to mention the things they are doing to their manufacturing partners.

However, what I didn't know before buying the card is that the 7900 XTX doesn't work even on Linux, and it appears AMD could be months or more away from updating ROCm for RDNA3. The AMD fanboys acted like it wasn't an issue at all; I've even had long arguments with AMD users claiming I just don't know what I am doing, yet I've spoken with several developers trying to walk me through getting their stuff working on AMD on Linux, and sadly they confirm we have to wait.

At least on Windows, a program called Shark is making incredible strides in tasks like image generation and even language models, and hopefully it's only a short time before most common features work and can compete with the platforms that only support Nvidia. But it makes me wonder: if they can do it, why can't others, and why do projects keep using only protocols that support Nvidia, while anything built on more open platforms for AMD can also be used by Nvidia users with no issue? How is it fair that AMD customers can't touch products made exclusively for Nvidia, but Nvidia users can go the other way?

It's the same with Steam's Index/Oculus vs. Meta: Meta buys up all the major VR devs, kills the VR market by segmenting it to death, lied when they bought the crowd-sourced open Oculus tech by saying they would keep it open and not require Facebook accounts but did it anyway, and all the Kickstarter backers can't do anything about it now; Facebook has too much money and can do whatever they want. Yet when games come out on Steam only, people with a Meta or any other headset can come to Steam and play with no issue. It's incredibly unfair, and the only reason this keeps happening is that the public allows it; it's the public's fault when these companies end up forming monopolies and taking over the world one day, as described in most sci-fi novels.
@GyroO7
@GyroO7 Жыл бұрын
Sell it and buy an Nvidia one. AMD is useless for anything other than gaming (and even there it has poor ray tracing and no DLSS).
@zengrath
@zengrath Жыл бұрын
@@GyroO7 Not true at all. I really hate fanboys on both sides who lie; you're no different than Republicans and Democrats who fight over nonsense and constantly lie and twist facts. I have been using ray tracing with no issue, and while AMD doesn't have DLSS, it has FSR, which works very well, with FSR 3.0 coming soon that will work very similarly to DLSS. And I get to enjoy the fact that I am not part of the crowd supporting Nvidia's hateful practices. I was an Nvidia fan for about 20 years until what they have done in just the past few years; clearly you haven't been keeping up. Let me guess, you probably also love Facebook/Meta and Apple products too - companies that tell you how to think and how to use their products, and if you don't like it they call you stupid, and any reviewers who don't praise them like gods get put on their ban lists.
@sdjkwoo
@sdjkwoo Жыл бұрын
24,000 STEPS? MY PC STARTED FLYING. IS THAT NORMAL??
@kallamamran
@kallamamran Жыл бұрын
More like UnComfyUI ;)
@michaelphilps
@michaelphilps Жыл бұрын
Yes indeed!
@spider853
@spider853 Жыл бұрын
I don't really understand how LatentComposite works without a mask.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
It combines the two noises, and since they are both still just noise, they can melt into a final image in the later render. However, because the noise of your character has a different background, you will often see that the background around the character differs a bit from the background of the rest of the image.
@spider853
@spider853 Жыл бұрын
@@OlivioSarikas I see, it's kind of an average; it would benefit from a mask.
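For readers following this thread: conceptually, compositing latents just pastes (or blends) one latent tensor into a region of another before the sampler denoises the combined noise. A rough PyTorch illustration of that idea, not the actual code of ComfyUI's LatentComposite node:

```python
# Paste one latent onto another at an (x, y) offset given in latent units
# (1/8 of pixel coordinates for Stable Diffusion).
import torch

background = torch.randn(1, 4, 96, 64)   # e.g. a 512x768 portrait -> latent of shape [1, 4, 96, 64]
subject = torch.randn(1, 4, 48, 32)      # smaller latent to place on top

x, y = 16, 24
_, _, h, w = subject.shape

composited = background.clone()
composited[:, :, y:y + h, x:x + w] = subject   # hard paste; a mask would blend the edges
```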
@Noum77
@Noum77 Жыл бұрын
This is too complicated
@OlivioSarikas
@OlivioSarikas Жыл бұрын
You take the things I showed in the video and simplify them. Start by just rendering a simple AI image with a prompt, and then you can add things to that.
@akratlapidus2390
@akratlapidus2390 Жыл бұрын
In Midjourney you won't be able to show a black woman, because the word "black" is banned. It's one of the reasons why I pay so much attention to your advice about Stable Diffusion. Thanks!
@criminaldesires
@criminaldesires Жыл бұрын
That's not entirely true. I use it all the time.
@OlivioSarikas
@OlivioSarikas Жыл бұрын
Stop spreading misinformation. I just tried "black woman --v 5" and it worked perfectly.
@Kaiya134
@Kaiya134 Жыл бұрын
No disrespect to your work, but the concept itself is just sickening. These pictures are basically a window into the future of webcam filters. Our life is rapidly becoming a digital shitshow.
@Ахриманстоппер
@Ахриманстоппер Жыл бұрын
Can you download another picture, not connected to cyberpunk - let's say a "fatima diame" photo - and make a kind of 50% correlation, so your character changes in some rational way: becoming a black woman athlete with a fantastic body, but in a cyberpunk style?
@Ахриманстоппер
@Ахриманстоппер Жыл бұрын
They can do all these changes to videos too, right? Changing faces, emotions, etc. 😂 In the CIA etc., as media wars.
@Ахриманстоппер
@Ахриманстоппер Жыл бұрын
That is why Putin always looks so unhappy on YouTube 😂
@nikolesfrances1532
@nikolesfrances1532 11 ай бұрын
What's your Discord?