Photoshop x Stable Diffusion x Segment Anything: Edit in real time, keep the subject

  5,780 views

Andrea Baioni


Comments: 59
@risunobushi_ai 4 months ago
IMPORTANT: the node has been updated since the version I'm running in the video, so after installing you need to downgrade it. Open the comfyui-Photoshop folder, right-click, open in terminal, and run: git checkout --force 403d4a9af1f947c95367cd40ff8ad6ae65e5df41. This will downgrade the repo to the version I'm using.
@nathanmiller2089 4 months ago
What if you are using RunDiffusion?
@risunobushi_ai 4 months ago
@nathanmiller2089 I don't think there's a way of using a remote, cloud-based solution with local software like Photoshop or Blender.
@paultsoro3104 7 months ago
Great video! Thank you for developing this workflow. I followed the steps and it works great! Thanks for sharing!
@ppbroAI 7 months ago
Great video, ty for the effort you put into this. 👍
@AriVerzosa 7 months ago
Sub! Enjoyed the detailed explanation starting from scratch. Keep up the good work!
@risunobushi_ai 7 months ago
Thank you! I try not to leave anyone behind, so explaining everything takes time, but I think it pays off in the end.
@elan4912 5 months ago
It's a detailed video. Thanks a lot!!
@ChloeLollyPops 7 months ago
This is amazing teaching, thank you!
@JavierCamacho 7 months ago
Thanks!!!! I appreciate the effort you put into this video after I asked about this. God bless you!!! I'll try it and place the watch on some AI female models.
@risunobushi_ai 7 months ago
I don't touch on this in the video, but if you want to keep two subjects you can duplicate the SAM nodes and then blend the two images and masks together, so you keep both a person and a watch, for example.
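Conceptually it boils down to taking the union of the two masks and compositing by it. Here's a minimal Python sketch of the idea; the function name and tensor layout are just for illustration, this isn't a node from the workflow:

```python
import torch

def keep_two_subjects(person_mask: torch.Tensor,
                      watch_mask: torch.Tensor,
                      original: torch.Tensor,
                      generated: torch.Tensor) -> torch.Tensor:
    """Union two SAM masks (H, W, values in [0, 1]) and paste the original
    pixels back over the generated image wherever either subject is present."""
    combined = torch.clamp(person_mask + watch_mask, 0.0, 1.0)  # union of the two masks
    combined = combined.unsqueeze(-1)                           # broadcast over the RGB channels
    return original * combined + generated * (1.0 - combined)   # blend by mask
```

Here `original` and `generated` are assumed to be (H, W, 3) float tensors; in ComfyUI the equivalent is duplicating the SAM group and feeding both masks into the blend-by-mask step.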
@Sergiopoo 7 months ago
So glad I found this channel, really good info
@risunobushi_ai 7 months ago
Thank you for the kind words!
@Onur.Koeroglu 7 months ago
Thank you for this tutorial. Your video title matches the information in it. I like that 😅💪🏻 I have to try it. Photoshop meets ComfyUI sounds great. 🙂👍🏻
@SuperSarvikMan 4 months ago
Thank you for this great tutorial. I'm getting an error when running your workflow. It seems the IPAdapterUnifiedLoader needs CLIP Vision; it says "ClipVision model not found".
@SuperSarvikMan 4 months ago
Solved. For anyone else running into this, all the files in models/clip_vision and ipadapter have to be named the same as on Hugging Face.
@baceto-jp4fz 7 months ago
Do you think this workflow and the pop-up will work with Photopea (the open-source Photoshop alternative)? Also, is it possible to run this workflow without Photoshop at all? Great video!
@risunobushi_ai 7 months ago
I'm not well versed in Photopea, but if you want a free alternative you can look at Krita, which has an SD integration (you would need to develop a different workflow for it, or wait for one, since I'd like to make one).
@baceto-jp4fz 7 months ago
@risunobushi_ai Thanks! A video would be great!
@Mavverixx 5 months ago
Where can you find the Image Blend by Mask node? I've cloned a WAS suite repository but it failed. Is there anywhere else to get it? Many thanks!
@risunobushi_ai 5 months ago
Have you tried a "try fix" in the Manager for the WAS suite? I'm not at home right now and can't check if there are other blend-by-mask nodes (I'm sure there are, though).
@Mavverixx 5 months ago
@risunobushi_ai Many, many thanks, that solved it. However, I'm now trying to figure out how to connect Photoshop to the ComfyUI node; it seems to have been upgraded. There is no password field in the node any longer, so I'm not sure how they talk to each other.
@risunobushi_ai 5 months ago
@Mavverixx The dev told me both nodes (old and new) should be available, but I can't find the old one myself in the updated repo. Anyway, you can downgrade it by using "git checkout" followed by the commit of the repo from before it got upgraded to the new nodes.
@andree839 7 months ago
Hi, thanks for a very helpful video again. I have one problem appearing in the workflow, though. I'm using the SD1.5 checkpoint model since I don't have that much VRAM. When running Segment Anything, I get an out-of-memory error. Reading the error message, it seems the memory capacity is large enough, but the "PyTorch limit (set by user-supplied memory fraction)" is way too high. Any suggestions on how to solve this? I tried the very small "mobile_sam" model and it actually worked, but the mask was not precise at all.
@risunobushi_ai 7 months ago
Yeah, Mobile SAM is not great for the kind of result we want here. Since yours is a hardware limitation issue, if you haven't tried these yet, I would, in order:
- turn off IPAdapter completely;
- look for lightweight ControlNet depth models;
- check if other ControlNets are more compact (e.g. if lineart has a lighter model than depth; you miss out on depth but you still get the same spatial coordinates as the Photoshop picture);
- reduce the latent image size.
@andree839 7 months ago
@risunobushi_ai Thanks for the suggestions! I already tried most of them, and even if I reduce the latent image to extremely low resolutions I still get the error. Seems to be very hard to figure out. The entire message I get is like this:
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated: 2.85 GiB
Requested: 768.00 MiB
Device limit: 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
So the strange part is that the sum of the requested memory is less than the device limit.
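For reference, the "PyTorch limit (set by user-supplied memory fraction)" line in that error refers to PyTorch's per-process VRAM cap, which is effectively unlimited by default. A minimal sketch of how it can be inspected and set in the same Python environment; this is an assumption about the setup, not a fix from the video, and capping it does not add memory:

```python
import torch

# Minimal sketch: inspect VRAM usage and cap the per-process memory fraction.
# This only makes PyTorch fail earlier instead of overcommitting, which can
# make out-of-memory errors easier to reason about.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
    props = torch.cuda.get_device_properties(device)
    print(f"Total VRAM: {props.total_memory / 1024**3:.2f} GiB")
    print(f"Allocated:  {torch.cuda.memory_allocated(device) / 1024**3:.2f} GiB")
    print(f"Reserved:   {torch.cuda.memory_reserved(device) / 1024**3:.2f} GiB")

    # The "user-supplied memory fraction" mentioned in the error message.
    torch.cuda.set_per_process_memory_fraction(0.9, device)
```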
@Kafkanistan1973 7 months ago
Well done video!
@henroc481 7 months ago
THANK YOU!!!!
@jkomno5809 7 months ago
Hi! What node should replace the input from Photoshop if I want the input to be just a selected image from my local drive?
@risunobushi_ai 7 months ago
A Load Image node would be what you need.
@houseofcontent3020 6 months ago
Such a good video!
@fabiotgarcia2 7 months ago
I can't wait for NimaNzrii to update his node, to see if it works on Mac.
@risunobushi_ai 7 months ago
They did commit something to a private repo a couple of days ago, and apparently they're working on a new release, but they're not one of the most communication-oriented devs out there. There aren't even proper docs, to be fair. Still, I feel like its simplicity is unparalleled, and it's exactly what's needed in order to work alongside Photoshop in a simple and intuitive way. So here's to hoping they can push some more updates in the future.
@fabiotgarcia2 7 months ago
@risunobushi_ai Thanks for replying to me.
@jkomno5809 7 months ago
I followed the tutorial and built your workflow from scratch, but without the Photoshop node since I'm on macOS. I replaced it with a normal "Load Image" node that goes into the resizer just as the Photoshop node does. I get the error "SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)". Can you help me out with it? ComfyUI Manager doesn't say that I have missing nodes.
@risunobushi_ai 7 months ago
What are you using instead of the Photoshop node? A Load Image node? At which node does the workflow throw an error (usually the one that remains highlighted when the queue stops)?
@jkomno5809 7 months ago
@@risunobushi_ai Error occurred when executing SAMModelLoader (segment anything): Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
@jkomno5809 7 months ago
@risunobushi_ai I'm running this on an M1 Max (32-core GPU, 64 GB RAM), and I get the same SAMModelLoader error as above.
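The traceback itself points at the fix: on a machine without CUDA (such as Apple Silicon), the checkpoint has to be loaded with map_location so CUDA-saved tensors are mapped to the CPU. A minimal sketch of that pattern; the path and variable names below are examples, not the actual segment anything node code:

```python
import torch

# Example: loading a SAM checkpoint on a machine without CUDA (e.g. Apple Silicon).
# The path is illustrative; adjust it to wherever your SAM model actually lives.
checkpoint_path = "models/sams/sam_vit_h_4b8939.pth"

# Map CUDA-saved storages to the CPU when no CUDA device is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
state_dict = torch.load(checkpoint_path, map_location=device)
```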
@risunobushi_ai 7 months ago
Do you mind uploading your JSON workflow file to Pastebin or any other sharing tool? I'm going to see if I can replicate the issue on my MacBook.
@jkomno5809 7 months ago
@risunobushi_ai Yes, of course! Can I have your Discord or something?
@xColdwarr 7 months ago
This doesn't work in Google Colab for me; if there's a way to make it work, please help me out.
@risunobushi_ai 7 months ago
I'm not versed in Google Colab, so I'm not sure whether a connection between Photoshop, which acts as a local server, would be able to work with Colab. You'd need to find a way to forward Photoshop's remote connection to the Colab instance, I guess.
@thewebstylist 7 months ago
Just seeing the UI at 1:30 is why I still haven't chosen to use Stable Diffusion.
@risunobushi_ai 7 months ago
Well, I do try my best to explain why and how to use each and every node, to help anyone understand what they do and how to use them easily.
@zizhdizzabagus456 7 months ago
The only problem is that it doesn't actually blend the lighting onto the subject.
@risunobushi_ai 7 months ago
Sometimes it does, sometimes it doesn't. The solution would be applying a normal map ControlNet as well, but that slows things down a bit, and normal maps extracted from 2D pictures are not great. We can only wait for better depth maps, so that the light can be interpreted better, or we can generate more pictures so that we get coherent lighting eventually. For example, sometimes it generates close-to-perfect shadows, whereas sometimes it doesn't. At its core, it's a non-deterministic approach to post-processing, so it will always have some limitations, but going forward I expect those to become less and less impactful.
@zizhdizzabagus456 7 months ago
@risunobushi_ai Does it have to be a normal map? I thought depth and normal give pretty much the same results?
@risunobushi_ai 7 months ago
Long story short, the latest depth maps can do what normal maps would do, but since it’s all just an approximation of a 3D concept, we’re still not quite there for coherent *and* consistent lighting.
@zizhdizzabagus456 7 months ago
@risunobushi_ai Oh, you mean that if I use a real one from a 3D editor it would make a difference?
@risunobushi_ai 7 months ago
@zizhdizzabagus456 It would and it wouldn't. Normal maps derived from 2D pictures are an approximation, so they're at best a bit scuffed. Also, apparently generative models weren't supposed to be able to "understand" normals. For a more in-depth analysis, take a look here: arxiv.org/abs/2311.17137
@brunosimon3368 5 months ago
Thanks for this wonderful tutorial. I've downloaded your JSON file, but it doesn't work for me. After installing all the different files, ComfyUI blocks on the IPAdapter. I get the following message:
IPAdapter model not found.
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 515, in load_models
raise Exception("IPAdapter model not found.")
If you have any ideas, they're welcome 🙂
@risunobushi_ai 5 months ago
Have you installed all the models needed for IPAdapter to work? They're on the IPAdapter Plus GitHub: github.com/cubiq/ComfyUI_IPAdapter_plus
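If it helps, here's a rough sketch of how you could pull them down with huggingface_hub. The repo id and filenames are examples only, and the downloaded files may need to be moved or renamed to match what the loader expects, so check the ComfyUI_IPAdapter_plus README first:

```python
from huggingface_hub import hf_hub_download

# Example only: repo id and filenames are assumptions, verify them against the
# ComfyUI_IPAdapter_plus README. hf_hub_download keeps the repo's folder
# structure under local_dir, so files may need moving/renaming afterwards.
hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/ip-adapter_sd15.safetensors",
    local_dir="ComfyUI/models/ipadapter",
)
hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/image_encoder/model.safetensors",  # the CLIP Vision encoder
    local_dir="ComfyUI/models/clip_vision",
)
```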
@brunosimon3368 4 months ago
@risunobushi_ai Thank you for your answer. In the meantime, I've found a way to work around this problem: my Photoshop link doesn't work, but a Load Image node works instead. Anyway, I have an issue I can't find any solution for: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320). Do you have any idea? Thanks in advance for your time and your patience.
@brunosimon3368 4 months ago
Never mind!!!! I redid the complete installation from scratch and it now works :-) Thanks a lot for your work.