Great tutorial. I like your step-by-step style in constructing the workflow, and your explanations. Thank you!
@mattc3510 • 7 months ago
You are a pro. I love your videos thank you
@Im_JustKev • 7 months ago
Great work. I hope people will start noticing your channel. 😀
@GraftingRayman • 7 months ago
I hope so too!
@湯鎮隆 • 7 months ago
Nice tutorial! I've learned a lot!
@adriantang5811 • 7 months ago
Very useful tutorial, thank you so much!
@GraftingRayman • 7 months ago
Glad it was helpful!
@ruralsavior7218 • a day ago
Just can't get the IPAdapter InsightFace loader to load the file. Followed the IPAdapter file locations, but it still only gives me Buffalo and Antelope. Frustrating.
@SriLanka-i2g • 3 months ago
I get this error all the time: "ERROR lora diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400". Can you tell me please how to fix this?
@baseerfarooqui5897 • 6 months ago
great for learning
@deathfxu • 4 months ago
Why did you put 3 parentheses around part of the prompt? What does that do?
@PierreHenriBessou • 7 months ago
Very nice video. Useful tips. Still got a few questions, if you don't mind. Is there any reason why you use the nodes [ loraLoader Model Only + IPAdapter Model Loader + clipVision + insightFace ] instead of the IPAdapter Unified loader FaceID? For the face reference, I see you use a batch load, but do not use a "Prep Image For ClipVision". I've seen this node used in many workflows. But maybe you did prepare manually your reference images. Did you make something special to your dataset, like resizing them or cropping them in the first place? Anyway, I used to run a second Ksampler at 0.5 denoising. Didn't think about running a third one. I'll try that out, nice idea. Thanks again, good job.
@GraftingRayman • 7 months ago
The unified loader sometimes does not work for me, not sure why; tried a few different workflows and they seem to crap out. Swapped for the standard version, which works a treat. I use the Crop Face node prior to this workflow to save the faces only.
@n3bie • 7 months ago
I've been trying to use this with SDXL and I'm finding the images don't seem to be coming out with faces that look much like my reference pictures. Had 3 quick questions: 1. When you said SDXL wasn't working well for you in the video, did you mean just in general or in the case of this specific workflow in that the faces didn't seem to come out? 2: Do the reference images need to be close ups of the face, or will this work with full body reference photos as well? They can be jpegs right? 3: If I'm only making an image of a single person, it should be okay to use a mask that is just a transparent png image without the black part right? EDIT: Actually, I think I have it working better now, just needed to adjust some of the models I was using... think I had the wrong IPAdapter model selected hehehe eeer.. I'm not the smartest. Thanks again!
@GraftingRayman • 7 months ago
Hi @n3bie, reference images work best if they are head shots. Not been having much luck with multi-person generations with SDXL; it works fine for a single person, but as soon as I get a 2nd or 3rd person involved it craps out. That is using InstantID, though; it works fine with FaceID.
@n3bie • 7 months ago
@@GraftingRayman Ah I see, I actually only built a single-character workflow, but it's nice to have that info for when I try to expand it. I have to say though, this workflow as a single-character generator is working fantastically for me using SDXL. I increased the reference images to about 20; maybe a third of those are close-up portraits as you suggested, but after I was having trouble generating anything but close-ups of the face, I threw a bunch more in the folder including full-body poses, and I'm having pretty good results generating a lot of different poses that all look like the reference model. If anybody comes across this, I'm using Darker Tanned Skin off of CivitAI for an SDXL-compatible LoRA, and it's working quite nicely. Thanks again Rayman!
@DaCashRap • 6 months ago
This is a very useful tutorial, and you give the impression of someone who's proficient in this field of work. As the author of the "GR Prompt Selector" node included in the workflow, could you provide instructions on how to get it running? I've seen multiple people having problems with it down in the comments. Cloning the repo into the "custom_nodes" folder obviously isn't enough, for some reason.
@GraftingRayman • 6 months ago
I'm not really a coder. From what I noticed, the requirements were missing for some users; I added a requirements.txt file to the repo not too long ago. It can be installed by running "pip install -r requirements.txt" in the node's root folder; this will fix most if not all issues.
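The requirements install above can also be driven from a script; a minimal sketch (the requirements path is an assumption, adjust it to wherever the node repo was cloned) that routes pip through whichever interpreter runs the script, so packages land in the environment that will actually import them:

```python
import subprocess
import sys

def run_pip(*args: str) -> int:
    """Run pip through the interpreter executing this script, so packages
    are installed into the same environment that will import them."""
    return subprocess.call([sys.executable, "-m", "pip", *args])

if __name__ == "__main__":
    # Hypothetical path: adjust to wherever the node repo was cloned.
    run_pip("install", "-r",
            r"ComfyUI\custom_nodes\ComfyUI_GraftingRayman\requirements.txt")
```

Launched with the portable build's .\python_embeded\python.exe this installs into the embedded environment; launched with a system Python it installs there instead, which is exactly the mix-up described further down this thread.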
@DaCashRap • 6 months ago
@@GraftingRayman I've done that already. ComfyUI is still unable to load the nodes. In one of your replies to a different comment you talked about having a "clip" folder in "python_embeded". How can that be achieved?
@GraftingRayman • 6 months ago
A lot of people have both a system Python and the Python bundled with the portable version of ComfyUI. When you run "pip install clip" it installs into the system Python; you need to run the embedded Python to install. In your ComfyUI folder, run the following command: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git". This will install CLIP in the correct place.
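A quick way to check which interpreter is actually running and whether clip is visible to it (a stdlib-only sketch; "python_embeded" is ComfyUI's own spelling of the portable folder):

```python
import importlib.util
import sys

def module_visible(name: str) -> bool:
    """Return True if `name` is importable by the *current* interpreter."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    # For the portable build this path should point inside
    # ComfyUI_windows_portable\python_embeded, not at a system Python.
    print("interpreter:", sys.executable)
    if not module_visible("clip"):
        print("clip is NOT installed for this interpreter; install it with:")
        print(r"  .\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git")
```

Running it once with the system Python and once with .\python_embeded\python.exe makes it obvious which environment the package actually landed in.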
@DaCashRap • 6 months ago
@@GraftingRayman That's it! It works now, thanks for your help.
@UTA999 • 6 months ago
Do you have a link to download the mask files, or do we need to create them ourselves? On your GitHub I noticed you have a multi-mask create node, but I think it doesn't create the masked part as transparent? Also, how do you get the links to the Anything Everywhere node to light up as you do in the video? Is it a setting somewhere once the node is installed? Thanks!
@GraftingRayman • 6 months ago
You can use Multi Mask Create; its output is transparent. The Anything Everywhere node has settings in the ComfyUI settings, titled "Anything Everywhere? Animate UE links": select Both as the option, and further below change "Anything Everywhere show links" to Selected and Mouseover node.
@UTA999 • 6 months ago
@@GraftingRayman Thanks for the reply. I'll try generating the masks again - I just had the Multimask Create linked to Mask Previews then right clicked and saved the images. They just looked different to your video (they were black and white strips rather than black and grey) and the flow didn't seem to work for me. Not sure if it was because I only have 1 picture and so swapped the Load Image Batch for a Load Image node. I guess you could also possibly have the Multimask Create in this flow directly generating the masked images? I'll give it a try. I'll also take a look in the settings as advised. Cheers.
@GraftingRayman • 6 months ago
@@UTA999 I use the Multi Mask Create node in my updated workflow; it works just the same. When you save the image it does not keep the transparency, but when used inside ComfyUI it does.
@UTA999 • 6 months ago
@@GraftingRayman Thanks for the confirmation. After a bit of playing around I now have all the issues I was experiencing sorted.
@sergiorobayo9439 • 5 months ago
where did you get that AnimateEveryone/diffusion_pytorch_model.bin model from?
@@genso1540 Looks like someone may be licensing this now for a site or paid-for service; you will need to find someone who has it saved.
@GraftingRayman • 3 months ago
@@genso1540 Looks like that has been removed by the author
@zraieee • 7 months ago
Well done! Please, how can I make the glowing laser-light effect show on the nodes?
@GraftingRayman • 7 months ago
That's done with the Anything Everywhere node.
@АллаКлевер • 7 months ago
"When loading the graph, the following node types were not found: GR Prompt Selector. Nodes that have failed to load will show as red on the graph." It's inside ComfyUI\custom_nodes but doesn't work. Help me please.
@GraftingRayman • 7 months ago
Run the following command in your custom_nodes folder, or use ComfyUI Manager: "git clone https://github.com/GraftingRayman/ComfyUI_GraftingRayman"
@АллаКлевер • 7 months ago
@@GraftingRayman "It's inside ComfyUI\custom_nodes but doesn't work" - it is installed, but it "Failed to load".
@АллаКлевер • 7 months ago
@@GraftingRayman Maybe it's the same problem:
```
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
2024-05-20 16:46:37.3574522 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps
Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************

2024-05-20 16:46:38.4698587 [E:onnxruntime:Default, provider_bridge_ort.cc:1534 onnxruntime::TryGetProviderInfo_TensorRT] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Ai\SD\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:456 onnxruntime::python::RegisterTensorRTPluginsAsCustomOps
Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
```
@АллаКлевер • 7 months ago
I don't have any D:\a folder at all, by the way.
@GraftingRayman • 7 months ago
Delete the folder for the node and reinstall it.
@jinxing-xv3py • 6 months ago
you are amazing~
@HeangBorin • 7 months ago
Please help, I got the message (IMPORT FAILED) for GR Prompt Selector in the manager.
@GraftingRayman • 7 months ago
what is the full error?
@HeangBorin • 7 months ago
Dear Sir @@GraftingRayman, here is the log:
```
File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\__init__.py", line 1, in <module>
    from .GRnodes import GRPromptSelector, GRImageResize, GRMaskResize, GRMaskCreate, GRMultiMaskCreate, GRImageSize, GRTileImage, GRPromptSelectorMulti, GRTileFlipImage, GRMaskCreateRandom, GRStackImage, GRResizeImageMethods, GRImageDetailsDisplayer, GRImageDetailsSave
File "C:\Users\ccc\OneDrive\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\GRnodes.py", line 11, in <module>
    from clip import tokenize, model
ModuleNotFoundError: No module named 'clip'
```
@GraftingRayman • 7 months ago
you may try "pip install clip", that should install the clip package
@Teardropbrut • 7 months ago
@@GraftingRayman I have both Auto1111 and ComfyUI portable installed. The clip package got copied to the Pinokio Miniconda folder, from where I copied it into Comfy's Lib. Still wasn't able to load the node: import failed via the manager, and the same result with git clone; the folder content is the same. I also seem to have onnx and onnxruntime installed.
@GraftingRayman • 7 months ago
Were both the clip and the clip-info folders copied to ComfyUI_windows_portable\python_embeded\Lib\site-packages?
@thefransvan5966 • 6 months ago
It seems that your custom nodes extension, as well as the Inspire pack, doesn't import for me. I installed the CLIP dependencies but it didn't help.
@GraftingRayman • 6 months ago
What error do you get?
@thefransvan5966 • 6 months ago
@@GraftingRayman Regarding both the Inspire pack and your extension, it seems to have to do with a "c2" module that was not found. Looking a bit into the issue, it seems I may have just Python v3 installed; am I required to use a 2.x.x version of Python instead?
@GraftingRayman • 6 months ago
Python v3 is fine. I am not aware of a "c2" module being required; I will look into it.
@thefransvan5966 • 6 months ago
@@GraftingRayman I'm sorry, I wrote it wrong; I meant the CV2 module is not found when importing both of those extensions. Apologies.
@GraftingRayman • 6 months ago
@@thefransvan5966 Aaah, the cv2 module is provided by the opencv-python package. You can simply run "pip install opencv-python" if you have system Python, or if you are using ComfyUI portable you can run, in your ComfyUI folder, ".\python_embeded\python.exe -m pip install opencv-python". That will resolve the issue with cv2.
@Teardropbrut • 7 months ago
Question: can I add a FaceDetailer after the KSampler so that the different faces don't get averaged to be the same?
@GraftingRayman • 7 months ago
Yes you can; I have done it myself, but results show the KSampler does a good job as it is.
@lacasuela • 4 months ago
I got this error: "IPAdapterInsightFaceLoader: No module named 'insightface'". I have the IPAdapter InsightFace loader (plus).
@r1nnk • 6 months ago
Can't find it... how did you make this glowing effect on the routes?
@GraftingRayman • 6 months ago
That is the Anything Everywhere node
@jd38 • 6 months ago
Hi, the workflow is connected, but the result can't get the face the same as the source. Any solution? The only node I changed is the clip text prompt, to the default one.
@GraftingRayman • 6 months ago
Can you send me a screenshot on discord or github?
@frustasistumbleguys4900 • 7 months ago
I love it, but how do I make the three faces look the same after upscaling? Right now they still look different.
@GraftingRayman • 7 months ago
Are you running them through a ksampler with lower noise after the upscale?
@webrest • 6 months ago
"Value not in list: vae_name: 'AnimateEveryone\diffusion_pytorch_model.bin' not in []". Getting the above error; not able to find this VAE.
@GraftingRayman • 6 months ago
If you have put the model in a different folder, you will need to change it. I manually copied mine into the checkpoints\animateanyone folder.
@MaghrabyANO • 5 months ago
Hey, how do I get the mask images? Should they just be black and white areas made in Paint?
@GraftingRayman • 5 months ago
You can use the GR Mask Create node. If you want to use mask files instead, they are transparent-and-black PNG files.
@MaghrabyANO • 5 months ago
@@GraftingRayman dumb question... how can I make the white area transparent in paint?
@MaghrabyANO • 5 months ago
@@GraftingRayman Also what is the VAE you used?
@MaghrabyANO • 5 months ago
@@GraftingRayman Also, one more thing I need help with: I managed to do the mask, but the IPAdapter doesn't reference the faces at all, even though I'm using the default settings in the workflow.
@GraftingRayman • 5 months ago
You can use the VAE that is built into the Juggernaut model, if you are using that. If you have Discord, you can send me screenshots there; I can respond faster.
@mahesh001234 • 7 months ago
Getting the below error:
```
File "C:\Users\Mahesh\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_GraftingRayman\nodes.py", line 8, in <module>
    from clip import tokenize, model
ModuleNotFoundError: No module named 'clip'
```
@GraftingRayman • 7 months ago
Run the following: pip install git+https://github.com/openai/CLIP.git
@mahesh001234 • 7 months ago
Please make a workflow for SDXL also, with InstantID for multiple people.
@GraftingRayman • 7 months ago
The results are very crappy with InstantID for multiple people, not worth the effort.
@n3bie • 7 months ago
Awesome video sir. +1 Sub.
@aeit999 • 7 months ago
Subbed, cool help, thanks
@GraftingRayman • 7 months ago
Awesome, thank you!
@badterry • 7 months ago
damn bro. whats ur pc specs? FaceID runs so slow on my old ass machine
@GraftingRayman • 7 months ago
Haha, that speed is done by editing; it's still slow as anything.
@dennischo2140 • 7 months ago
Can you share your mask.png files with us? They would be very useful.
@GraftingRayman • 7 months ago
You can use my Mask Create node instead; check my GitHub link in the bio.
@dennischo2140 • 7 months ago
@@GraftingRayman Thank you~
@lukeovermind • 6 months ago
Hey, got a sub from me, thanks, this was great. Does adding a KSampler after SD Upscale in general improve quality?
@GraftingRayman • 6 months ago
Yes it does
@cemilhaci2 • 7 months ago
Thanks. The workflow link won't work.
@GraftingRayman • 7 months ago
If you right-click the link and save the file as a .json it will work.
@RodiZai-pk9ty • 7 months ago
@@GraftingRayman No, still not working; apparently it's a pastebin issue.
@GraftingRayman • 7 months ago
@@RodiZai-pk9ty you can download it from my github github.com/GraftingRayman/ComfyUI_GR_PromptSelector/tree/main/Workflows
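If the saved file still refuses to load, one way to tell a bad download from a broken ComfyUI install (a hypothetical helper, not part of the workflow; ComfyUI workflow files are plain JSON text):

```python
import json
from pathlib import Path

def is_valid_workflow_json(path: str) -> bool:
    """Return True if the file at `path` parses as JSON.

    ComfyUI workflows are plain JSON, so anything that fails here
    (an HTML error page saved by the browser, a truncated download)
    will also fail to load in ComfyUI.
    """
    try:
        json.loads(Path(path).read_text(encoding="utf-8"))
        return True
    except (OSError, ValueError):
        return False
```

If this returns False for the saved file, the problem is the download itself (e.g. an HTML page saved instead of the raw paste), not the ComfyUI installation.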
@procrastonationforever5521 • an hour ago
If it looks "pretty good" then you probably need a check-up at the oculist... xD
@CravingWatermelon • 7 months ago
Getting the below error:
```
Error occurred when executing VAEDecode: 'VAE' object has no attribute 'vae_dtype'
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
```
@GraftingRayman • 7 months ago
how much vram do you have on your gpu?
@jymercy5059 • 6 months ago
Please help, I got the message (IMPORT FAILED: GraftingRayman) for GR Prompt Selector in the manager as well. I used Stability Matrix.
@GraftingRayman • 6 months ago
Try running this in your ComfyUI folder: ".\python_embeded\python.exe -m pip install git+https://github.com/openai/CLIP.git"