I wish every AI tutorial were as detailed and full of examples as this one, thanks!
@controlaltai 1 year ago
Thank you!!! Glad it was helpful! And more to come.
@fernando749845 7 months ago
Excellent tutorial!!
@ultimategolfarchives4746 1 year ago
Detailed theory + examples and uses = viewers happy 😁 Your tutorials are gold. Thanks 🙏
@controlaltai 1 year ago
Thank you so much. Appreciate it, you just made my day. 🙌
@ultimategolfarchives4746 1 year ago
@@controlaltai I encounter an issue exclusively when using the thinkdiffusionxl model (currently my preferred choice). The inconsistency arises when I repeat the face correction process from 12:17 to 14:09, and the same problem appears to occur during the inpainting process as well. However, when I run the same generations from your video and simply switch models (I tried "sd_xl_base_1.0.safetensors [31e35c80fc]" and "realvisxlV20_v20Bakedvae.safetensors [74dda471cc]"), the results are flawless: the colors in the corrected area seamlessly match the rest of the image. The VAE is set to Automatic. Do you have any idea what could be the problem with that model?
@ultimategolfarchives4746 1 year ago
(In case it might assist others) I've finally found it! I downloaded the "stabilityai/sdxl-vae" and it resolved the issue! Cheers ✌
@controlaltai 1 year ago
Thanks, I posted a new pinned comment with the solution and mentioned your name.
@ultimategolfarchives4746 1 year ago
@@controlaltai My pleasure 👍
@DennisFrancispublishing 8 months ago
My dude, you're doing a great job with the explanations. I love the voice you chose for the tutorials. Keep doing this because it's perfect.
@Noobinski 5 months ago
That was extremely helpful indeed. Thank you for showcasing how to do it. Not many do (or even know what they're talking about).
@controlaltai 5 months ago
Thank you!!
@EasyAINow 11 months ago
Your tutorials are among the best I've found. I hope you keep them coming!
@controlaltai 11 months ago
Thank you!
@KrnelPanc 1 year ago
By far the best video on ADetailer around
@controlaltai 1 year ago
Thank you!
@divye.ruhela 5 months ago
Wow, the details are unreal! Trying this for sure and reporting back!
@wholeness 1 year ago
Best A1111 tutorials on the web!
@controlaltai 1 year ago
Thank you! Kind words 🙏
@KrnelPanc 1 year ago
I was looking for ADetailer info, stumbled across this video, heard your voice, and was like yes!! The guy who got me started with SD!
@controlaltai 1 year ago
Awesome! Thank you!
@zomgneedaname 11 months ago
Nice work man, helped me fix faces real quick. Instant sub!
@marcus_ohreallyus 1 year ago
If you make any characters, this is a must-have extension. You can basically design a face by using the names of 2 or 3 well-known actors and weighting them to balance out the look.
@nikgrid 8 months ago
Dude, your tutorials are EXCELLENT!
@pankajroy5124 1 year ago
*Thank you for this wonderful tutorial*
@controlaltai 1 year ago
Always welcome and thank you for the support. 🙏
@AdamKusumaPutra 9 months ago
THANK YOU! You are a great teacher
@kenz7788 9 months ago
Very helpful! Keep it coming!
@felipe.richter 1 year ago
I really liked your tutorial 😉
@controlaltai 1 year ago
Thanks to @ultimategolfarchives4746 for finding the solution: if you face issues like a color mismatch or a whitish box on the face when inpainting or using ADetailer with the Think Diffusion XL checkpoint, please download the SDXL VAE (sdxl_vae.safetensors) from huggingface.co/stabilityai/sdxl-vae/tree/main and set the VAE to sdxl_vae.safetensors instead of Automatic. It should resolve the issue.
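For anyone who prefers to script the download, a minimal sketch using the huggingface_hub Python package (the package and the models/VAE destination folder are assumptions about a typical A1111 install; downloading the file in the browser and dropping it into that folder works just as well):

```python
# Minimal sketch (assumes the huggingface_hub package and a typical A1111 folder layout).
from huggingface_hub import hf_hub_download

# Fetch sdxl_vae.safetensors from the repo linked above into the WebUI's VAE folder.
vae_path = hf_hub_download(
    repo_id="stabilityai/sdxl-vae",
    filename="sdxl_vae.safetensors",
    local_dir="stable-diffusion-webui/models/VAE",  # adjust to your install path
)
print("Saved VAE to:", vae_path)
```

Then pick sdxl_vae.safetensors instead of Automatic in the VAE setting, as described above.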
@arrssenne 8 months ago
You're the best. With your tutorials, my images made in AI are 100% better.
@odessaodesa5307 7 months ago
Thanks! 👍❤
@hankkingsley9183 8 months ago
Great stuff. I only wish I had a mute button for the music looping in the background.
@imoon3d 2 months ago
I used ADetailer but the hand is still faulty.
@valorantacemiyimben 6 months ago
Hello, thanks a lot. Can we add makeup to a photo we upload ourselves? How can we do that?
@controlaltai 6 months ago
Yes, load it in image-to-image instead of text-to-image.
@twilightfilms9436 1 year ago
I assume this works also with img2img and batch sequencing, but does it keep consistency?
@controlaltai 1 year ago
It depends on the source. If the source images are proper, using a very low denoising strength, the same checkpoint, and the same settings can give consistent images across batches. The main advantage here is batch processing.
@sup2069 5 months ago
Mine only has 2 tabs. How do I enable the 3rd tab? It's missing.
@CerisCinderwolf1 4 months ago
Loving the guide so far, but I'm running into a problem and have a question: you used a ControlNet for the clothing, but during the installation you didn't talk about enabling/installing ControlNet with ADetailer. Currently that dropdown doesn't open / is greyed out, as if I have no ControlNets. Where might I go hunting if something's missing? This is a fresh install as of watching your video :)
@controlaltai 4 months ago
Thanks. The interface might be slightly different since the video is old. However, here is everything you need to know about ControlNet with A1111, the basics at least: kzbin.info/www/bejne/gJWmqnqBi8x7gas
@CerisCinderwolf1 4 months ago
@@controlaltai I appreciate your reply and the link! I'm already running into one of the most common issues with ControlNet installation (the dreaded "WinError 5: Access is denied"). I'll post over there though, since it's a video specific to ControlNet! :)
@artymusoke1352 7 months ago
Thanks for the tutorial. I've been using ADetailer with no problem for a few months now. I recently made my first LoRA for braided hair; however, ADetailer adds something like an opaque square around the character's face in the final result. Anyone know what could be causing this?
@controlaltai 7 months ago
That happens on certain checkpoints; try using a different checkpoint.
@Epulsenow 6 months ago
Great video, but the models are pickle files and show a warning on Hugging Face! Any suggestions?
@controlaltai 6 months ago
There was no warning at the time of making the video. I don't have any other source; the one listed in ComfyUI also redirects to the same link.
@yyarcc 19 days ago
Can we use this extension for building details, or is it just for human details?
@controlaltai 19 days ago
Human. For building details, use a LoRA called detailer.
@Marshy-- 1 year ago
Can you use ADetailer after ReActor? I can't seem to find a way to edit the order of things using Auto1111. I presume I need to go to Comfy for this level of control. It would be ideal to do it both before and after.
@controlaltai 1 year ago
I have not tried Reactor yet at that level. I have to give it a try in A1111.
@controlaltai 1 year ago
Hi, I checked again; it's better to do it in Comfy. I'd have to research and test further to find a way, but it's easy in Comfy and more flexible. I suggest you create a dedicated workflow for Comfy. If you want me to create a tutorial for it, let me know the exact details; I will use that as one example and create a workflow tutorial, the way we do it on the channel.
@marshy.. 1 year ago
That would be super helpful. I've been struggling with Comfy. Regarding ReActor: upscaling faces with NMKD 8x Faces has given me the best results so far, at 4x-8x upscale. GFPGAN at 0.5 visibility was also the best option. I'd love to be able to run ADetailer after the face swap to fix the eyes, cheeks, etc. Do you have a workflow I can have a look at?
@controlaltai 1 year ago
@marshy.. As of now I don't have any workflow; I have to research and create something. I am getting the appropriate permissions from the company to allow me to make the video, and after I get them I will research and make one in either A1111 or Comfy, whatever is possible. I will try to do it for the next video itself, so please stay tuned. Thank you.
@marshy.. 1 year ago
Excellent, thanks for your work. Appreciate it.
@user-ct8my8rv9c 4 months ago
Your Hugging Face link displays a red box warning that says 4 of the PT files were deemed unsafe. What do you make of this? Are they safe?
@controlaltai 4 months ago
Contact the dev on GitHub or Hugging Face to find out why he has not resolved the issue. As I stated earlier in the comments, some files were marked unsafe months after the video went live, and I have no idea why. In ComfyUI, when we download the models, they redirect to this same Hugging Face link, so I don't have any other source for them. Basically, you can check the readme here: huggingface.co/Bingsu/adetailer/commit/b0a075fd35454c86bb453a1ca06b29ffee704c20 He has updated the readme, which caused a false flag, but the unsafe mark has not yet been removed from Hugging Face.
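For anyone who wants to peek at what those flagged .pt files actually reference before loading them, a minimal Python sketch (not the extension author's method and not a security guarantee; it only lists the class/function names the pickle would import, assuming the file is a standard torch zip archive):

```python
# Minimal sketch: list the globals a pickle-based .pt checkpoint references,
# without unpickling it (and therefore without executing anything).
import pickletools
import zipfile

PT_FILE = "deepfashion2_yolov8s-seg.pt"  # path to the downloaded model (placeholder)

with zipfile.ZipFile(PT_FILE) as zf:
    # torch.save() archives normally keep the pickle stream in a member ending in data.pkl.
    pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
    pickle_bytes = zf.read(pkl_name)

# GLOBAL / STACK_GLOBAL lines in the disassembly show which names would be imported.
pickletools.dis(pickle_bytes)
```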
@via_kole 11 months ago
I can't seem to find the eyes-only yolov8 model. Is there a link you could provide? Or is it in the link for the extra models? If so, which one is it, as there are no descriptions?
@controlaltai 11 months ago
Eyes are mediapipe_face_mesh_eyes_only. It comes with the default extension installation.
@via_kole 11 months ago
@@controlaltai awesome! thank you
@Gothdir 1 year ago
I think I'm stupid; I can't find the folder where ADetailer saves the mask previews. What's the default folder for it? Also, changing clothes is a bit weird for me, and I'm not sure what the problem is. ADetailer detects the clothes correctly and ControlNet starts as expected, but the changes to the clothes are very subtle. All it does is change small details; I can't get it to change colors, let alone make completely different clothes. Everything else works great! Good tutorial! EDIT: Never mind, I didn't realize you have to save the output manually. I assumed it would get saved automatically every time.
@controlaltai 1 year ago
Hi, thank you. Changing the clothes depends on the checkpoint and the inpaint denoising strength setting. Try using a higher inpaint denoising setting. If you choose 1, it will make something totally new; if it goes beyond the boundaries, then use ControlNet as well.
@Gothdir 1 year ago
@@controlaltai I tried without ControlNet. It did produce something different, but it didn't fit, and with ControlNet there is barely any change. It feels like whatever I put into the ADetailer prompts gets significantly lower weight than the normal prompt. Even when changing the eyes there were some odd behaviors. For example, typing "red iris", the iris wouldn't change no matter the denoise setting and no matter the checkpoint (I tried several across SD1.5, SD2.1 and SDXL). Only changing the face worked more or less as intended: I typed "blue lipstick" and got blue lips and eyes and eyebrows, but at least it did somewhat what was expected.
@controlaltai 1 year ago
@@Gothdir Hi, try the clothes with a different checkpoint. Use ControlNet and scale the denoise strength up to 1. For ControlNet, try the different models; they will give different results. For the eyes, try putting only colors like blue, green, etc. If results are not coming, try another checkpoint. Mostly here you have to go with photorealism checkpoints like Think Diffusion, Realistic Vision, RealVis XL, Juggernaut XL, etc. Are you using the eyes mesh only? Don't use face for changing the eyes. Let me know if this helps.
@Gothdir 1 year ago
@@controlaltai I'll give it a shot tomorrow; it's getting kinda late here. Will hit you back! Thanks for the advice in any case. :)
@Gothdir 1 year ago
@@controlaltai It took a bit of fiddling around, but I made it work with a different checkpoint and an altered prompt. Thanks again for the advice.
@johnwolf10 8 months ago
A bit of help please: when I change clothes it works, but it won't change the color of them at all.
@controlaltai 8 months ago
Try a higher denoise and change the checkpoint. It's a bit limiting.
@NgocNguyen-ze5yj 6 months ago
Any chance of ComfyUI version tutorials? Thanks so much.
@controlaltai 6 months ago
Here: ComfyUI: Face Detailer (Workflow Tutorial) kzbin.info/www/bejne/labEgGqMhNtmfKM
@NgocNguyen-ze5yj 6 months ago
@@controlaltai Could you please help with changing the face?
@NgocNguyen-ze5yj 6 months ago
@@controlaltai Taking another face to replace the original one?
@controlaltai 6 months ago
@@NgocNguyen-ze5yj I cannot make a tutorial for that due to some restrictions, but there are some on KZbin; search for a ReActor Comfy tutorial.
@Avatars3d 1 year ago
Can this be used with ComfyUI?
@controlaltai 1 year ago
Yes, but it's different; I was planning to make a video on it. I just released a Control LoRA in ComfyUI tutorial video.
@Vagabundo96 6 months ago
Where does it save the original image? 3:24
@controlaltai 6 months ago
Same output folder….
@Vagabundo96 6 months ago
@@controlaltai Never mind, I thought it would save 2 or 3 images, but there's only the output.
@controlaltai 6 months ago
@@Vagabundo96 There should be an output and an ADetailer segment overlay. If there isn't, something must have changed with the latest update; I am not sure, as I have moved on to ComfyUI. When it generates, you can still manually right-click and save any image from the UI if for some reason it is not saving.
@jjsc3334 8 months ago
deepfashion2_yolov8s is flagged as an unsafe file, can you verify?
@controlaltai 8 months ago
It's the same file used within the Comfy manager. I don't know why Hugging Face has marked it as unsafe. I have had the file on my drive for months now. However, I cannot find a new source to download it from. Best to ask the original author.
@phuongmedia 5 months ago
Can you give me your standard negative prompt?
@controlaltai 5 months ago
There is no standard negative prompt; it depends on the checkpoint. Check sample images of the checkpoint you are using on Civitai. For some checkpoints, like SD3, we start with no negative and add to it when required.
@makadi86 7 months ago
Does this work with SD Forge?
@controlaltai 7 months ago
I think so, but I haven't tested it on Forge.
@spiderchannel1582 7 months ago
Is ADetailer safe??
@controlaltai 7 months ago
Yeah, it's safe. For some reason the models got flagged on Hugging Face some months after the video release; I don't know why. Even today ComfyUI redirects to the same models. Personally, I have not faced any issues.
@AutumnRed 8 months ago
My issue with these upscale methods is that they always change things they should not. Faces never look the same, and it's annoying when you want to upscale old family photos just to end up with a picture of some strangers who resemble your relatives instead of just an upscaled photo of them.
@controlaltai 8 months ago
There are techniques to upscale photos without changing the faces. Unfortunately, I do that in ComfyUI and it's a bit VRAM-intensive. New SUPIR tech has arrived which will purely upscale without changing faces; I have yet to cover a SUPIR tutorial for ComfyUI.
@Chonky_Nerd 11 months ago
When I do this with the eyes, my prompt says light blue eyes, but if I put, say, dark makeup on, it changes the eye color too :(
@controlaltai 11 months ago
There is a way to do this. If you use face, it will take the whole face; the eyes mesh only takes the eyes. So the first ADetailer should be the face, where you put the makeup on, and the second should be eyes only. This will preserve the makeup.
@Chonky_Nerd 11 months ago
@@controlaltai Thanks for the reply! I have set the face in ADetailer (added the prompt "dark makeup") and set the second detailer to the eye mesh only (I added "light blue eyes" in the prompt here because it was still taking them away), but they are not as bright as they were before I started.
@controlaltai 11 months ago
Some solutions and tricks which I use: one solution is to add a 3rd ADetailer, eyes only, select light blue eyes, and this time use a separate width and height, double or 1.5x, for the eyes only; keep the same "light blue eyes" prompt. Another solution is to try (((light)))((blue eyes)) with only 2 ADetailers; reduce the brackets if it is too much. Sometimes the problem is with the checkpoint, so for the eyes only, use a separate checkpoint and try the above. You can even add an eye-detail LoRA or a detail LoRA only for the eyes in the eyes prompt; just search on Civitai, there are plenty for SDXL and SD 1.5. I hope this helps. Let me know how it goes.
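For anyone driving this through the WebUI API rather than the UI, a rough sketch of stacking two ADetailer passes the same way (face + makeup first, eyes only second). The alwayson_scripts layout and the ad_* field names are assumptions based on my reading of the ADetailer extension's API notes and may differ between versions, so check the extension's wiki before relying on them:

```python
# Rough sketch (unverified against a specific ADetailer version): two stacked
# ADetailer passes via the A1111 txt2img API - face/makeup first, eyes only second.
import requests

payload = {
    "prompt": "portrait of a woman, dark makeup",
    "steps": 30,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                True,   # assumed leading flag: enable ADetailer
                False,  # assumed second flag: do not skip the img2img pass
                {"ad_model": "face_yolov8n.pt", "ad_prompt": "dark makeup"},
                {"ad_model": "mediapipe_face_mesh_eyes_only", "ad_prompt": "(light blue eyes:1.3)"},
            ]
        }
    },
}

# Assumes the WebUI was launched with --api and is listening on the default port.
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
print(len(r.json()["images"]), "image(s) returned")
```

The (light blue eyes:1.3) weighting is just the explicit form of the bracket trick mentioned above.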
@Chonky_Nerd 11 months ago
Again, thank you so much for the detailed reply, I'll give it a go tomorrow, and I'll let you know for sure 💜🤘🏻
@procrastonationforever5521 11 months ago
Nah, I will never understand anime. It is just primitive, cheap, and stupid art, IMHO! Those flat faces, similar eyes and mouths, copy-pasted emotions, lack of individuality... Cheap and bland; anime is just a no-go for me.
@procrastonationforever5521 11 months ago
I can't express enough how much I hate the part of a tool tutorial that describes how to install the tool... This world is doomed for sure...
@fretts8888 10 months ago
If only there were a way to skip forward, eh?
@zootjitsu6767 8 months ago
Worst is in videos where they are talking about something and they decide to give the life story of the thing. You watch a video about a car and the first thing the guy does is "the car is an automobile invented in 1882 by John Car in…"
@EveryBeardHasAStory 6 months ago
Think I might break into a white's house and steal their pc. My generation takes ages and given my entire life is spent in front of a computer, I find it illogical that I shouldn't have one that can run anything worthy of note.
@controlaltai 6 months ago
Just upgrade your GPU; that should be fine. These new AI tools are very GPU-hungry.
@virtualinfluencer 10 months ago
You do quite the job here. Can you comment on the 5 files "person_yolov8n-seg.pt, deepfashion2_yolov8s-seg.pt, person_yolov8s-seg.pt, person_yolov8m-seg.pt, clothing_poor_yolov8s-seg.pt" that have been reported unsafe, and the workaround for this?
@controlaltai 10 months ago
Thank you for bringing this to my notice. I think they are false positives, but I am not sure. I double-checked, and ComfyUI also redirects to the same link given on Hugging Face to download the models. The models are by a company called Ultralytics: github.com/ultralytics/ultralytics The main model code is here: github.com/ultralytics/assets/releases/ (expand Assets). However, those will not work with ADetailer. You can ask the developer directly what the issue is here: github.com/Bing-su/adetailer/discussions
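If you want to sanity-check one of those detection models outside the WebUI, a minimal sketch with Ultralytics' own ultralytics Python package (the file and image paths are placeholders; note that loading a .pt still unpickles it, so only do this with files you trust):

```python
# Minimal sketch: load one of the ADetailer YOLOv8 segmentation models directly with
# the ultralytics package and confirm it detects something in a local test image.
from ultralytics import YOLO

model = YOLO("person_yolov8n-seg.pt")                   # placeholder path to the model
results = model.predict("test_portrait.png", conf=0.3)  # placeholder test image

for r in results:
    # Each result carries detected boxes/masks; print class ids and confidences.
    print(r.boxes.cls.tolist(), r.boxes.conf.tolist())
```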
@lonewolf-vw9wf 8 months ago
This model has 5 files that have been marked as unsafe. View unsafe files: person_yolov8n-seg.pt, deepfashion2_yolov8s-seg.pt, person_yolov8s-seg.pt, person_yolov8m-seg.pt, clothing_poor_yolov8s-seg.pt