It's a good result, my friend. I hope it works with pony models.
@MarvelSanya 8 ай бұрын
Why is it that every time I follow the video, instead of the whole character I describe in the prompt, I only get pieces of it? It's as if the character doesn't fit into the area I selected in inpaint. On top of that, the background behind the character doesn't match the original image...
@AIchemywithXerophayze-jt1gg 8 ай бұрын
My guess is that you're not using an inpainting model. When you're using a regular model, it sometimes has a very hard time matching the surrounding environment, as well as centering the subject and blending it in.
@MarvelSanya8 ай бұрын
@@AIchemywithXerophayze-jt1gg you are right, I tried using regular models.
@Shabazza84 Жыл бұрын
Just a little tip for people: when you have a nice 1-point perspective shot like this, it's super easy to get the size of the main person in the foreground right. Take the head/eye level of the other people in the background, put an (imagined) horizontal line at their eye level, and then paint the mask for the foreground character so that its head/eyes roughly end up on that imagined horizontal line. The character will then have exactly the right height relative to the other people in perspective. You can of course make that person smaller or taller from there, but this gives you the 1:1 height to work from.
@HypnotizeInstantly Жыл бұрын
Thank you for listening to my previous comment! Releasing this on June 30th is a birthday gift from you!
@jean-baptisteclerc1586 Жыл бұрын
any tutorial to add realistic AI people into an existing 3d rendered image?
@AIchemywithXerophayze-jt1gg Жыл бұрын
This could probably be done just through the prompt, but you would likely need inpainting. Render the scene as a whole 3D rendered scene, including 3D rendered people. Then go into inpainting, mask out the people, use a checkpoint/model geared toward realistic output, and change the prompt to ask for a photographic or realistic look.
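For anyone who prefers to script that workflow instead of using the Automatic1111 UI, here is a minimal sketch using the diffusers StableDiffusionInpaintPipeline. The checkpoint id and file names are placeholders, not something from the video:

```python
# Minimal inpainting sketch with the diffusers library.
# Assumptions: a CUDA GPU, an inpainting checkpoint such as
# "runwayml/stable-diffusion-inpainting", and two local files
# ("render.png" and "people_mask.png") used here as placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The 3D render, plus a mask where white marks the people to repaint.
init_image = Image.open("render.png").convert("RGB").resize((512, 512))
mask_image = Image.open("people_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="photograph of people shopping at a market, natural light, 35mm photo",
    negative_prompt="3d render, cgi, cartoon, illustration",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=40,
).images[0]
result.save("render_with_photoreal_people.png")
```

The same idea applies in the web UI: only the masked people get regenerated with the photographic prompt, while the rest of the render is kept as context for blending.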
@1986xuan Жыл бұрын
@@AIchemywithXerophayze-jt1gg Do you think you could make a tutorial on that? Dealing with realistic people in an existing scene/setup? That would be an amazing project for photo content creation for local businesses like existing coffee shops, gyms, restaurants...
@HypnotizeInstantly Жыл бұрын
What is the difference between using an inpainting model vs a regular model? Because I'm getting no difference between them. Also, whenever I do inpainting, I enable ControlNet -> inpaint -> inpaint_global_harmonious as the preprocessor. Doing so blends the whole masked generation with the rest of the picture more harmoniously. You can also crank up the denoise strength and it won't produce any weird distortions in the image.
@AIchemywithXerophayze-jt1gg Жыл бұрын
Not using an inpainting model can cause some weird effects. The most noticeable is that it doesn't blend the edges correctly; it often renders a different image on top of the original while keeping the original visible behind the new one.
@Shabazza84 Жыл бұрын
@@AIchemywithXerophayze-jt1gg You can partially mitigate that by using inpaint "whole picture" instead of "only masked". The blending will be better because it uses more context, but of course this can lead to issues when trying to inpaint detailed/small sections.
@MrSongib Жыл бұрын
13:55 If you want to introduce a new concept into the scene, always go for high sampling steps, or just tick the option "With img2img, do exactly the amount of steps the slider specifies (normally you'd do less with less denoising)" so it runs the exact number of steps. In img2img the effective number of sampling steps is "sampling steps * denoising strength", so 39 * 0.95 ≈ 37 sampling steps in this case. kzbin.info/www/bejne/jGLEknVtisyDba8 Also consider masking the shadow area as well, and use "Fill" if "original" is being a bit stubborn (26:00). Padding pixels are inside the h*w, not outside of it, so it actually uses less resolution as a buffer; it's similar to putting a dot outside the main mask area so the model reads those other areas too.
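To make that arithmetic concrete, here is a tiny sketch of the formula (assuming simple truncation, which is close to what A1111 does when the "exact steps" box is left unticked):

```python
# Sketch of the img2img step arithmetic described above.
def effective_img2img_steps(sampling_steps: int, denoising_strength: float) -> int:
    """Steps actually run when 'do exactly the amount of steps' is unticked."""
    return int(sampling_steps * denoising_strength)

print(effective_img2img_steps(39, 0.95))  # 37 -- nearly all of the requested steps
print(effective_img2img_steps(39, 0.40))  # 15 -- low denoise means far fewer real steps
```

This is why introducing a brand-new concept benefits from either a high step count or the "exact steps" option: at low denoising strength only a fraction of the requested steps actually run.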
@AIchemywithXerophayze-jt1gg Жыл бұрын
I didn't know the formula; I just knew that it would take more steps the higher the denoise strength. I'll try the exact number of steps. I use the technique of putting a mask dot somewhere else in the image when I want to remove something like a watermark or other object. Thanks for the tips.
@Yeeeeeehaw Жыл бұрын
Can you kindly explain the difference between using fill and latent noise?
@ricardoborgesba Жыл бұрын
I would like to place an exact PNG inside the scene, or copy it as closely as possible while blending it in. Is that possible somehow?
@AIchemywithXerophayze-jt1gg Жыл бұрын
In a way, yes, I think you could do that: use Photoshop to copy and paste the image into the scene, then use inpainting to blend the edges better.
@Rasukix Жыл бұрын
I feel like it would be more efficient to swap into Photopea, draw a stick man, and inpaint.
@AIchemywithXerophayze-jt1gg Жыл бұрын
It very well may be, especially if the area you're trying to put someone into doesn't have a lot of variance in its pixels, like a solid color.
@Aristocle Жыл бұрын
I wanted to use Latent Couple, but in this latest version of Automatic1111 it doesn't seem to work (using inpainting-based SD 1.5 models). Is it possible to add entire settings with inpainting, such as fully furnished rooms or natural landscape scenery?
@AIchemywithXerophayze-jt1gg Жыл бұрын
With inpainting, yes, you can add as much detail as you want.
@lilillllii246 8 ай бұрын
Thanks. A bit of a different question: is there a way to naturally composite the character files I want into an existing background image file, rather than starting from a text prompt?
@AIchemywithXerophayze-jt1gg 8 ай бұрын
If I understand correctly, you want to use an existing image as the background and add characters to it. Yes this is absolutely possible. Join my discord and we can help. discord.com/invite/EQMyYbtw
@cce7087 Жыл бұрын
If I'm interested in creating a scene where I add multiple additional characters but want them to be specific (i.e. from a seed), is this possible, and how? I want to create a number of images with multiple characters across various scene changes. I would prefer not to learn Latent Couple, and is it called ControlNet? You mentioned it; I can't recall off the top of my head. Hoping there's an easier way to do what I want!
@AIchemywithXerophayze-jt1gg Жыл бұрын
Working with multiple characters that you want to be consistent between images is very difficult even when using ControlNet. It is possible, but honestly you should just join our Discord and ask some of the users there, because I think some of them have found ways of combining Roop and ControlNet. discord.gg/HWksPdT6
@dthSinthoras Жыл бұрын
Do you also have a workflow for getting something unnatural on people? Like blue skin, without making half the image blue. Or giving someone cat eyes without transforming them into something with cat ears and so on. Or striped hair with two specific colors, etc. To my mind, these kinds of things without color bleeding are the hardest to achieve if you have something fairly specific in mind.
@AIchemywithXerophayze-jt1gg Жыл бұрын
So for tiny details like eyes, definitely use inpainting; for whole-image changes without bleeding you would want to use regional prompting. I was going to do a video on that a while ago and completely forgot. I think I'll do that along with the micro-detailing.
@dthSinthoras Жыл бұрын
@@AIchemywithXerophayze-jt1gg Looking forward to see that then :)
@AIchemywithXerophayze-jt1gg Жыл бұрын
Using the BREAK command might actually help with this. I've been messing around with it and have it built into my prompt generator now. I'll try what you talked about.
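For anyone wondering what that looks like in practice, here is a rough illustrative example of an A1111-style prompt using BREAK (only the BREAK keyword itself is real syntax; the subject and colors are made up):

blue skin, full body portrait of a woman BREAK pink and green two-tone striped hair BREAK cat eyes, slit pupils

Each BREAK pads the current chunk to the 75-token boundary and starts a new one, which often reduces color and attribute bleeding between the concepts on either side of it.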
@AIMusicExperiment Жыл бұрын
As usual, your tutorial is helpful! Thanks for all you do. My guess about the problem you were having is the nature of how the AI interprets the prompt. Rather than hearing you say that there is a local artist in the picture, it thinks that the image was drawn by a local artist, as if you had written "Masterpiece by Rembrandt." That is my thought.
@AIchemywithXerophayze-jt1gg Жыл бұрын
It's possible. I think in this instance it was a combination of the description of the artist in the prompt and what the fruit stand looked like. The fruit stand just looked too much like the artist description, so the AI didn't see a need to change much.
@bobtahar Жыл бұрын
May I know what the extension for zooming the inpaint image is called?
@AIchemywithXerophayze-jt1gg Жыл бұрын
It's built into A1111; it's called Canvas zoom and pan. Check the top-left corner of the inpaint window; you should see a little "i" that, if you hover over it, gives you some info.
@377omkar 7 ай бұрын
Can you share the prompt that you gave ChatGPT for generating prompts?
@AIchemywithXerophayze-jt1gg 7 ай бұрын
I've changed it to an online prompt generator. It's a subscription based service now, but I offer a free version here: shop.xerophayze.com/xerogenlite
@AutumnRed 5 ай бұрын
I've been trying this, but I never get anything even slightly similar to what I want. Stable Diffusion is so frustrating.
@AIchemywithXerophayze-jt1gg 5 ай бұрын
I agree, it can be.
@Yeeeeeehaw Жыл бұрын
Great video
@AIchemywithXerophayze-jt1gg Жыл бұрын
Thanks!
@tonisins Жыл бұрын
hey, would you mind sharing the chatGPT prompt for creating SD prompts?
@AIchemywithXerophayze-jt1gg Жыл бұрын
It's something I sell in my store: https://shop.xerophayze.com. Join our Discord. I'm working on an extension for Automatic1111 that will interface with GPT better than others; it will be free, and it works great with my prompt generator. discord.gg/mTEGXWMw
@BabylonBaller Жыл бұрын
Great tutorials as always. It's just very difficult to hear you on a mobile phone; it seems you're recording with a built-in laptop mic, so it's super quiet.
@AIchemywithXerophayze-jt1gg Жыл бұрын
Thanks for pointing this out. I have a desktop computer and a nice mic, but I also have a webcam, and I think my recording audio somehow got switched to the webcam instead of my mic.
@BabylonBaller Жыл бұрын
@@AIchemywithXerophayze-jt1gg Ah yes, indeed it sounds like you're far away from the mic, which has happened to me when the C920 mic switches back to default.
@davidchi501 Жыл бұрын
@@AIchemywithXerophayze-jt1gg I'd recommend using Adobe Enhance to make your mic audio sound clearer.
@DigitalAscensionArt Жыл бұрын
Thank you for always uploading great content for advanced techniques. How do I get into your discord?
@AIchemywithXerophayze-jt1gg Жыл бұрын
discord.gg/TFUzHtzA
@baraka99 Жыл бұрын
Wish you streamlined your videos to like 22m or so, reaching the same final result and explaining the process.
@AIchemywithXerophayze-jt1gg Жыл бұрын
Working on it.
@xehanort3623 Жыл бұрын
I can't add people; it just spits out the same image.
@AIchemywithXerophayze-jt1gg Жыл бұрын
It's not the easiest process to use; it takes a lot of practice to get it to work correctly. A lot depends on the pixels you're masking out: if they're extremely uniform, like a solid color, that's the most difficult thing to change. You may need to switch to "fill" instead of "original" to get it to put something there, anything, then switch back to "original" and try again.
@eugeniakenne2865 Жыл бұрын
"PromoSM" 😱
@AIchemywithXerophayze-jt1gg Жыл бұрын
?
@Officemeds Жыл бұрын
The pain ohvof the ain? Helpe no ono! Why gof go why!!!! Blond is everywhere someone call pp1!!! Shlock is the sound his head make stabing
@AIchemywithXerophayze-jt1gg Жыл бұрын
?
@Officemeds Жыл бұрын
@@AIchemywithXerophayze-jt1gg Ambian and a Fuzzy Navel. My bad bro. I don't even.
@octopuss3893 Жыл бұрын
bla bla bla bla...................
@AIchemywithXerophayze-jt1gg Жыл бұрын
Yup
@relaxation_ambience Жыл бұрын
And again: your whole tutorial would easily fit in 10 minutes. As it is, we have to watch all your imperfections and unsuccessful experiments and wait for pictures to render. Of course it's possible to skip ahead manually, but that's annoying. Everything you did would be totally acceptable on a LIVE stream, but not in the format you're providing now. Maybe this is a reason why your subscriber count grows so slowly.
@AIchemywithXerophayze-jt1gg Жыл бұрын
That's interesting that you think that. The only things I cut out of my videos are when I'm clearing my throat or have to cough, or if I run into something seriously wrong that's going to take me a while to fix. In this video, the only things that got cut out were the coughs and throat clearing. I understand your concern, and yeah, a lot of my tutorials are going to be based around simple concepts, but a lot of people out there are still trying to figure out the simple concepts, and I'm more than happy to provide that.
@relaxation_ambience Жыл бұрын
@@AIchemywithXerophayze-jt1gg Thank you for the answer. My only concern is what I noticed: that it's possible to shorten things a lot. I like a slow pace (for example, the YouTuber Olivio Sarikas), but I found myself skipping 5-10 seconds forward a lot and not missing any information.
@Dash3105 Жыл бұрын
I seriously disagree with this take. I think showing the problems he runs into and his thought process in fixing them adds more value to the tutorial and teaches more. If you want to skip through the troubleshooting, that's fine, but I think showcasing it is better.
@relaxation_ambience Жыл бұрын
@@Dash3105 What you're describing usually happens in a live stream, where you experiment and search for ways to fix problems. In a tutorial you get a polished, finished product that only mentions possible problems and how to overcome them. His tutorial feels like a raw live-stream recording, so he could put it in the "live streams" category, and then it would be clear that we're going to see a lot of experimenting and searching for solutions.
@Yeeeeeehaw Жыл бұрын
@@Dash3105 I second this; I learned a lot from those mistakes in the video.