Fantastic, thanks for the guide. I just wonder why you chose "default negative" alongside "digital painting". I'm new to this and I've never used "default negative"; what does it do? EDIT: it worked well with the word "cowboy", but when I type "green monster boy" the result becomes a mess and looks nothing like the cowboy ballerina.
@ThinkDiffusion · 3 days ago
Hi there! "Default negative" is a preset of negative prompts that works well for most images in SD 1.5. That is why Sebastian chose it :)
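In other words, a "default negative" preset is just a saved block of negative-prompt text that is sent alongside every positive prompt. A minimal sketch of the idea, assuming illustrative preset terms and a hypothetical `build_prompts` helper (not the exact preset from the video):

```python
# Illustrative SD 1.5 negative-prompt preset: terms commonly used to
# suppress low-quality artifacts. Not the exact preset from the video.
DEFAULT_NEGATIVE = (
    "lowres, bad anatomy, bad hands, extra digits, "
    "worst quality, low quality, jpeg artifacts, watermark"
)

def build_prompts(subject: str, style: str = "digital painting"):
    """Combine a subject with a style tag; reuse the shared negative preset."""
    positive = f"{subject}, {style}"
    return positive, DEFAULT_NEGATIVE

pos, neg = build_prompts("cowboy ballerina")
print(pos)  # cowboy ballerina, digital painting
# In a diffusers pipeline these would be passed as:
#   pipe(prompt=pos, negative_prompt=neg)
```

The point is that the preset is subject-independent: you keep the same negative block whether the positive prompt is "cowboy" or "green monster boy", and only the positive side changes.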
@gandonius_me · 1 month ago
Hello, I'd like to ask a question. Why is it that when I set openpose_full and control_v11p_sd15_scribble [d4ba51ff], I get the skeleton from the source image, but when I click Generate the image is created from the prompt alone, without the skeleton?
@Cu-gp4fy · 2 months ago
Seems to have more control than Midjourney, nice!
@ThinkDiffusion · 2 months ago
Yes, with Stable Diffusion you have far more control than with any other AI image generator :)
@Vanced2Dua · 2 months ago
Great... keep the A1111 tutorials coming, I really like them.
@ThinkDiffusion · 2 months ago
Thank you for the positive feedback; more tutorials are coming every week!
@darbycarpenter3032 · 26 days ago
I have the Canny button, but there is no model. I went to the model folder and nothing is there; all I have is openpose. Could you post a link to the Canny model?
@ThinkDiffusion · 24 days ago
Hi there! Of course, here you go: huggingface.co/lllyasviel/sd-controlnet-canny. Happy generating!
@4thObserver · 1 month ago
I usually do manual inpainting for fingers instead. It's much more accurate that way, and it gives us humans something left to do (lol). I'll use ControlNet when I want a real photo as the pose reference or a specific piece of architecture as the background, since you can layer these together.
@ThinkDiffusion · 1 month ago
Good point! Yes, it can be more accurate, and if you enjoy the process of doing it, that's all that matters.