Hi Sebastian, is it possible to add a custom background image and blend it with a subject to create a realistic photo? I don't want to create an AI-generated background, I need to use my own background.
@ThinkDiffusion3 күн бұрын
Yes, of course - the background image doesn't have to be AI generated at all. You might need to play around with a few different models to find a realistic one that will generate similar inpainted images to the background. You can also join our Discord and ask for any help here: discord.gg/hAm87ApunD
@BrookeKemp-q7p6 күн бұрын
Appreciate the detailed breakdown! I need some advice: My OKX wallet holds some USDT, and I have the seed phrase. (alarm fetch churn bridge exercise tape speak race clerk couch crater letter). How can I transfer them to Binance?
@RapidReachAI13 күн бұрын
The way you integrated the QR code looks super cool, might try this out myself.
@ThinkDiffusion7 күн бұрын
Yes, it is super cool! You can join our community on Discord & send what you create here: discord.gg/hAm87ApunD
@JeffreyHarrington14 күн бұрын
Thank you!
@ThinkDiffusion14 күн бұрын
Of course, glad you found it helpful! Happy generating:)
@Jacksmith-qu9ii18 күн бұрын
Nice work! Do you think I could run this workflow on a Linux setup with 12 GB of VRAM on an NVIDIA 3090 Ti?
@ThinkDiffusion18 күн бұрын
Hi there! Yes, it may be enough, although it will definitely be slow. You can run it without any problems on ThinkDiffusion on this machine: www.thinkdiffusion.com/select-machine/featured/comfy/beta/turbo Happy generating!
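For anyone in a similar low-VRAM situation, here is a minimal sketch of the usual memory-saving switches, using the diffusers library as an illustration rather than the exact ComfyUI workflow from the video (the checkpoint ID and prompt are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example SD 1.5 checkpoint
    torch_dtype=torch.float16,          # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()         # compute attention in chunks to save memory
pipe.enable_model_cpu_offload()         # keep idle submodules in system RAM

image = pipe("a cowboy ballerina, photorealistic").images[0]
image.save("output.png")
```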
@DisgustingJustinAD22 күн бұрын
Is ThinkDiffusion a locally installed version of Stable Diffusion? Or can I only use it through internet access? I get a lot of heavy storms in my area & lose access to internet during those times.
@arsletirott28 күн бұрын
Fantastic, thanks for the guide. I just wonder why you chose "default negative" aside from "digital painting"; I am new to this and I've never used "default negative", what does it do? EDIT: it worked well with the word "cowboy", but when I type "green monster boy" the result becomes a mess and not at all like the cowboy ballerina.
@ThinkDiffusion20 күн бұрын
Hi there! Default negative is a preset of negative prompts that works well for most images in SD 1.5. That is why Sebastian chose it :)
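For reference, a negative prompt is just a second text input that the sampler steers away from; presets like "default negative" bundle common unwanted terms. A minimal sketch with the diffusers library (the prompt strings and checkpoint ID below are illustrative, not the exact preset from the video):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt lists things to avoid; a "default negative" style preset
# is simply a bundle of common terms like these.
image = pipe(
    prompt="a cowboy ballerina, digital painting, highly detailed",
    negative_prompt="blurry, low quality, deformed hands, extra fingers, watermark, text",
    num_inference_steps=25,
).images[0]
image.save("cowboy_ballerina.png")
```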
@Lobster_With_A_GunАй бұрын
I fucking hate ai
@Cu-gp4fyАй бұрын
Can I get a koooohyaaa
@ThinkDiffusionАй бұрын
Love it! 🙌
@maziar1382Ай бұрын
hey more dad jokes, please
@ThinkDiffusionАй бұрын
🫡🫡
@maziar1382Ай бұрын
@ThinkDiffusion BTW thanks for those free stable diffusion styles
@darbycarpenter3032Ай бұрын
I have the Canny button but there is no model. I went to the model folder and nothing is there. All that I have is the openpose. Could you post a link to the Canny model?
@ThinkDiffusionАй бұрын
Hi there! Of course, here you go: huggingface.co/lllyasviel/sd-controlnet-canny Happy generating!
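For anyone wiring this up outside the UI, a rough sketch of using that Canny ControlNet with the diffusers library (the ControlNet checkpoint ID matches the link above; the base checkpoint, file names and prompt are placeholders):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# The ControlNet checkpoint linked above.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Turn the reference photo into a Canny edge map (the conditioning image).
photo = np.array(Image.open("pose_reference.png").convert("RGB"))
edges = cv2.Canny(photo, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 1 channel -> 3 channels

result = pipe("a cowboy ballerina, photo", image=canny_image).images[0]
result.save("controlnet_canny_result.png")
```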
@paul.tergeist.41Ай бұрын
A huge thank you for helping new creators on A1111, Sebastian, because there are really not many complete tutorials. Personally I started in May, almost on my own, because I really had to look for information everywhere. A1111 has excellent potential, despite the bugs with hands etc., while waiting for SD3. Congratulations and good luck with your channel... 👍🙏🙏🙏
@ThinkDiffusionАй бұрын
Awesome to hear that, we're glad it was helpful and good luck with generating:) Any specific tutorials you would want to see? 👀
@paul.tergeist.41Ай бұрын
@ThinkDiffusion A video, for example, on where to find styles and renderings, and a simple, magical, detailed method for adding styles (why edit the .CSV files and how)... in short, something simple and useful... 🙏🍀🤞
@Cu-gp4fyАй бұрын
Pretty cool, seems to be a lot of buttons to play around with and customize.
@ThinkDiffusionАй бұрын
Yes, it's super cool!!
@chrismall-x6gАй бұрын
fantastic! it was so interesting to listen to. i wanna play it
@ThinkDiffusionАй бұрын
Thank you! Yess, we can't wait to try it out as well 🙌
@Cu-gp4fyАй бұрын
The game art looks great, love the hidden image look, pretty trippy.
@ThinkDiffusionАй бұрын
It really does, so creative! 🧙♂
@stableArtAIАй бұрын
In reference to good text: enable T5 for text, although SD does know many words, which is a big part of creating text objects.
@stableArtAIАй бұрын
Our first run with Flux did not yield as good a result as their online demo. We might put some time into it later or early next year as it continues to evolve. It was pretty easy to get it running under OS X, though.
@ThinkDiffusionАй бұрын
Hi there! If you need some help you can always join our Discord here: discord.gg/hAm87ApunD See you in there:)
@Ai_mayyit2 ай бұрын
Can we change the outfit with this workflow?
@ThinkDiffusionАй бұрын
Yes indeed you can change the outfit, you can also change the scenery and the face:)
@gandonius_me2 ай бұрын
Hello, I would like to ask a question. I set openpose_full with control_v11p_sd15_scribble [d4ba51ff] and I get the skeleton from the source image, but when I click generate, the image is created according to the prompt only, without the skeleton.
@MartinBenesCreative2 ай бұрын
Very good video. Straight to the point. Keep going with this channel. It will grow fast! Good luck buddy! 🎉
@ThinkDiffusion2 ай бұрын
Thank you Martin, we appreciate this a lot especially coming from you! Have a great day:)
@4thObserver2 ай бұрын
I usually do manual inpainting for fingers instead; much more accurate that way, and it gives us humans something left to do (lol). I'll use ControlNet when I want a real photo as the pose reference or a specific architecture as the background, because you can layer these together.
@ThinkDiffusion2 ай бұрын
Good point, yes it can be more accurate and if you enjoy the process of doing it that's all that matters!
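For anyone curious what that manual inpainting step looks like in code, here is a rough sketch with the diffusers inpainting pipeline (the image and mask paths are placeholders; in A1111 the equivalent is the img2img Inpaint tab with a hand-painted mask):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB")
# White pixels in the mask mark the region to regenerate (e.g. the hand).
mask_image = Image.open("hand_mask.png").convert("RGB")

result = pipe(
    prompt="a detailed, natural-looking human hand",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("fixed_hand.png")
```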
@Truthseeker_126382 ай бұрын
Hey, can you please post the workflow somewhere easy to access? I can't find it.
@ThinkDiffusion2 ай бұрын
Hey there! Yes, you can download the workflow here (check the comment section): www.linkedin.com/feed/update/urn:li:activity:7240010660099616768
@Cu-gp4fy2 ай бұрын
Super creative application of this workflow, thanks for sharing guys! Could use some extra cloud help, my laptop is garbage.
@ThinkDiffusion2 ай бұрын
Thank you, glad you liked it! You can try out ThinkDiffusion today (free trial) here: bit.ly/3XfyWzt 🌟
@videoaccaunt2 ай бұрын
deep dive....3:25 min
@ThinkDiffusion2 ай бұрын
It was almost a year ago 😅 should we record an updated version that is more in depth? :)
@letsgoletsgoletsgoletsgoletsgo2 ай бұрын
I have a very specific question: I am a photographer for a shoe company and I shoot a lot of white-background e-commerce photos of the products. Ideally I want to input those e-commerce photos into an AI platform and generate a new image with a fantastic background WHILE retaining 100% how my product looks. Is this possible with img2img?
@ThinkDiffusion2 ай бұрын
Totally! Check out this tutorial: learn.thinkdiffusion.com/bria-ai-for-background-removal-and-replacement/ You can also use IC-Light to relight the subject to match the new background: x.com/thinkdiffusion/status/1806347642550600004 Hope this helped:)
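As a rough outline of the first step (cutting the product out and compositing it onto a new background without touching the product pixels), here is a sketch using the open-source rembg library. This is not the Bria AI workflow from the linked tutorial, just an illustration of the idea, and the file names are placeholders:

```python
from PIL import Image
from rembg import remove

# Cut the product out of the white e-commerce shot (returns RGBA with alpha).
product = Image.open("shoe_white_background.jpg")
cutout = remove(product)

# Paste the cutout onto the generated background, using its alpha as the mask,
# so the product pixels themselves are never altered.
background = Image.open("generated_background.png").convert("RGBA")
background.paste(cutout, (0, 0), mask=cutout)
background.convert("RGB").save("composited.jpg")
```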
@letsgoletsgoletsgoletsgoletsgo2 ай бұрын
@ThinkDiffusion Thank you so much. A second question: is there a paid service I can use online?
@erutan1082 ай бұрын
Thank you so much for having this session with Sebastian! More like this please ;)
@ThinkDiffusion2 ай бұрын
Thanks for the positive feedback! More on the way, very soon 👀
@erutan1082 ай бұрын
@@ThinkDiffusion thank you! Looking forward to it 👀 and can’t wait to try it on your platform 😁
@Cu-gp4fy2 ай бұрын
Seems to have more control than Midjourney, nice!
@ThinkDiffusion2 ай бұрын
Yes, with Stable Diffusion you have way more control than with any other AI image generator:)
@Vanced2Dua2 ай бұрын
Awesome... Keep the A1111 tutorials coming, I really like them.
@ThinkDiffusion2 ай бұрын
Thank you for the positive feedback, more tutorials coming every week!
@jmcasler15123 ай бұрын
Sweet!
@ThinkDiffusion2 ай бұрын
😄😄
@whootoo11173 ай бұрын
Only black man you have must be ape. What a suppression to love of black men you have. It is reversing it so nobody know your obsession? What is this hate against black muscles, masculinity & power you have? Good luck with it. It is like Hulk of Marvel studios becoming a green figure muscular alien instead of a black man in 20th century. Insecurity of some male races?
@camchanimation3 ай бұрын
Where can I find the styles you're using?
@ThinkDiffusion2 ай бұрын
Hi there! They are available for Sebastian Kamph's Patreons:)
@Boxels3 ай бұрын
I'm hooked! How do I pay you to help me build a model to help me train the AI to make my style so I can animate my stories?
@ThinkDiffusion3 ай бұрын
Hi there! You can contact us here: www.thinkdiffusion.com/studio Have a great day!
@Boxels3 ай бұрын
So helpful, keep it coming. How do we get a tutorial on animating for storytelling like my channel? kzbin.infoiZ_UnzQTOVg
@ThinkDiffusion3 ай бұрын
Awesome to hear you found it helpful! Sadly I am not able to open the link you attached...
@Boxels3 ай бұрын
Exactly what I've been looking for. Knowing it's possible but getting lost in tutorials, this is a great help so far!
@pescemagicquo29003 ай бұрын
Not about the video, but how can I delete my account?
@ThinkDiffusion3 ай бұрын
Hi there! Please send an email to [email protected] and we'll take care of it!
@HacknSlashPro3 ай бұрын
So Sebastian Kamph's channel is abandoned now?
@ThinkDiffusion3 ай бұрын
Not at all!
@jonahoskow74763 ай бұрын
wait.. what does an embedding do? Thanks!
@ThinkDiffusion3 ай бұрын
Hey! In Stable Diffusion, an embedding (also called a textual inversion) is a small file that adds a new "word" to the model's vocabulary: a learned vector in the text encoder's token space that captures a particular concept, style or subject. You drop the file into your embeddings folder and use its trigger word in a prompt (positive or negative), and it steers the generation toward that concept without modifying the base model. Hope this helps!
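In code terms, using an embedding is just a matter of loading it and mentioning its trigger token in the prompt. A minimal sketch with the diffusers library (the file name and token below are made up for illustration):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual-inversion embedding and bind it to a trigger token.
# "my_style.pt" and "<my-style>" are placeholders for your own embedding.
pipe.load_textual_inversion("./embeddings/my_style.pt", token="<my-style>")

image = pipe("a cowboy ballerina in <my-style> style").images[0]
image.save("embedding_example.png")
```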
@MsMinnyMod4 ай бұрын
Thanks for the tutorial. I will try to make an anime movie with my kids following your video. We'll see how it goes.
@ThinkDiffusion3 ай бұрын
That's awesome, keep us updated!
@cfx34 ай бұрын
Good video, I feel like the audio track of your voice could have been a bit louder compared to the background music.
@ThinkDiffusion2 ай бұрын
Thank you! Yes, we'll be working on fixing that for the future videos:)
@ShengzhuPeng4 ай бұрын
Hi! I'm interested in a business collaboration. Could you please share your email? Thanks!
@Designing-hc5pz4 ай бұрын
When are you going to release a new model, and will it be SD3 or SDXL?
@matheusreisfernandes84634 ай бұрын
When will Pony Diffusion be available?
@deefster744 ай бұрын
Why 1.5 vs. XL? It seems odd to release a new workflow on an old model.
@bgtubber4 ай бұрын
Audio is too low. I could barely hear anything. :(
@ThinkDiffusion4 ай бұрын
So sorry about that! It'll be fixed next time we do a workflow video like this:) Hopefully you're still able to hear enough to try it out!
@bgtubber4 ай бұрын
@@ThinkDiffusion Yes, I had to crank it up and then it's audible. Thanks!
@Pellapoo4 ай бұрын
He named the workflow "Mein Kamph"
@MultiOmega19114 ай бұрын
bruh
@Cu-gp4fy4 ай бұрын
Thanks for the workflow! Are there any other preprocessors you recommend? I noticed canny but not scribble.
@ThinkDiffusion4 ай бұрын
Scribble is great too, especially with the xdog preprocessor!
@Mel-vc7cs5 ай бұрын
Hello, I don't see all the sampling methods you are showing in this video. For example, I just see DPM++ 2M, not DPM++ 2M Karras. Any idea why? Thanks
@Mel-vc7cs5 ай бұрын
I don't have the roop option. Can you explain why?
@rukakun5 ай бұрын
Sorry if this is a stupid question, I'm a complete newbie. But how does ThinkDiffusion differ from using Stable Diffusion? I'm (getting) familiar with Stable Diffusion, and am struggling to find what I can use ThinkDiffusion for other than generating images...