Style2Image in ControlNet (T2I)

  72,214 views

Sebastian Kamph

A day ago

Comments: 183
@sebastiankamph A year ago
The FREE Prompt styles I use here: www.patreon.com/posts/sebs-hilis-79649068
@chrisdixonstudios A year ago
Dude, you are surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave 🌊. Thanks for keeping us up to speed 🚤
@sebastiankamph A year ago
Happy to be along for the ride! 🏄‍♀
@theaiplaybook A year ago
I totally agree with you. With his videos, it's much easier to stay updated.
@marekpietrak8279 A year ago
Sounds like a prompt: "surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave"
@chrisdixonstudios A year ago
@@marekpietrak8279 Yes, it is! Here is Sebastian, having fun navigating for us all:
"Dude, you are surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave 🌊. Thanks for keeping us up to speed 🚤" Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 683630813, Size: 768x768, Model hash: ad2a33c361, Model: v2-1_768-ema-pruned
Orrr, a little more like young Bob Ross, with your quote: "surfing the wave of Stable Diffusion tubularly in an endless summer on a perfect wave" Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 10228519, Size: 768x768, Model hash: ad2a33c361, Model: v2-1_768-ema-pruned
Imagine the coffee house talk from a group of AI enthusiasts and their vernacular after a few cups of espresso 🤩🍮🍻
@sheriff2077 A year ago
The dad jokes are evolving faster than the AI itself
@sebastiankamph A year ago
Gotta fall back on something when AI takes over.
@chrisdixonstudios A year ago
@@sebastiankamph Soo what did da nuclear scientists say when dey finally achieved a safe fusion reaction? ...we got da Stable Diffusion!!! You may use that one anytime 😉
@aiv0t A year ago
@@sebastiankamph dad jokes by ChatGPT when?
@aiv0t A year ago
Already started, cause I got curious: Why did ChatGPT take up gardening? Because it wanted to become a sage AI!
@sebastiankamph A year ago
😂
@steveschreiner7444 A year ago
For those who didn't know, like myself: the Multi ControlNet tabs are set up at Settings -> ControlNet -> set the Multi ControlNet slider to two.
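The same switch can also be flipped by editing the webui's config.json directly. This is only a sketch: `control_net_max_models_num` is the key the ControlNet extension has used for that slider, but verify the key name against your installed version, and note the snippet edits a stand-in file rather than a real install.

```python
import json
import tempfile
from pathlib import Path

# Stand-in for A1111's config.json (the real file lives in the webui root).
config_path = Path(tempfile.gettempdir()) / "demo_webui_config.json"
config_path.write_text(json.dumps({"control_net_max_models_num": 1}))

config = json.loads(config_path.read_text())
# Raise the Multi ControlNet slider to two units; restart the UI afterwards.
config["control_net_max_models_num"] = 2
config_path.write_text(json.dumps(config, indent=4))
```

Changing it through Settings -> ControlNet as described above is the safer route; edit the file only while the webui is stopped.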
@Herbstleid A year ago
Hi, where do I get "clip_vision"?
@k-1072 A year ago
Searching for it as well
@middleman-theory A year ago
I just get random images with no likeness at all. I'm using A1111 and the latest ControlNet. Also, I don't see regular words like "depth" and "style"; I get three different versions of depth (depth_leres, depth_midas, depth_zoe) under preprocessor, and for Model I only see (coadapter-style-sd15v1). I tried them all, but nothing works. I'm not putting anything in the prompt or negative, just trying to get it to work with the existing image.
@duplicatemate7843 A year ago
Any fix bro? Same for me :(
@anatoliysavitskiy6371 A year ago
I'm afraid I also fail to see the resemblance in the final result. The only thing that is preserved from the original picture is the posture.
@rijujakhar8771 4 months ago
Same
@lujoviste A year ago
Does anyone else get some random image instead of changing styles?
@inv_der2350 A year ago
For me clip_vision does nothing at all. Seems to be happening to a lot of people. Could you share a solution?
@hentaioniv1167 A year ago
Try a non-Euler(a) sampling method, DDIM for example (works for me).
@lizng5509 A year ago
Hi, I have the same problem. Have you figured it out? Thanks.
@SoundGuy A year ago
I don't see clip_vision or a few others in the preprocessor. What do I do to get them? Also, I didn't see any yaml files being downloaded; is that related?
@MonologueMusicals A year ago
I don't know what I'm doing wrong. I get identical images whether the clip_vision style is on or off. It has zero effect.
@42na4ever A year ago
Check Seed, it should be -1
@dexter0010 A year ago
How did you put the Add Lora and Hypernetwork dropdowns up top? Edit: also, where do I find clip_vision? I haven't found it yet; I have everything downloaded.
@JohnVanderbeck A year ago
Settings -> User Interface -> Quicksettings list
@dexter0010 A year ago
@@JohnVanderbeck Thanks! What do I add there?
@JohnVanderbeck A year ago
@@dexter0010 Just add the name of any settings control you want in the top quick bar. The easiest way to find the name of a control is to change the setting, then apply, and the name of the control will be listed at the top where it shows the changes.
@alenwesker9552 A year ago
So powerful. I was searching for the t2i color adapter, but the other models are way more powerful and useful than that.
@ThePhillShow A year ago
This isn't working at all for me. No idea what's going wrong, but I'm pretty much just getting noise. Enable is checked on both images. Has this process changed?
@gigaganon A year ago
I did what you did but it gives me absolutely no result. I just get a jumbled mess of textures; not even a shape is left. I don't know what I'm doing wrong.
@KhangTrần-m3x A year ago
I did exactly what you instructed, but when I hit render it doesn't work; it gives me a random image. Please help me
@HikingWithCooper A year ago
Another day, another leap forward. Thank you for bringing us along!!
@sebastiankamph A year ago
Happy to have you along for the ride! 🌟
@lawrence9239 A year ago
I know, right? What a time to be alive!
@miguelarce6489 A year ago
Hey, great video! Can't get ControlNet to work on "text2img"; it generates random images. Any help?
@aggroaperture A year ago
Same issue; any solution?
@cerspence A year ago
I just come for the jokes. I don't even know what Stable Diffusion is
@Mimeniia A year ago
What do you call a horse stable that ensures an even spread of manure odour? Stable Diffusion
@KkommA88 A year ago
Once again a useful video! Thanks Seb!
@sebastiankamph A year ago
My pleasure!
@carlosramon6102 A year ago
Has anyone got the fp16 safetensors version from webui working? The one from Tencent shown in the video works, but the webui version seems to have zero influence on the generated image.
@SomeAB A year ago
The original Hugging Face page for this now has a 'co-adapter'. Please explain or do a video on that.
@guycohen1958 A year ago
Can you please update this with the latest ControlNet 1.1? The naming of the adapters is different now. Thank you
@tobiasroth8169 A year ago
Guys, I finally discovered the problem a lot of people in this comment section had (including myself): you need to put all ControlNet models in this path, "stable-diffusion-webui\extensions\sd-webui-controlnet\models", and NOT in this path: "stable-diffusion-webui\models\ControlNet" :)
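That fix can be scripted. A minimal sketch follows, with a temp directory standing in for a real install and a T2I adapter filename as the example; it moves any model files from the folder the comment says is wrong into the one the extension scans.

```python
import shutil
import tempfile
from pathlib import Path

# Temp directory stands in for a real stable-diffusion-webui install.
root = Path(tempfile.mkdtemp())
wrong = root / "models" / "ControlNet"                          # models do NOT belong here
right = root / "extensions" / "sd-webui-controlnet" / "models"  # the extension scans here
wrong.mkdir(parents=True)
(wrong / "t2iadapter_style_sd14v1.pth").touch()  # stand-in model file

right.mkdir(parents=True, exist_ok=True)
for model in wrong.iterdir():
    if model.suffix in {".pth", ".safetensors", ".yaml"}:
        shutil.move(str(model), str(right / model.name))
```

To use it on a real setup, point `root` at your stable-diffusion-webui folder (and back up first).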
@LeeeroyDex A year ago
Hello sir, at the top of the UI: "SD VAE", "Add Lora to prompt", "Add Hypernetwork to prompt". What are these 3 things?
@deema7345 A year ago
Yeah, that's a pretty fun feature to play with
@therookiesplaybook A year ago
Where can I find clip_vision? I have the latest ControlNet 1.1 and it's not in there.
@JanKadlec A year ago
Same.
@zizyip6203 A year ago
T2IA
@sarpsomer A year ago
Please don't do time (frame) skips while editing your video. Because there are lots of tweaks, sliders and dropdown menus, it is really hard for us to follow. I had to stop, go 10 secs back, stop, etc. to get the workflow. I mean, the video is 6:32, but it took me 2x the time because of the skips. Don't get me wrong; I'm learning a lot from you as well as from this video. Thanks for everything.
@sebastiankamph A year ago
Thanks for the tip! The reason I cut is I don't want people to get bored 😊
@Ur3rdiMcFly A year ago
@@sebastiankamph Gotta get some more energy in your voice, man!
@ahsookee A year ago
@@Ur3rdiMcFly I disagree; there's already enough YouTube content with too much energy. I just recently saw someone under a different video of his compliment the relaxed presentation style, and I agree: it's a lot more soothing watching something like this for technical content.
@TheAiConqueror A year ago
@@sebastiankamph Never bored 🫡
@marekpietrak8279 A year ago
@@sebastiankamph We've got the arrow keys if need be :D
@matthewma7687 A year ago
Great sharing. I want to follow this video and do it again, but I found that I don't have clip_vision. How can I get it? Is clip_vision integrated in the latest sd-webui-controlnet?
@matthewma7687 A year ago
Reinstalled sd-webui-controlnet; it was fixed. Thank you
@riccardobiagi7595 A year ago
@@matthewma7687 Hi! Can I ask how you reinstalled sd-webui-controlnet? I don't want to mess up :D
@jurandfantom A year ago
Small note: canny from T2I works (300 MB can replace 700 MB), and vision as well, but the others look like they don't work at all? (I removed color, as that one doesn't work either.) Do your own tests and then delete the 700 MB files.
@thedevilgames8217 A year ago
Do you know how to fix CUDA_LAUNCH_BLOCKING=1 from hires?
@AgustinCaniglia1992 A year ago
This is simply amazing
@74mihain A year ago
RuntimeError: shape '[1, 64, 1]' is invalid for input of size 0 🤷🤷🤷
@FrancoANioi A year ago
You are the boss, you know that?
@3oxisprimus848 A year ago
I think he does
@sebastiankamph A year ago
No, you're the boss! 😘
@tobinrysenga1894 A year ago
I was getting much worse results until I switched to the deliberate_v2 model that I noticed you were using. What is that model supposed to be? I couldn't find any info on it; I just happened to find it for downloading.
@TMaekler 7 months ago
Nice. Wondering if there is a Style Adapter for SDXL? Couldn't find one anywhere...
@flyashy8397 A year ago
I am trying to follow this tutorial to a T, but all I get are random images that have no resemblance to the two control images. I have a person and a comic style as the two control images, and I get landscapes etc. as the generation result. It is as if SD is ignoring the ControlNet altogether and generating promptless images. Any idea what could be going wrong?
@sebastiankamph A year ago
Honestly, regular ControlNet models are more consistent than T2I. So you can try working with those too.
@flyashy8397 A year ago
@@sebastiankamph Thank you so much! I'll try those out. Cheers!
@edwhite207 A year ago
Great videos and jokes! Where did the aspect ratio buttons come from?
@sebastiankamph A year ago
You can find that extension in the extensions tab. Aspect ratio something something.
@TheAiConqueror A year ago
Seb the man! 💪🤴🏼
@sebastiankamph A year ago
No, you're the man! Thank you my friend 💲💲💲. Maybe soon I can invest in some stuff to have in the background of the videos too. Did you like the tree? 😁
@TheAiConqueror A year ago
@@sebastiankamph Yes, the tree is cool. A bonsai would be cool too; it would go with your quiet videos and round off the mood. 😁🫡
@sebastiankamph A year ago
@@TheAiConqueror I love it!
@androidgamerxc A year ago
I am getting an unknown error on all of the extensions
@sebastiankamph A year ago
I recommend you remove them and reinstall from the extensions tab. And update to the latest auto1111.
@LilCurlyBlonde A year ago
How did you get the ControlNet tabs to be side by side instead of one after another? It would really help a lot in terms of space on the page and keeping everything neat and in focus.
@sebastiankamph A year ago
Update to the latest version (I show how in the video)
@LilCurlyBlonde A year ago
@@sebastiankamph Thank you, it's a godsend that they thought about it.
@РоманСырватка A year ago
I don't know what I'm doing wrong, but I always end up with either b/w or sepia images. Changed the weight and guidance settings on both tabs; it does not help (( Maybe it only works on square photos? I just wanted to change a vertical 720x1280 photo.
@AZTECMAN A year ago
Not sure if this is helpful:
- I was getting black and white images when I used the light direction for the image to imagine
- one solution is to increase denoising strength to 90 or 95%
- you can also put 'grayscale' in the 'negative prompt'
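For anyone driving the webui through its API mode instead of the browser, the two workarounds above translate into request fields roughly like this. The endpoint is A1111's `/sdapi/v1/txt2img` (available when the webui is launched with `--api`), but treat the prompt text and the exact values here as placeholder assumptions to tune.

```python
import json

# Sketch of a txt2img request body applying the fixes above:
# a high denoising strength plus "grayscale" in the negative prompt.
payload = {
    "prompt": "portrait of a person, anime style",  # placeholder prompt
    "negative_prompt": "grayscale, sepia",          # counters b/w output
    "denoising_strength": 0.9,
    "steps": 20,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
}
body = json.dumps(payload)
# POST `body` to http://127.0.0.1:7860/sdapi/v1/txt2img on a webui
# launched with the --api flag.
```

ControlNet units would go in on top of this via the extension's own API fields, which vary by version, so they are left out of the sketch.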
@РоманСырватка A year ago
@@AZTECMAN I didn't use any light sources, or the prompt and the negative. I did everything as in this video: I put in a photo of people and used an anime picture in the second model. As a result, it is not done in the anime style, and the colors also disappear ( The whole point here is to change the style of the original picture without resorting to a prompt.
@Mocorn A year ago
I must admit, I'm having some problems making use of this style transfer. I'm possibly going about this completely wrong, but I'm starting with an image of a person and want to apply a style but retain the likeness of the person. I feel like the likeness gets lost in the process.
@sebastiankamph A year ago
You'll have to play with the guidance start setting. It is, however, very finicky.
@Mocorn A year ago
@@sebastiankamph Yeah, I played around with this some more after my comment and got closer, but I agree, it is quite finicky.
@EvaKaza A year ago
Sorry, where did you get clip_vision from?
@ufukzayim6689 A year ago
clip_vision does not appear on the list. What can I do?
@zizyip6203 A year ago
T2IA clip vision
@TheMaxvin 11 months ago
Super, and what about the IP-Adapter?
@xellostube A year ago
I have 2 problems: 1. I get black and white creations. 2. The pose is similar, but the character is unrecognizable (I'm trying to use this technique to stylize a couple of portraits, but the person in the creation is way too different)
@killabook A year ago
I have exactly the same problems. It does not look like the photo
@K-A_Z_A-K_S_URALA A year ago
It doesn't work!
@_perp 4 months ago
I get this error no matter what I try: "AttributeError: 'dict' object has no attribute 'shape'". Any ideas?
@76abbath A year ago
Thanks a lot for the video! Your channel is very good!
@sebastiankamph A year ago
Thank you kindly! 😘
@Valerija.M A year ago
Where did the preprocessor files come from? They don't exist and they didn't appear
@SoundGuy A year ago
I have the same problem
@macbetabetamac8998 A year ago
Does it work better with any particular SD models?
@sebastiankamph A year ago
I tried many and it worked well with all I tested. Faces worked best for me.
@deema7345 A year ago
I also used to mess around with it in img2img
@lucan42 A year ago
Is there a way to input a directory and let it calculate more frames automatically, just like in the normal img2img?
@Tymon0000 A year ago
Why did ChatGPT decide to join a gym? It wanted to get better at processing weights.
@takeanappan A year ago
Hi, has anyone got this issue: "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"? :((
@takeanappan A year ago
Everything worked fine until I updated the ControlNet extension... now I can't even run the webui...
@marvin6844 A year ago
Is there a way to get it to act more like a filter, rather than a blend of the two images? I'd like to be able to maintain the exact likeness of a portrait while applying the new style onto it. For example, if I uploaded a photo of my face and used a black and white manga drawing as the style reference, it will morph my face. I just want it to look exactly like me but drawn with a manga pen. Is that possible?
@Joe-ce6cc A year ago
Try it this way: simply start with a fresh UI, put your starting picture in the img2img tab, type in some prompts about what you want (oil painting or whatever), play with the settings, and voila. If you want your picture to be in the SAME style as your drawing, you've got to train a model using multiple drawings of yours so the AI can understand your style, then apply that LoRA to your prompts on top of your original photo in img2img and start generating. Same process for doing slideshow videos.
@jon2478 A year ago
@@Joe-ce6cc Do you know what model is best for this?
@42na4ever A year ago
For some reason, it doesn't work for me as well as it does for you, although I repeated all the steps exactly to the point. If you leave everything as you have it, then the pose is taken from the second picture, and not from the first, where depth is turned on; if you reduce the weight of the second picture, then the style disappears. Unclear :(
@sebastiankamph A year ago
I had lots of issues when testing this, so I am not surprised 😅
@herval A year ago
Same here
@paolovolante A year ago
Thank you for your videos. I'm a Mac user, so I'm out of the game because all development is done on Windows or Linux (I suppose). I have a paid Colab subscription, though. Is there a maintained ControlNet/Stable Diffusion implementation I can use remotely, as far as you know?
@aymanwadi5085 A year ago
I am using this 4 months after this tutorial and it is a total failure... is there something updated here or there that makes this not work?
@BadCat667 A year ago
I can't get past that warning: "StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference"
@arothmanmusic A year ago
For some reason my output image doesn't look like the source models. I have a photo of a woman on one side and a painting of a woman on the other, but my output is some seemingly random image that bears no resemblance to either of them... I get animals, landscapes... What am I missing?
@coda514 A year ago
What did the grape say when it got crushed? Nothing, it just let out a little wine. Seriously, the tools at our disposal are unbelievable.
@sebastiankamph A year ago
Hah, that's a good one 😁
@Rscapeextreme447 A year ago
Amazing!
@evelynintrance 11 months ago
Hey, the 1.5 models (canny/depth etc.) are much smaller in this download location than in the other location you referred to in another ControlNet video. What is the difference? Will they work the same if I get them all from here?
@bajirot A year ago
0:33 I got chills. Anyway, great content as usual, thank you.
@sebastiankamph A year ago
Glad you enjoyed it! 😊😘
@cyril1111 A year ago
I've been trying to play with it for two days, but there's a problem on Mac and I can't play with it yet :( Opened a bug report on GitHub and am waiting for a fix. Hopefully soon...
@sebastiankamph A year ago
😥 I feel you
@dadabranding3537 10 months ago
I am having trouble replicating this in ComfyUI. Can you advise? Or anyone?
@didiernaimdefli A year ago
You are the boss
@pixeljauntvr7774 A year ago
Do the images you feed ControlNet need to specifically be PNG files with SD data embedded? Or does any old JPG work?
@WillFalcon A year ago
There is no "clip vision"
@talessin A year ago
What are you using for the aspect ratio buttons for sizes?
@johncressmanci A year ago
I have been waiting for something like this! I just wish it was a little better and more consistent.
@herval A year ago
I'm not sure if I'm doing something very wrong. I followed the same steps, but the image I get out doesn't have anything to do with the input. It's almost like the first model is getting ignored...
@sebastiankamph A year ago
T2I is really finicky tbh, and I'm not sure if it's buggy or not. Sometimes I had to restart everything when it stopped working for me.
@rjhfsv8564 A year ago
Bummer. Not sure why, but it's not downloading the yaml files with the models and I can't see them anywhere. Any thoughts on finding/getting them?
@sebastiankamph A year ago
Start everything up and test a render and you should get them.
@MABtheGAME A year ago
Hey mate, I'm getting random images, not like yours; completely random
@kernsanders3973 A year ago
The examples in your thumbnail are fantastic, but the examples you produce in the video look almost as bad as the ones I'm getting. I would rather have seen how you accomplished the examples in your thumbnail. So far, just like the examples you are producing, it's just producing a mess on my side; it's not really transferring style. It's almost better to just train a LoRA model and use normal img2img with ControlNet than this. Unless I'm missing something and there is a secret setting for the thumbnail examples to actually get something decent.
@deimantassmeledis7567 A year ago
Is it just for people's faces, or can you do it on animals and objects as well?
@sebastiankamph A year ago
You can do it on anything, but I found that faces provided good, consistent results.
@havemoney A year ago
ControlNet 0 + ControlNet 1 doesn't work with AMD :(
@juanom2903 A year ago
Does anyone know where the Restart button is? Thank you all!
@AlphaNature A year ago
Thanks
@larryboles5064 A year ago
I'm trying this out and not having much luck with it. The results tend to be pretty terrible. I wonder if it might be because I'm using the pruned safetensors ControlNet models instead of the full-size ones.
@joseluisdelatorre3440 A year ago
For image generation they're the same; the big models are for training and merging.
@TheAlgomist A year ago
I choose YOU to teach me this. Thank You 🙏
@sebastiankamph A year ago
Haha, you're welcome! 😁
@Silversith A year ago
Wouldn't that work well for consistent characters?
@Oxes A year ago
Can you get a Google Colab version for this style-to-image?
@ShawnFumo A year ago
If you have a simple Colab that copies the original ControlNet models, you should be able to copy-paste the new ones too.
@chariots8x230 A year ago
It's interesting, but it seems to change the details of the character's appearance a bit too much when changing the style. For example, the hair is one color in the original image, but in the output image the hair contains multiple colors. Also, the outfit becomes different, and the background seemed to change as well.
@cesar4729 A year ago
All of that has very easy solutions, tbh.
@sebastiankamph A year ago
You can work around that in multiple ways. One would be to prompt things in, which is probably the quickest if that works for your particular image.
@Tymon0000 A year ago
The T2I style model works for me, but I tried T2I color:
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8
and T2I canny:
RuntimeError: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=1 is not divisible by 8
I guess the workflow for them is completely different?
@herval A year ago
I get this with some models too (e.g. t2i sketch); still can't figure out how to fix it
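A guess at what's behind those two errors: the adapter downsamples the control image, so its height and width must be divisible by the stated factor (and input.size(-2)=1 suggests a degenerate one-pixel-high input, i.e. the wrong thing reached the preprocessor). One workaround is to round image dimensions down to a safe multiple before generating; a tiny sketch:

```python
def round_down(x: int, multiple: int = 64) -> int:
    """Round x down to the nearest multiple (64 also satisfies SD's 8x latent factor)."""
    return max(multiple, (x // multiple) * multiple)

print(round_down(257, 8))  # the 257-high input from the error becomes 256
print(round_down(720))     # the 720 side of a 720x1280 photo becomes 704
```

Cropping or resizing the control image to those rounded dimensions in any editor (or via the webui's resize controls) should have the same effect.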
@ackkipfer A year ago
What is your system? GPU especially..
@sebastiankamph A year ago
RTX 3080
@ackkipfer A year ago
@@sebastiankamph Damn good GPU. My 1060 6GB crawls behind yours
@peterbelanger4094 A year ago
@@ackkipfer I also have a 1060 6GB, and I keep getting 'CUDA out of memory' errors. I can't do this until I upgrade ☹ Can't do any CLIP vision or multi-ControlNet, and it's slow at everything else: 45 sec to 1 min for a 512x512; I average 1 iteration a second at 512x512. Can't go above 1280x720 either; can't do full HD.
@CoconutPete 6 months ago
Now we have co-adapters... I've heard they are better than T2I
@morfolabs A year ago
Nice!!!!!!!!!!!!!!!!!!!!!!!
@oceaco A year ago
Doesn't work for me
@oleksandrshkolnyi2227 A year ago
What about a new video for beginners about setting up from scratch? I mean a full setup, because a lot of new changes have come and there are a lot of videos about them. But if you want to set up from scratch, you need to look through all of them, and you don't know which video is up to date and which isn't anymore. I hope my point is clear. Thanks) I like your videos)
@sebastiankamph A year ago
I feel you! You could probably get to 95% with my Ultimate guide and then my first ControlNet video. With it all changing so quickly, it's hard to make a comprehensive guide.
@oleksandrshkolnyi2227 A year ago
@@sebastiankamph I understand, thanks
@bryan98pa A year ago
First!!
@peterbelanger4094 A year ago
Now you are last. 😅
@thirdshift7976 A year ago
Good videos man, but that headrest is giving Stephen Hawking vibes.
@sebastiankamph A year ago
That's nice. He was a real MVP 🌟
@arothmanmusic A year ago
Oh good, I'm not the only one. I sort of assumed Sebastian was in a wheelchair. Not that it would matter one way or the other...
@K-A_Z_A-K_S_URALA A year ago
Respect! From Russia, brother... but it doesn't work for me (( it's torture
@gloorbit5471 A year ago
All I get is naked women. Even when I use your negative sfw prompt.
@nothappyz A year ago
Bro, can you please turn off caret browsing with F7 in your Brave? It's bugging me 💀
@Mnmnmnmnmmmnmnmnmnmnmnmnmnmnmn A year ago
Why does he raise his eyebrows like that? Is it an indication of an AI-generated joke?
@blackvx A year ago
What if you have fans who don't want to know anything about AI but just come here for the jokes... 😅
@sebastiankamph A year ago
I got you; almost all jokes are in the introduction now. No more hidden jokes inside the videos... or? 😏😘
@BassFuckingBlowRE A year ago
I am doing something wrong. The style is not being applied as it is in your first examples. I can't even get the cartoonish style applied. I am rewatching this video for the 5th time.
@sebastiankamph A year ago
T2I is so finicky. Honestly, just use the regular ControlNet models (depth and canny) and learn them, and you'll get more consistent results.
@BassFuckingBlowRE A year ago
@@sebastiankamph Thank you, my dude! I will