I got a runtime error when trying to use Euler SMEA Dy as the sampling method: the size of tensor a (120) must match the size of tensor b (96) at non-singleton dimension 3. Any clue how to fix this problem?
@SuccessNowBlueprints 8 hours ago
I am super perplexed by SDXL; its benefits seem to be tradeoffs, not upgrades. I basically learned 1.5 from your wisdom - huge thanks and praise, and I'd like to think I've gotten quite good at it, even on a 10-year-old potato. I notice the results of inpainting are weaker; you can't just force more pixel density into an area like in 1.5. There seems to be this very common theme where blur and distortion are just accepted around background elements, with massive loss of detail for, dare I say, a more artistic look instead of something polished and sharp? I guess we are prioritizing foreground elements to go large and saying forget the rest...

Between the refiner, VAE, high-res fix, and low support for Ultimate Upscale in A1111, is it forcing you to jump to ComfyUI? I feel like this is a backwards progression. I have renders on 1.5 that are not even just as good; they are exponentially better. The one thing I do love about SDXL is the natural and very literal interpretation of prompting. Body positions and scenes do seem more dynamic and exciting, which I love, but dare I say I am tempted to start in SDXL and use it as an OpenPose to go back to 1.5 for my quality. You will likely get what you put in even without the major support of LoRAs, so it feels like it has more scope.

I just got a brand new PC with a GeForce RTX 4070 Ti, and I thought I'd take the plunge and try the more intensive interface, as SDXL was a 2-hour render and is now 14 seconds. Am I about to throw in the towel, as it feels like way more trouble than it's worth? Underwhelmed and kinda disappointed. Am I crazy, or does anyone else feel like this?! Perhaps SDXL is a good idea on paper, but a different flavor of ice cream may not be the next big thing - idk, maybe I am missing some important piece. I would be very curious what people's opinions are.

With some great prompts and a couple of good LoRAs, you can drop a 3000x3000 image on a 1.5 set of dimensions at 0.3-0.4 denoise with a CFG of 7.5-10 and churn out high-res masterpieces starting at 768x768, which may only need a little inpainting and minor upscaling. Running 768x1344s, regardless of the inputs, I feel like I want to throw up in my mouth, getting results like your first day of Stable Diffusion without embeddings, LoRAs, and using base models. It concerns me that the only refiner I seem to find for SDXL is the base model, which is universally garbage; running images through it seems to make things even worse, even at 80% or 70% shift.

I am just wondering if anyone else feels the same? Just like 1.5, when you start too big it overwhelms the AI, and by that standard I think SDXL and even the upcoming Cascade are just flawed: start small, get something amazing, scale appropriately. When I look at top renders on Civitai, a lot of the SDXL ones don't really impress me; the hyper-realism is better-ish, but all other styles are just as good or worse, and often in far less quantity. Can 1.5 outperform SDXL as the more veteran and polished system? I am curious to try for another week or two, and then I might just hold off for a year until it gets dialed in more. Thoughts and feedback welcome; really not sure what to think.
@yosribengaidhassine9299 17 hours ago
I'm trying to swap a face in an image, but it gives me back the same image. Why?
@user-xj3fg3bd3d 17 hours ago
What should I do if there's a message that some node types were not found?
@aaronhkg 20 hours ago
The background music is too loud... either speak louder or just remove it entirely.
@LivLiveLegends 1 day ago
No depth files to download... the page doesn't have the links anymore.
@fitnesswind 1 day ago
How can I do the same thing in a video?
@rooqueen6259 1 day ago
Has anyone run into the loading of 2 new models stopping at 0%? I also had a case where the loading of 3 new models reached 9% and then wouldn't continue. What is the problem? :c
@apgamer4053 1 day ago
3:28 Can I take that as legal advice? XD
@soulife8383 1 day ago
9:40 ImportError: cannot import name 'Undefined' from 'pydantic.fields'
@soulife8383 1 day ago
Found this solution under someone with the same error installing A1111: go to your stable-diffusion-webui folder, open requirements_versions.txt, and add the line pydantic==1.10.11
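In case anyone wants to copy it exactly: the file is just a list of pinned packages, one per line, so the pin goes at the bottom. The version number comes from that thread, so no guarantees it's the ideal one, but the reasoning is that pydantic 2.x removed Undefined from pydantic.fields while 1.x still has it:

    # stable-diffusion-webui/requirements_versions.txt
    # Pin pydantic to a 1.x release so 'Undefined' can still be
    # imported from pydantic.fields (it was removed in pydantic 2.x)
    pydantic==1.10.11

Then relaunch webui-user.bat so the venv picks up the downgraded package.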
@weishanlei8682 1 day ago
I am sure this tutorial is also AI generated!
@sebastiankamph 1 day ago
Probably, yes!
@soulife8383 2 days ago
I'm just getting started with my journey into Stable Diffusion. One thing that isn't super obvious to me: I understand that Forge, A1111, etc. are just browser front ends for SD, but when you follow these install guides, does the front end come bundled with SD? Or am I supposed to install SD from its website first and then follow these guides? Edit: also, is it safe to have multiple front ends installed?
@sebastiankamph 1 day ago
You can install multiple front ends. SD is just that .safetensors file the front end reads. So you're good with just installing a1111, Forge, Comfy etc.
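Roughly speaking, each front end ships its own folder tree and the checkpoint just gets dropped into it. A sketch of the usual A1111 layout (names are illustrative and vary slightly between front ends):

    stable-diffusion-webui\                        <- the front end (A1111)
        models\
            Stable-diffusion\
                v1-5-pruned-emaonly.safetensors    <- the model, i.e. "SD" itself

So the install guides bundle everything except the checkpoint, and A1111 will even download a default 1.5 checkpoint on first launch if it doesn't find one.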
@ghstproject 3 days ago
Let's be honest here. We are not looking to add any clothing at all to the pictures we are working on LMAO!
@vurmamivurdu 3 days ago
Can this somehow work with Fooocus?
@EvokAi 3 days ago
Can a 2GB dedicated GPU handle this?
@sebastiankamph 1 day ago
It will be tough. Check out cloud solutions like ThinkDiffusion
@AD-Dom 3 days ago
Ohhh, but it’s absolutely way more powerful than that blob. This is the kind of thing that is for real designers…
@nahhkan4330 3 days ago
I am eagerly waiting for the video tutorial
@evanmars7547 3 days ago
Imagine if this guy was your teacher, and the way he's teaching is exactly like this
@sebastiankamph 1 day ago
I'll be your YouTube teacher, np
@beatz_on2824 4 days ago
Is it available for Android?
@ErmilinaLight 4 days ago
Thank you very much for the tutorial! I wonder what I should fix if some downloaded models simply give a grey box as a result instead of an actual image?
@Sarubotai 4 days ago
Yeah, physically impossible to extract this file lmao... something's not right with it regardless of what unzipper you use.
@Warrioroffaith11 4 days ago
How do you create a consistent body? Because I know how to do the face swap, but I can't seem to get a consistent body.
@gohan2091 4 days ago
How does this compare to Rope? (Hillobar)
@sebastiankamph 1 day ago
More options here. Similar base models.
@BWcapture 4 days ago
It's been 20 minutes and still my prompt isn't finished generating.
@ErmilinaLight 4 days ago
THAAAAAANK YOUUUUU!!!!! Do you allow using your styles doc to generate images for commercial purposes?
@sebastiankamph 1 day ago
Happy to help! Yes, go for it :)
@TheRedflash777 4 days ago
Hi, I have 16GB of VRAM, but after pressing Generate it takes forever for Cascade to generate. Why?
@sebastiankamph 1 day ago
Cascade is really slow tbh. Is it worth it? 50/50
@hishadman 5 days ago
Can we do it on video?
@gu9838 5 days ago
Thanks, this helped a lot!
@sebastiankamph 1 day ago
Glad to hear!
@AD-Dom 5 days ago
Still no SDXL segmentation? That's highly frustrating.
@yo-fd9jy 5 days ago
liked and subscribed 😲
@sebastiankamph 1 day ago
Happy to hear it, welcome aboard!
@soulife8383 5 days ago
EDIT: Maybe I should have posted this at the end - I've been reading and reading and reading and still have so many questions, but I think installing and playing around will answer them. It's my understanding that A1111 is just a front end and the backend/foundation is Stable Diffusion 1.5. Silly question though: does this front end come bundled with the backend? If not, do you have a tutorial for installing SD 1.5? Thanks!
@sebastiankamph 1 day ago
It's the front end, yes. The model (.safetensors) file is Stable Diffusion. You're using A1111 to access it. Sort of.
@yosribengaidhassine9299 5 days ago
Which is better, ReActor or IP-Adapter?
@arslalah 5 days ago
My images are grey.
@nitesharora4251 5 days ago
Why are my generations taking so long now? They didn't initially; it started after I changed the Live preview setting. What should I set it to?
@RECORD_LAiBEL 5 days ago
Super cool, I'm making smooth jazz/fusion tracks and I'm having a ton of fun!
@sebastiankamph 1 day ago
Nice, happy to hear it! :)
@user-nl2kr1nk9s 6 days ago
Lol, the "Euler" pronunciation is correct, but the "C" in centaur is pronounced like "S", as in "centenary" or "centipede".
@bigglyguy8429 6 days ago
This red goes to this red here, the vae thing... WTF? How do I make an image? I don't even see a 'go' or 'generate' button? This is the most tarded GUI I've ever seen in my entire life!
@bigglyguy8429 6 days ago
Nodes, workflows, what are we even talking about? Why is it so hard to find a vid that actually explains this stuff, instead of expecting you to already know what a "node" is? I have no idea at all what a node is, none, zero, not a clue.
@sebastiankamph 1 day ago
If you're not a fan of node based systems, I would recommend checking out automatic1111, Fooocus or Forge.
@bigglyguy8429 1 day ago
@@sebastiankamph I already have Forge up and running, plus some other things that have this Comfy stuff running in the background. I still don't know what a node is.
@sebastiankamph 1 day ago
@@bigglyguy8429 the little square thing with the text in it that all the lines connect to.
@bigglyguy8429 1 day ago
@@sebastiankamph That's it? The boxy things? 0_o
@angelalmalaq 6 days ago
Doesn't work for video! Useless.
@christiannilsen2835 7 days ago
can we get them to blink also?
@AnbuVega 7 days ago
Just so I'm not misunderstanding: when you say your image prompts are free, is it just that it's an old video and now they are not actually free anymore? It's asking me to subscribe. I don't mind subscribing, I just want to make sure. Maybe some are free and some are locked behind a paywall, or are they all locked behind a paywall now? I notice this on a lot of your older videos. Thanks.
@sebastiankamph 1 day ago
You are correct, they used to be free and they are not anymore. Sorry for the confusion.
@user-zx6qs7bl5c 7 days ago
Stick to tech, you're not a comedian at all 😂 jeez
@dunne54321 7 days ago
I drag an image into ComfyUI but nothing happens... Also, ComfyUI is not loading my models from my Automatic1111 path. Is there anything I might be doing wrong? Thanks in advance.
@sebastiankamph 1 day ago
You can only drop images that were generated with Comfy
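For the models path part: ComfyUI ships an extra_model_paths.yaml.example file in its root folder, and the usual fix is to rename it to extra_model_paths.yaml and fill in your webui location. A rough sketch (base_path is a placeholder, use your own install path):

    a111:
        base_path: C:/stable-diffusion-webui/
        checkpoints: models/Stable-diffusion
        vae: models/VAE
        loras: models/Lora
        embeddings: embeddings
        controlnet: models/ControlNet

Restart ComfyUI afterwards and the A1111 models should show up in the loader nodes.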
@handsomejack672 7 days ago
Please cover Hyper-SD.
@MauricetePas 8 days ago
Don't forget to enable your ControlNet module, which very subtly happens here: 4:34. And I was just wondering why my images didn't look anything like me. 😂 Other than that, great tutorial!! Thanks! 👍
@sebastiankamph 1 day ago
Glad you found it! :D
@kunalmhatre7998 8 days ago
I am getting an error message after running webui-user.bat:
- Creating venv in directory
- Unable to create venv in directory (exit code 1)
stderr: C:\Python27\python.exe: No module named venv
Launch unsuccessful. Exiting.
Please help me with this.
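Edit: the stderr line seems to be the whole story. C:\Python27\python.exe is Python 2.7, which has no venv module (venv only exists in Python 3), and A1111 expects Python 3.10.x. Installing 3.10 and pointing webui-user.bat at it should get past this; the interpreter path below is a placeholder for wherever your 3.10 actually lives:

    @echo off
    rem Use a Python 3.10 interpreter instead of the old C:\Python27 one
    set PYTHON=C:\Users\you\AppData\Local\Programs\Python\Python310\python.exe
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=
    call webui.bat

Delete the half-created venv folder before relaunching so it gets rebuilt with the right interpreter.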
@cosmicstuff44 8 days ago
So is this going to be available to install locally through GitHub like the other SD and SDXL releases?
@sandnerdaniel 8 days ago
Is there a use case for Heun samplers? I always get very subpar results with Heun when testing pretty much anything. Maybe it is better for cartoons?
@nasimulogical 8 days ago
I wish you'd speak a little louder
@-Belshazzar- 8 days ago
But how can one install Forge over an existing A1111 install so it will use the hundreds of GB of models that are already on my hard drive?
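(Answering my own question after some digging: Forge's webui-user.bat ships with a commented-out block for exactly this. Uncommented, and with A1111_HOME as a placeholder for your own install path, it looks roughly like:

    set A1111_HOME=C:\stable-diffusion-webui
    set VENV_DIR=%A1111_HOME%\venv
    set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
     --ckpt-dir %A1111_HOME%\models\Stable-diffusion ^
     --vae-dir %A1111_HOME%\models\VAE ^
     --embeddings-dir %A1111_HOME%\embeddings ^
     --lora-dir %A1111_HOME%\models\Lora

The models stay where they are; Forge just reads them out of the A1111 folders.)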