Followed a guide on the internet to install Stable Diffusion and it didn't even cover xformers. Learned about it a week later, and this one little line literally sped up my renders by like 45%, and I'm not joking. Some renders were taking 3 minutes (I use a lot of LoRAs) and this cut them down to around 1:30, some even faster. Thank you!!!!
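For anyone wondering what that line is: in AUTOMATIC1111 it's the `--xformers` launch flag, added to `COMMANDLINE_ARGS` in `webui-user.bat`. The same memory-efficient attention speedup, sketched with the diffusers library (model ID and prompt here are just placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes diffusers + xformers are installed and a CUDA GPU is available
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # the xformers speedup

image = pipe("a japanese alley at night, neon signs, rain").images[0]
image.save("alley.png")
```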
@OlivioSarikas 1 year ago
#### Links from the Video ####
Join my Live Stream: kzbin.infopuxTMSqC1bc
SketchFab Japanese Alley: sketchfab.com/3d-models/japanese-street-at-night-fb1bdcd71a5544d699379d2d13dd1171
Buy me a Coffee: www.buymeacoffee.com/oliviotutorials
Join my Facebook Group: facebook.com/groups/theairevolution
Join my Discord Group: discord.gg/XKAk7GUzAW
@santosic 1 year ago
I can't believe I never thought of using Depth to create scenery in a certain perspective!! Wow. That is actually a really good use of that feature. I've spent a long time trying to get the camera in the right spot, and I could have just done it this way. Thanks for the clever tip!
@martin-cheers 1 year ago
I second that.
@cekuhnen 1 year ago
I work with Blender and extract the depth image from the 3D scene. Super useful.
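A minimal Blender Python sketch of that depth export, assuming a default scene whose view layer is named "ViewLayer" (the idea: normalize the Z pass and invert it so near objects are bright, which is what the ControlNet depth model expects):

```python
import bpy

scene = bpy.context.scene
scene.view_layers["ViewLayer"].use_pass_z = True   # enable the Z (depth) pass

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rlayers = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")  # squash raw Z into 0..1
invert = tree.nodes.new("CompositorNodeInvert")        # near = bright, far = dark
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rlayers.outputs["Depth"], normalize.inputs[0])
tree.links.new(normalize.outputs[0], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], composite.inputs["Image"])

bpy.ops.render.render(write_still=True)  # depth map lands at scene.render.filepath
```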
@zoybean 1 year ago
Woo! I asked if you could post something like this before and you did it, awesome! Thank you so much!
@OlivioSarikas 1 year ago
Thank you. Did you talk to me about that in my last live stream?
@zoybean 1 year ago
@@OlivioSarikas Yep, that was me!
@jaredbeiswenger3766 1 year ago
Wonderful tip. I've been trying to do this with sketches, but I'm excited to free myself from the tunnel background with buildings on either side stretching infinitely into the distance.
@MarkDemarest 1 year ago
FIRST, and I #CantWait to get into it! 💪🧠 -Thanks, Olivio!! 🎉
@aggressiveaegyo7679 1 year ago
It's incredible. I didn't even think that you could just take a screenshot of any scene in a movie, or a video game. Take a recognizable map and create your own version. I think we have all already taken a picture of our apartment and played with interior design, for example classicism or Victorian style.
@nathanielblairofkew1082 1 year ago
What you really need is a nested-type default slider. Nesting sliders, or something similar, will be absolutely necessary to handle future complexities before they are simplified.
@bryan98pa 1 year ago
Such an interesting video. The way we can control the weight, and how that affects the final results, is something I learned today 👍
@OlivioSarikas 1 year ago
Thank you :)
@RetzyWilliams 1 year ago
Wow, such a great idea. Awesome! 👏
@Aristocle 1 year ago
If I want to fill an empty room with furniture using this technique, but leave the position of the fixtures and walls unchanged, how should I set up the problem? And what if I want it to suggest random arrangements of the furnishings (brainstorming)?
@dreamzdziner8484 1 year ago
I knew those sliders in ControlNet could do wonders :-) Gr8 video mate 👌
@temarket 1 year ago
hey man, your channel rocks!
@paulstrife 2 months ago
Was there ever a technique for getting the exact same scene from multiple angles? It's important for my needs to show the same objects. :(
@nonameishere7234 11 months ago
Thanks for sharing the tip. You're awesome ;)
@Moedow 1 year ago
How does the guidance parameter define how long it's going to be used when the guidance start parameter does that already?
@niteshghuge 1 year ago
Hey Olivio, I am trying to install AUTOMATIC1111, but the issue is that my laptop keeps hanging. Please tell me the system requirements to run A1111 on a local machine.
@coda514 1 year ago
Great video, informative as always.
@OlivioSarikas 1 year ago
Thank you very much :)
@audiogus2651 1 year ago
Heck yah, great video! I have been using game screenshots for this sort of thing too, works great!
@OlivioSarikas 1 year ago
Great Idea! :) Thank you
@aronhommer1942 1 year ago
Is it also possible to create an output that doesn't look cartoony with this method?
@fluffsquirrel 1 year ago
Maybe that's just the model he's using?
@OlivioSarikas 1 year ago
Yes, of course. I just used anime here because it is easier to prompt for.
@fluffsquirrel 1 year ago
@@OlivioSarikas Thank you!
@BlackMita 1 year ago
What’s with the annotation result preview? I’ve never seen that in 1111 and I don’t get how it relates to the output
@ryry9780 1 year ago
Not all that different from a project I did before -- taking a picture of a person from the internet and turning it into anime-style fanart. The only things the original picture and the final product have in common are the pose of the character and the camera angle. ControlNet Depth and Canny were very important, along with ControlNet Clip Image (style).
@cekuhnen 1 year ago
Olivio, does ControlNet generate a depth map? I was under the impression that you need to supply one.
@animestories5084 1 year ago
Any way we can turn the image to different (fill-in-the-blank) angles and get consistency so it can be used for 3D scenes? For example: you take an image and keep it as a texture on the model, then move the angle a bit so that the depth map can read it a different way, but you still have an img2img step involved, which can be tested to stay texturally consistent. Of course the UV maps will update, but you now have textures that can be used for 3D animation. Please reply to let me know lol.
@entrypoint2009 1 year ago
Using Guess Mode gives very good results.
@OlivioSarikas 1 year ago
Thank you, I will try that.
@pedrodeelizalde7812 1 year ago
Hi, I have ControlNet but my settings are different. For a start, there is no annotator result next to the image. Also, the preprocessor has three options: depth_leres, depth_midas, and depth_zoe. Then below, the options are Control Weight, Starting Control Step, Ending Control Step, Preprocessor Resolution, Remove Near %, and Remove Background %. Anyone know why my settings are not the same as his?
@kikeluzi 1 year ago
Same here... And also... I think it's not working :c
@pedrodeelizalde7812 1 year ago
@@kikeluzi It did work for me using depth_leres. But I had to play with the settings to get it...
@kikeluzi 1 year ago
@@pedrodeelizalde7812, I'll try this one then. Thank you!!! 😁 I was using "depth_midas". *edit: I just needed to download a model first... ;u;
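For reference, those three options are different monocular depth estimators. A rough way to compare them outside the WebUI is the standalone controlnet_aux package (the A1111 extension bundles its own copies of these annotators); the file names here are placeholders:

```python
# pip install controlnet_aux
from controlnet_aux import MidasDetector, ZoeDetector
from PIL import Image

img = Image.open("scene_screenshot.png")  # hypothetical input image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")

midas(img).save("depth_midas.png")  # fast, softer gradients
zoe(img).save("depth_zoe.png")      # slower, often crisper geometry
# controlnet_aux also ships LeresDetector, matching the depth_leres option
```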
@FikaBakilli 1 year ago
Hello! As always, everything is top notch, for which you have a lot of respect from me. In the video you mentioned that the resulting images can be converted into 3D. Could you show us how all these pictures could be converted back into a 3D model? :)
@OlivioSarikas 1 year ago
Hi, thank you. I think you misunderstood me. I don't know how to turn them back into 3D. It can be done to a certain degree by modeling the same scene, but only for a kind of zoom effect, not an actual 3D space, as far as I know.
@CHACHILLIE 1 year ago
You can camera project them back onto the 3D model
@user-jk9zr3sc5h 1 year ago
I always wondered what those sliders would do-
@petzme8910 1 year ago
Can I add 1girl to the prompt if I want the girl to stand in the middle of the street? 😊
@OlivioSarikas 1 year ago
Yes, you can also use Multi-ControlNet to pose her exactly in a specific position.
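A sketch of that Multi-ControlNet idea using the diffusers API, combining a depth map for the scene with an OpenPose map for the character (the two input images and the conditioning scales are placeholders to tweak):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Two ControlNets: depth locks the scene, openpose locks the character's pose
depth_cn = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pose_cn = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[depth_cn, pose_cn],
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("street_depth.png")  # placeholder depth map
pose_map = load_image("girl_pose.png")      # placeholder OpenPose skeleton image

result = pipe(
    "1girl standing in the middle of a neon street at night",
    image=[depth_map, pose_map],               # one control image per ControlNet
    controlnet_conditioning_scale=[1.0, 0.8],  # per-ControlNet weights
).images[0]
result.save("posed_street.png")
```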
@digitaltutorials1 1 year ago
It's interesting because ControlNet pulls depth out of 2D, but with SD in Blender the depth is 100% calculated, so it's more accurate to use the SD plugin (which I haven't tested yet, but I assume it is underdeveloped compared to Auto's).
@cobr3545 1 year ago
ControlNet uses a rendered depth map, same as Blender. It's not estimated from 2D if you supply one.
@JulienTaillez 1 year ago
What a brilliant idea!
@OlivioSarikas 1 year ago
Thank you :)
@RiyadJaamour 1 year ago
Hi Olivio, is there still a way to use Stable Diffusion with all the changes and additions in Google Colab? Or, if there is an alternative to Google Colab other than local installation, that would be great! Ty
@elarcadenoah9000 1 year ago
Do you have a video on Stable Diffusion XL inserting text?
@OlivioSarikas 1 year ago
Nope, not yet. I could make one
@elarcadenoah9000 1 year ago
@@OlivioSarikas cool bro you are the chosen one
@sb6934 1 year ago
Thanks!
@tarekramadan1867 1 year ago
I have used kind of the same method, but for an interior architecture scene with images I generated. I will send it.
@facex7x 1 year ago
Hey, I don't have "depth" under Model for ControlNet. How did you get that?
@OlivioSarikas 1 year ago
Maybe your ControlNet version is outdated, or you didn't download the depth model.
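If it's the missing model, here is a sketch of fetching the v1.1 depth model into the extension's models folder (repo and filename from lllyasviel's ControlNet-v1-1 release; the path assumes a default A1111 install, so adjust to yours):

```python
from huggingface_hub import hf_hub_download

# Downloads the depth ControlNet weights where the A1111 extension looks for them
hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11f1p_sd15_depth.pth",
    local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
)
```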
@The-Inner-Self 1 year ago
Have you experimented with video, using the 3D models to walk through the scene and then plugging that into the depth control map?
@stillfangirlingtoday1468 1 year ago
This is probably a stupid question, but do you think it's possible for the AI to generate the same image but with different lighting? It would be awesome.
@OlivioSarikas 1 year ago
Try rendering the scene first, then use Canny as the control map with that image as input, and change the daylight description. See if that helps.
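A minimal sketch of that preprocessing step: turn the render into a Canny edge map, then use it as the ControlNet input while varying the lighting words in the prompt (file names and thresholds are placeholders):

```python
import cv2
import numpy as np
from PIL import Image

render = cv2.imread("scene_render.png")          # hypothetical rendered scene
gray = cv2.cvtColor(render, cv2.COLOR_BGR2GRAY)  # Canny works on a single channel
edges = cv2.Canny(gray, 100, 200)                # low/high hysteresis thresholds
control = Image.fromarray(np.stack([edges] * 3, axis=-1))
control.save("canny_control.png")                # feed this to the canny ControlNet model
```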
@stillfangirlingtoday1468 1 year ago
@@OlivioSarikas Oh, I will definitely try it! Thank you for replying!
@NasserQahtani 1 year ago
How beautiful your presentation is.
@sirmalof3255 1 year ago
Where are you from, Olivio? I could just listen to your accent for hours and hours. It's so cute :)
@OlivioSarikas 1 year ago
Thank you. I'm from Vienna. Well, I'm from Germany, but I live in Vienna :)
@sakifishmam1436 1 year ago
It's now banned. Anyone know any alternatives?
@kikeluzi 1 year ago
I didn't find anything about that. ControlNet was banned? 🤔 How do you know, and where did you find that?
@michail_777 1 year ago
Hi Olivio. I don't remember the name, but Stable Diffusion has an extension that bends the image and creates a corridor. You can create streets and whatever you want. But I have another question: do you know how to achieve stability when you create animation with the ControlNet script (img2img), or when you process a batch at once? The face can already be stabilized, but the clothes are always different. Kind of similar, but not the same. If you know how and what to do, show us, please.
@smortonmedia 1 year ago
I wish MidJourney could do something like this... generate depth maps or use a depth map as perspective control
@audiogus2651 1 year ago
It used to come up quite a bit in the weekly chats a few months ago, and they did say it was being looked into.
@OlivioSarikas 1 year ago
Yes, MJ really needs stuff like that ASAP
@Rasukix 1 year ago
Also worth mentioning that you used a different seed for each render.
@Hazzel31337 1 year ago
Meta has a new AI called Segment Anything. It goes nicely with AI-generated images for cutting out elements; maybe worth content for you.
@OlivioSarikas 1 year ago
I will have a look at that. Thank you.
@LouisGedo 1 year ago
👋
@HarryJPotter 1 year ago
Clever girl...
@User-eg4ws 5 months ago
So you basically stole art for a sample? Aight.
@dezenho 1 year ago
So you waste your time with AI now... a lifetime is not worth it... you are old... control it... you are not happy with this... pay attention to what your life is now...
@Thagnoth 1 year ago
Yes, instructor… I shall abandon all my enjoyment of technology… Thank you for showing me your text-only Amish hypnosis technique……