Accurate Variations using Z-Depth Element and Stable Diffusion

3,482 views

Matt Hallett Visual

4 months ago

Skip the preprocessor and use a perfect Z-depth map from your rendering elements. This method works with any rendering engine, is faster, and provides much more accurate results.
If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-st...
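
For anyone scripting this outside a WebUI, here is a minimal sketch of the same idea using the Hugging Face diffusers library. This is not the exact workflow from the video, and the checkpoint and ControlNet names are placeholder assumptions; the point is simply that the rendered Z-depth element is fed straight to a depth ControlNet instead of running a depth-estimator preprocessor on the beauty pass.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Assumed SDXL depth ControlNet and base checkpoint; swap in the models you actually use.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The Z-depth render element, already auto-contrasted to use the full greyscale range.
depth_map = Image.open("zdepth_element.png").convert("RGB")

image = pipe(
    prompt="photo of a modern living room interior, natural light, photorealistic",
    image=depth_map,                     # no preprocessor: the render element IS the control image
    controlnet_conditioning_scale=0.8,   # ControlNet weight
    num_inference_steps=30,
).images[0]
image.save("variation.png")
```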
Links from the Video
Checkpoint Model: civitai.com/models/140737/alb...
Collection of SDXL Controlnets: huggingface.co/lllyasviel/sd_...
Personal Links
Website: hallettvisual.com/
Website AI for Architecture: www.hallett-ai.com/
Instagram: / hallettvisual
Facebook: / hallettvisual
Linkedin: / matthew-hallett-041a3881

Comments: 24
@studio2513 4 months ago
A more in-depth video about your prompts would be awesome. Thank you for sharing!
@matthallettai 4 months ago
I have so many videos to make and never enough time! It's the editing and retakes that kill me; I'm just not used to talking out loud formally like this. I could make the prompting video less structured and see if that's interesting and gets likes.
@studio2513 4 months ago
@matthallettai Don't bother too much with editing :) Folks learn a lot from Itoo or Chaos webinars with zero editing.
@ammaralammouri1270 3 months ago
Thank you, YouTube algorithm ❤, what a great gift. I was struggling with Z-depth. Thanks for the tips.
@matthallettai 3 months ago
That's such a wonderful comment. Thank you.
@renwar_G 4 months ago
Great video as always
@matthallettai 4 months ago
Appreciate that, cheers.
@everythingeverybody6526 4 months ago
Thanks for teaching.
@matthallettai 4 months ago
Thanks EEB6526! I enjoy it. More vids to follow.
@heartwarrior80 4 months ago
Thanks Matt!
@matthallettai 4 months ago
Thanks for the comment!
@titanoplastik 4 months ago
Super video, thank you for sharing your knowledge. I'm trying to get acquainted with ComfyUI and have already run into some issues with self-created depth maps, so I'll definitely give it another try now. Regarding ControlNet, there is also the option to upload a segmentation or normal map. Have you had any experience with this, and can you share any tips on what to look out for or how to work with it? Thanks again for your videos, I really appreciate them.
@matthallettai 4 months ago
I think for ComfyUI the trick is still to max out the greyscale range with Auto Contrast. I've had success with segmentation, but it only works in SD 1.5 since there's no seg model for SDXL. You just skip the preprocessor and make sure your colors are unique for each object. You can even use "Segment Anything" and pick a single color from your Materials or Object element so you can change a single object's material in SD, like changing a countertop from granite to marble.
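
For anyone who would rather do that Auto Contrast step in a script than in Photoshop, a minimal Pillow sketch (the filename is a placeholder): ImageOps.autocontrast stretches the darkest pixel to black and the brightest to white, which is effectively what maxing out the greyscale range means here.

```python
from PIL import Image, ImageOps

depth = Image.open("zdepth_element.png").convert("L")  # single-channel greyscale
stretched = ImageOps.autocontrast(depth, cutoff=0)     # stretch to the full 0-255 range
stretched.save("zdepth_autocontrast.png")
```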
@svsguru2000 1 month ago
Is it possible to feed it an empty room and have it fill it with furniture?
@matthallettai 25 days ago
Totally. Follow the same steps without any furniture, but raise the "Starting Control Step" to 0.1 and the "End Step" to 0.8. You may have to adjust those; they're just off the top of my head. You'll also need to describe the room very clearly in the positive prompt: "Photo of a modern interior, with a sofa in the centre of a large room with a rug, and a window in the background", something like that.
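
In script form, the diffusers equivalents of those sliders are the control_guidance_start / control_guidance_end arguments. A hedged sketch, reusing the `pipe` object from the earlier snippet and assuming an empty-room Z-depth file:

```python
from PIL import Image

# Z-depth element rendered from the unfurnished room (placeholder filename).
empty_room_depth = Image.open("empty_room_zdepth.png").convert("RGB")

# `pipe` is the SDXL depth-ControlNet pipeline from the earlier sketch.
image = pipe(
    prompt=("photo of a modern interior, with a sofa in the centre of a large room "
            "with a rug, and a window in the background"),
    image=empty_room_depth,
    control_guidance_start=0.1,   # ~ "Starting Control Step": let furniture appear first
    control_guidance_end=0.8,     # ~ "Ending Control Step": release control near the end
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
image.save("furnished_room.png")
```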
@alainhanni4874 4 months ago
Thank you so much for all you're sharing. I'm trying to grasp this whole new technology... A video on your prompting technique would be awesome. Also, there are so many ways to use AI. What are the pros and cons of A1111, ComfyUI, etc.? Should we learn all of these workflows?
@matthallettai 4 months ago
I would start with Automatic1111 or Fooocus; I have a video on getting started with those. Fooocus is the easiest way to start generating high-quality images from text, but you want to use Automatic1111 for editing images like in this example. The first thing to do is just play around with one of those two interfaces (they're not apps, they're called WebUIs, and the backbone application is Stable Diffusion). Learn the basics, like how prompts change the results and how denoise values work when using img2img. Just have fun making images and playing with the settings, then apply it to something practical for your work. ComfyUI is the most powerful, but it's easy to make mistakes and it's node based, so only switch to that if you either love nodes or can't achieve something in A1111. Hope that helps!
@alainhanni4874 4 months ago
Thank you, Matt!!
@studio2513 4 months ago
Hey Matt! In exterior stills, how do I strictly preserve my building geometry while radically changing the weather and/or lighting conditions? For instance, turning a day rendering into a night one. P.S. Your tips are EXTREMELY helpful, thank you!
@matthallettai 4 months ago
That's always been a huge challenge. I cover it in detail on my website in the episode "100 Renders"; I was going to add it to March's monthly subscription if you're interested. There's no perfect workflow, since the more you change from any base rendering, like turning day to night, the greater the odds that a wall will turn into a window, or vice versa. BUT with enough ControlNets turned on, and accepting that you'll have to discard most of the generations, it works well enough for inspiration and client feedback. With exteriors, because of the limits on AI resolutions, we can't generate the same classic 5K client "Final Render" using this method. And thank you for the kind words. If I ever get properly monetized on YouTube I'll put a lot more effort into uploading videos here. There's no limit to the AI use cases for us visualizers!
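
As a rough illustration of "enough ControlNets turned on", here is a hedged multi-ControlNet sketch in diffusers: a depth and a canny ControlNet pin the geometry down from two directions while the prompt pushes the lighting from day to night. The model names, control images, and weights are assumptions, not the video's exact setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Two ControlNets: one for depth, one for edges (assumed SDXL checkpoints).
controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

depth_map = Image.open("zdepth_element.png").convert("RGB")   # Z-depth render element
edge_map = Image.open("canny_edges.png").convert("RGB")       # pre-made edge map of the building

night = pipe(
    prompt="photo of a modern house exterior at night, warm interior lights, blue dusk sky",
    image=[depth_map, edge_map],                # one control image per ControlNet
    controlnet_conditioning_scale=[0.9, 0.6],   # weight each ControlNet separately
    num_inference_steps=30,
).images[0]
night.save("night_variation.png")
```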
@studio2513 4 months ago
@matthallettai So then I guess it's currently more practical to throw the model into Vantage, add a dusk HDRI, and in a couple of minutes it's ready; then send the output to SD. Sounds logical?
@matthallettai 4 months ago
@studio2513 Exactly. You need those blue overtones and warm interiors for the AI to "latch" onto. Unless you just have a SketchUp model or basic massing that's not even textured, the more work you put into your base rendering, the more SD will deliver the results you're expecting. I hope that makes sense. You can connect with me on Facebook too, I post a lot on there. Same name, same logo.
@atlas_rhea 4 months ago
The exported Z-depth for me has too much visible gradation, so I get this swirl effect in the scene. Any help?
@matthallettai 4 months ago
That does happen, and good question.
1) The first thing to do is dial back the max depth in your render element so that you get the widest greyscale range natively in your render. You can even measure the distance from your camera to the furthest object in the scene, like a tree or a wall, but NOT the sky; the sky will always be full black or white no matter what your max depth is. So that's your MAX Z-depth. You only need to adjust your MIN when the camera is 5m+ away from the foreground object.
2) Go to [1:00] in the video and review the Photoshop settings. It's important to save your Z-depth as a 16- or 32-bit file, so all that hidden depth information is still there when you Auto Contrast. A little banding is OK and doesn't always show up in your AI image.
3) You can try lowering the weight of the depth ControlNet.
4) You can try applying a slight Gaussian blur to the Z-depth in Photoshop, or manually blurring heavy, non-faded lines.
5) Skip the render element and use a preprocessor on the beauty pass. Nothing is working at this point, so don't spend too much time on it.
There are way more things to experiment with!
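
A hedged sketch of steps 1-4 done in a script instead of Photoshop: keep the depth in 16 bits, stretch it to the full range, and apply a slight blur to soften banding. The filenames are placeholders, and it assumes the Z-depth was exported as a 16- or 32-bit TIFF that OpenCV can read.

```python
import cv2
import numpy as np

# Read without converting to 8-bit so the hidden depth range survives.
depth = cv2.imread("zdepth_element.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Stretch to the full 16-bit range (the Auto Contrast step), still in high bit depth.
stretched = cv2.normalize(depth, None, 0, 65535, cv2.NORM_MINMAX)

# A slight Gaussian blur to soften banding / harsh gradation before it reaches ControlNet.
blurred = cv2.GaussianBlur(stretched, (5, 5), 0)

cv2.imwrite("zdepth_clean.png", blurred.astype(np.uint16))  # 16-bit PNG output
```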