
Practical Introduction for TyDiffusion

5,030 views

Matt Hallett Visual

1 day ago

TyDiffusion is an implementation of Stable Diffusion in 3ds Max. In this video I'll walk you through the theory and help you understand how Stable Diffusion works in a practical, everyday sense, with real-world examples.
Links from the Video
docs.tyflow.co...
Contact Links
Website: hallettvisual.com/
Website AI for Architecture: www.hallett-ai...
Instagram: / hallettvisual
Facebook: / hallettvisual
Linkedin: / matthew-hallett-041a3881

Comments: 17
@LudvikKoutnyArt 1 month ago
I believe the technical term for an AI enthusiast is a "proompter" :)
@AB-wf8ek 1 month ago
Not true. Although language is an integral part, with complex node-based processes it's only a fraction of it.
@ramdpshah 1 month ago
Thanks for the tutorial🎉🎉
@USEFization 1 month ago
You are just a genius. Amazing job you are doing!
@matthallettai 1 month ago
Ah thanks man! That's so kind of you.
@YansRiegel 1 month ago
Thanks! Great one
@matthallettai 1 month ago
Glad you liked it!
@R1PPA-C 17 days ago
Have you worked with the animation side of things yet? I'm struggling to get the animations to come out like the single images do... the results aren't wildly different, but it's almost like it's using a different model... Also, how do you have it set up so that you can see the image as it's generating? Mine just goes through the whole process and then outputs the final image. I mainly want to see what's happening as the animation is processing, as currently I have to wait for the whole sequence to be finalised before I see what the result will look like. Thanks :)
@matthallettai 16 days ago
You're always going to have that weird morphing effect with frame-by-frame SD animation. No matter what tricks you try, no frame is 100% the same as the last, at least currently. I'm sure someone out there is working on it. The AI video you see now is made with video-trained models. What we need is a hybrid or a ControlNet designed for frame-by-frame img2img denoising. The current tech is AnimateDiff and Deforum - see the examples on this channel. Personally I like SVD, but that has little control.
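For anyone curious what "frame-by-frame img2img denoising" means concretely, here is a minimal sketch using the Hugging Face diffusers library, not tyDiffusion's internals - the model ID, file paths, prompt, and strength value are placeholder assumptions. Even with the seed pinned on every frame, each frame is denoised independently of its neighbours, which is exactly where the morphing/flicker comes from.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder model ID and paths -- substitute your own checkpoint and renders.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "photorealistic architectural exterior, golden hour"

for i in range(120):
    frame = Image.open(f"render/frame_{i:04d}.png").convert("RGB")
    # Re-seed identically on every frame so only the input image changes.
    # Each frame is still denoised independently, hence the flicker.
    generator = torch.Generator("cuda").manual_seed(42)
    out = pipe(
        prompt=prompt,
        image=frame,
        strength=0.35,  # low denoise keeps output close to the 3ds Max render
        generator=generator,
    ).images[0]
    out.save(f"out/frame_{i:04d}.png")
```

Lowering strength and fixing the seed reduces flicker but can never remove it, because no information is shared between frames during denoising.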
@R1PPA-C 16 days ago
@matthallettai Well, the issue I'm having is not the difference between frames, but that the initial outcome is completely different when doing a single frame with the same settings as when I hit animation. I said not wildly different, but sometimes they are... I train a model to be something I want for each frame, but when I go to animate, it's like I've used completely different prompts... I'm lost.
@matthallett4126 16 days ago
@R1PPA-C The more complex your scene, the more interpolation the AI does with what it "sees". The examples you've seen of other animations look smooth because of their simplicity in size and materials. Leaves and grass, for example, will change dramatically between frames no matter what you do. Small details change so much it's not worth it. Trust me, it's not you.
@omer133 1 month ago
Thank you for the video. What Stable Diffusion models can you recommend, specifically for interior design and for architecture separately?
@matthallettai 16 days ago
Don't bother with any model that claims it's good for interiors or architecture, unless it's a LoRA addon to experiment with for adding certain looks. My favored checkpoints right now are AlbedoXL 2.1 for exteriors, NightVision, EpicPhotogasm, Real Vision XL, and some others (spelling may be off... I'm away from my PC). Best to download popular XL models aimed at photorealism; ones showing portrait examples are OK. Then compare them with the XYZ plot script at the bottom of A1111 or Forge - it makes a handy grid for you to compare.
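The XYZ plot script is built into A1111/Forge, but the same compare-by-grid idea is easy to sketch outside the UI. A rough Python example with diffusers - the checkpoint filenames and prompt here are hypothetical stand-ins for whatever XL models you actually download:

```python
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

# Hypothetical local checkpoint files -- swap in the XL models you downloaded.
checkpoints = [
    "albedobaseXL_v21.safetensors",
    "nightvisionXL.safetensors",
    "epicphotogasm.safetensors",
]
prompt = "modern house exterior, photorealistic, overcast daylight"
cells = []

for ckpt in checkpoints:
    pipe = StableDiffusionXLPipeline.from_single_file(
        ckpt, torch_dtype=torch.float16
    ).to("cuda")
    # Same seed for every checkpoint, so the model is the only variable.
    generator = torch.Generator("cuda").manual_seed(7)
    cells.append(pipe(prompt, generator=generator).images[0])
    del pipe
    torch.cuda.empty_cache()

# Paste the results side by side, one column per checkpoint.
w, h = cells[0].size
grid = Image.new("RGB", (w * len(cells), h))
for i, img in enumerate(cells):
    grid.paste(img, (i * w, 0))
grid.save("checkpoint_grid.png")
```

Holding the seed and prompt fixed is what makes the grid meaningful: any difference between columns comes from the checkpoint, not the noise.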
@ivanibanez1273 1 month ago
Finally!!
@matthallettai 1 month ago
I hope you found it useful.
@jhgil2204 1 month ago
I want to know about sequence rendering!
@AB-wf8ek 1 month ago
To get animation with temporal consistency, you'll need to use something like ComfyUI, which is a browser-based node editor. Just diffusing over individual frames with a plugin like this will look very flickery.
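For reference, a ComfyUI workflow built in the browser editor can also be queued headlessly over its local HTTP API. A minimal sketch, assuming a default local ComfyUI install on port 8188 and a workflow exported from the UI via "Save (API Format)":

```python
import json
import urllib.request

# Load a workflow exported from the ComfyUI editor via "Save (API Format)".
with open("workflow_api.json") as f:
    workflow = json.load(f)

# Queue it on a default local ComfyUI instance.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response includes a prompt_id you can poll for finished images.
    print(json.loads(resp.read()))
```

This is handy for batch jobs (e.g. re-running an AnimateDiff workflow over many sequences) once the node graph itself is dialled in.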