As Einstein said, “If you can't explain it to a six-year-old, you don't understand it yourself.” Very easy to understand, thank you.
@musicbytea9363 7 months ago
Thank you for such a no-math video. It's very rare to find videos with a clear explanation of the intuition behind the problem. Once we grasp the idea, the math seems more manageable. Thank you so much!
@roshanpatel4037 3 years ago
dude your channel is a gold mine, keep up the great work
@DigitalSreeni 3 years ago
Glad you enjoy it!
@matthewmiller3653 4 years ago
Incredible. Can't wait for future videos. Big fan as always.
@DigitalSreeni 4 years ago
More to come!
@mdyounusahamed6668 1 year ago
I have never heard a better explanation than this.
@MM-op9vl 4 years ago
I always watch your videos with gratitude. I have studied basic image processing like CLAHE and segmentation with U-Net. Nowadays I'm studying RetinaNet, which is an object detection model, but some RetinaNet concepts like anchor boxes and transfer learning are difficult for me. So, if you have free time, it would be nice to upload a video about object detection. I'm always grateful to you. Thanks
@friendlydroid42 1 year ago
Awesome stuff! I like the hesitant pause at “backpropagation” - oh, I guess you know it if you are watching this, hahaha
@linhbui-vt1yz 7 months ago
Thank you for the great intuitive explanation!
@HandokoSupeno 1 year ago
How can you explain this topic so elegantly and clearly? Thank you.
@veganath 4 months ago
I was working my way through MIT Deep Learning Generative Models 2024 and was stuck on the introduction of epsilon for the loss calculation. Your instruction helped clarify many things; however, I'm still trying to get my head around all this.
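The epsilon here most likely refers to the reparameterization trick: instead of sampling the latent vector z directly from N(mu, sigma^2), you sample epsilon from N(0, 1) and compute z = mu + sigma * epsilon, so the randomness is isolated in epsilon and the expression stays differentiable with respect to the encoder outputs. A minimal sketch, with hypothetical mu and log_var values (names assumed, not taken from the video):

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for a 2-D latent space
mu = np.array([0.5, -1.2])        # predicted mean
log_var = np.array([0.1, -0.3])   # predicted log-variance

# Reparameterization: z = mu + sigma * epsilon, with epsilon ~ N(0, 1)
epsilon = rng.standard_normal(mu.shape)
sigma = np.exp(0.5 * log_var)
z = mu + sigma * epsilon

print(z)  # one sampled latent vector; the randomness lives only in epsilon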
@devanshsharma5159 1 month ago
Such a great explanation! Thank you so much :)
@onurserce 4 years ago
Thank you! Looking forward to the application!
@DigitalSreeni 4 years ago
Any time!
@chenmingyang3271 3 years ago
Thank you for your great video! I've seen a lot of notes that introduce too much of the mathematical part but neglect to explain why and how we need to use a VAE. Your video helps me understand why we need to learn a desirable distribution for the latent vector.
@DigitalSreeni 3 years ago
You're very welcome!
@zilaleizaldin1834 7 months ago
Awesome! Really very useful explanation!
@samarafroz9852 4 years ago
Ohh, finally! I'm so glad you made a tutorial on AAE; please cover all aspects of AAE in image processing. Thanks so much, you're the best YouTuber for image processing and deep learning. I'm your biggest fan. Just a small request: please take a bit more time explaining the code, as I'm a biologist interested in deep learning and image analysis. Thanks once again.
@samarafroz9852 4 years ago
Please make a tutorial on image generation with AAE as well. 😊
@DigitalSreeni 4 years ago
Well, the next video covers image generation, where we generate MNIST images. AAEs are slow, so to generate real images you will need a lot more resources, but the concepts I cover should definitely help.
@yashpisat9267 2 years ago
@DigitalSreeni Can we use a VAE to augment spectrum data?
@RENJIS-on7cp 1 year ago
Very informative video, Sir. Thank you very much.
@sivanschwartz3813 2 years ago
Thank you for the great intuitive explanation! I was looking for a video of this kind!
@michaelmoore7568 5 months ago
How do I learn how many neurons to set in each layer? I'm super confused.
@AmusedAtom-hh4pt 6 months ago
GOLD, nice explanation
@zakariaabderrahmanesadelao3048 2 years ago
Thanks
@DigitalSreeni 2 years ago
Thank you very much, please keep learning.
@gorgolyt 3 years ago
This is all great; I think my one quibble is that you are perhaps using a slightly nonstandard definition of "generative". Usually it means that we are modelling the distribution of the input space, and can therefore sample ("generate") new realistic inputs. For exactly the reasons you state, standard autoencoders don't do this, and therefore by definition are not generative models. Yes, they can "generate" things, but those things don't represent the input space and will probably be a "meaningless" mess. Whereas variational autoencoders do model the input space and can therefore generate "realistic" inputs, so they are generative models.
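To make that distinction concrete, here is a minimal sketch of the generation-time usage pattern; the decoder below is a hypothetical untrained stand-in, just to show the shapes and the sampling step:

import numpy as np

rng = np.random.default_rng(0)
latent_dim = 2

# Hypothetical decoder with random (untrained) weights, standing in for a trained one
W1 = rng.normal(size=(latent_dim, 128))
W2 = rng.normal(size=(128, 784))

def decode(z):
    # Map latent vectors to flattened 28x28 "images" with values in [0, 1]
    h = np.maximum(z @ W1, 0)            # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2)))   # sigmoid output

# VAE: training pushes the latent codes toward N(0, I), so sampling from that
# prior and decoding gives realistic samples once the model is trained.
z = rng.standard_normal((16, latent_dim))
samples = decode(z)
print(samples.shape)  # (16, 784)

# Plain autoencoder: no distribution is enforced on the latent space, so the
# same z values may land in regions the decoder never learned to map, and the
# output is likely a meaningless mess - which is why it is not a generative model.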
@andreabettani1457 2 years ago
Amazing video, thank you
@KUMAR-ne2mb 3 years ago
What are the x and y axes of this cluster plot?
@salahaldeen1751 1 year ago
Great explanation! Thank you so much.
@mrinalde 10 months ago
Thank you. Very nicely explained. Go Packers :)
@gurwindergill6732 2 years ago
How many types of autoencoder are there in total?
@minhakhan3496 7 months ago
Omg so helpful, thank you.
@momfadda 2 years ago
Really great video! Is there code for other datasets like MRI images?
@cptechno 2 years ago
QUESTION CONCERNING VAE! Using a VAE with images, we currently start by compressing an image into the latent space and reconstructing it from the latent space. QUESTION: What if we start with a photo of an adult human, say a man or woman 25 years old (young adult), and we rebuild it into an image of the same person at a younger age, say 14 years old (mid-teen)? Do you see where I'm going with this? Can we create a VAE to make the face younger, from 25 years (young adult) to 14 years (mid-teen)? In more general terms, can a VAE be used with a non-identity function?
@sepeslurdes1918 2 years ago
For what you are proposing you can even use 'standard' autoencoders. Check this video: kzbin.info/www/bejne/b6uupoysn6t5iZo