To me, the most effective way of educating and enlightening is step-by-step reasoning coupled with powerful animations. This video has certainly achieved that. Thanks so much!
@Deepia-ls2fo 5 months ago
Thank you for your comment !
@mevinkoser8446 13 days ago
I have to argue that there is a fundamental difference between educating and enlightening.
@CELLPERSPECTIVE 6 months ago
Legendary algorithm pull. I love educational content like this. Road to 1M!
@Deepia-ls2fo 6 months ago
Thanks :)
@yadavadvait 4 months ago
this channel is a hidden gem!
@Deepia-ls2fo 4 months ago
Thank you !
@adamskrodzki6152 2 months ago
Amazing how high quality your videos are. Hope you will have many more subscribers soon enough. This quality definitely deserves that.
@Marauder13 6 months ago
Awesome video and animations bro. It's so amazing!! Keep doing more videos, I'll stay tuned!
@Deepia-ls2fo 6 months ago
Thank you, I'm not planning on stopping yet :)
@pritamswarnakar6855 1 month ago
I am sharing this and will encourage people in my contacts to subscribe. Explaining these "not so common" topics with this much ease is really an art, and these efforts of yours deserve a great amount of respect and appreciation.
@bromanned7069 6 months ago
This channel is so underrated. Amazing explanation!
@Deepia-ls2fo 6 months ago
Thanks :)
@HenrikVendelbo 3 months ago
Excellent pace and choice of words. A video on UNET would be great
@mostafasayahkarajy508 4 months ago
Thank you very much for your videos. I am waiting for the next one about the VAE.
@Deepia-ls2fo 4 months ago
Thanks, hope I can post it this month :)
@MuhammadIsamil 2 months ago
Perfect tone and pace. Thanks.
@Deepia-ls2fo 2 months ago
Thanks !
@gabberwhacky 4 months ago
Clear and concise explanations, awesome!
@Deepia-ls2fo 4 months ago
@@gabberwhacky Thanks
@alexmattyou 1 month ago
I can visualize autoencoders better now. Keep doing animations. My brain just encodes animation data easily, and I need to decode it in exam papers / seminars.
@thmcass8027 5 months ago
Thanks for making such an intuitive and insightful video! Can't wait for more content from this channel!
@Deepia-ls2fo 5 months ago
@@thmcass8027 Thanks !
@farzinnasiri1084 1 month ago
this explanation was exactly what I needed... I was having a hard time understanding the concept
@Tothefutureand 4 months ago
Very good and easy to understand content, I love it when channels like yours make hard concepts that easy to understand.
@Deepia-ls2fo 4 months ago
Thank you !
@gama3181 4 months ago
i just found your channel and fell in love with it. thank you !
@Deepia-ls2fo 4 months ago
Thanks for the kind words !
@CodeSlate 6 months ago
Great content, hope you can get more exposure!
@Deepia-ls2fo 6 months ago
Thanks :)
@minhazulislam3048 14 days ago
Wishing good luck to this new channel
@SheikhEddy 4 months ago
What would you do if you wanted to find a middle between two points in latent space if simple interpolation produces garbage results?
@Deepia-ls2fo 4 months ago
Thanks for the comment! In fact, a simple interpolation is perfectly fine when your latent space is "in order". It should have some properties like being somewhat continuous, which is not imposed by a plain autoencoder. However, VAEs do have such a latent space.
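A minimal sketch of what "simple interpolation" means here, assuming you already have two latent codes `z_a` and `z_b` from a trained encoder (the 4-dimensional vectors below are made up for illustration; decoding each intermediate code gives the in-between images, smooth for a VAE, possibly garbage for a plain autoencoder):

```python
import numpy as np

def interpolate_latents(z_a, z_b, n_steps=7):
    """Linearly interpolate between two latent vectors.

    Returns an array of shape (n_steps, latent_dim) whose first and
    last rows are exactly z_a and z_b.
    """
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]  # (n_steps, 1)
    return (1.0 - alphas) * z_a + alphas * z_b        # broadcasts to (n_steps, latent_dim)

# Hypothetical 4-dimensional latent codes for two inputs
z_a = np.array([0.0, 1.0, -2.0, 0.5])
z_b = np.array([1.0, -1.0, 2.0, 0.5])

path = interpolate_latents(z_a, z_b, n_steps=5)
print(path.shape)  # (5, 4)
```

Each row of `path` would then be passed through the decoder to see the transition.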
@CuriousK7 4 months ago
Awesome content.❤ The reasoning and intricate animation are mindblowing. Eagerly waiting for the VAE video 😊
@Deepia-ls2fo 4 months ago
Thanks !
@MutexLock 21 days ago
Awesome video! Thank you for your perfect explanation!!
@dontdiediy7630 6 months ago
Good job man! Nice graphical representations. Easy to follow.
@Deepia-ls2fo 6 months ago
Thank you so much !
@ArashNasr 6 months ago
This video is both informative and visually appealing. Thanks!
@Deepia-ls2fo 6 months ago
Many thanks :)
@YarinM 6 months ago
Nice explanation, but I think two key aspects are missing (maybe planned to show up in later videos): 1. the connection to transformers. 2. the fact that latent space allows you to make two models speak the same language (like the idea of CLIP and how it's used in DALL-E)
@Deepia-ls2fo 6 months ago
Hi, thank you for the feedback ! Indeed these aspects are very important in modern architectures, but I feel like I would need to introduce a lot of other concepts to get there. It's definitely something I'll cover in future videos.
@beowolx 1 month ago
wow, that was such a good video! Thanks for that
@ananthakrishnank3208 6 months ago
Thank you! You made it lucid.
@Deepia-ls2fo 6 months ago
Thank you for your comment !
@Canbay12 4 months ago
incredibly good content. Keep up the good work!
@Deepia-ls2fo 4 months ago
Thank you !
@aryankashyap7194 5 months ago
Great video! Waiting for the one on VAEs and other topics
@Deepia-ls2fo 5 months ago
@@aryankashyap7194 Thanks, it will probably be up before the end of the summer :)
@suryavaraprasadalla8511 17 days ago
The audio is out of sync with the video around 4:09!
@MutigerBriefkasten 6 months ago
Perfectly animated and well explained. Thank you 👍 subscribed 😊
@Deepia-ls2fo 6 months ago
Thank you !
@rishidixit7939 1 month ago
For applications like Domain Adaptation and Image Colorization, what does the loss function look like for an AutoEncoder? Also, you said that the MSE Loss is used, but in that case a trivial solution exists where the image is copied pixel by pixel and the network learns nothing. How is that problem taken care of?
@Deepia-ls2fo 1 month ago
@@rishidixit7939 Hi, I'm not familiar with those two tasks, but for Image Colorization an MSE would probably do just fine? To prevent the network from simply copying the image pixel by pixel, we have the bottleneck layer! Remember that this layer has a lot fewer neurons than there are pixels, so you can't just "copy" the values :)
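To make the bottleneck argument concrete, here is a toy numpy sketch. The sizes (784 pixels, 32 latent neurons) and the random weights are hypothetical, not from the video; the point is only that a 32-number code cannot store all 784 pixel values, so the identity map is impossible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 28x28 images flattened to 784 pixels,
# squeezed through a 32-neuron bottleneck.
n_pixels, n_latent = 784, 32

W_enc = rng.normal(0, 0.01, size=(n_latent, n_pixels))
W_dec = rng.normal(0, 0.01, size=(n_pixels, n_latent))

def encode(x):
    return np.tanh(W_enc @ x)   # latent code, shape (32,)

def decode(z):
    return W_dec @ z            # reconstruction, shape (784,)

x = rng.random(n_pixels)        # a fake "image"
z = encode(x)
x_hat = decode(z)
mse = np.mean((x - x_hat) ** 2) # the reconstruction loss being minimized

print(z.shape, x_hat.shape)     # (32,) (784,)
```

Whatever the weights are, `decode(encode(x))` factors through a 32-dimensional space, so pixel-by-pixel copying is ruled out by the architecture itself.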
@DavidW.-is3wb 6 months ago
Could you make a video on common dimensionality reduction methods like PCA and projections (linear discriminants), etc.? I've always been interested in when one should be applied but not another. Anyway, nice video, very underrated! Deserves more exposure! T^T
@Deepia-ls2fo 6 months ago
Thank you ! Yep, that's the plan for the very next video: it will be an explanation of how several visualization methods work; there will probably be PCA, t-SNE and UMAP
@ShadArfMohammed 5 months ago
Thanks for this wonderful content.
@Deepia-ls2fo 5 months ago
Thank you !
@samson6707 6 months ago
great video. I knew about encoders from the transformer model, where the optimization criterion for the embedding is the output of the decoder on the classification/generation task, measured by e.g. cross-entropy loss, and I know about word2vec, where the optimization criterion is the dot-product similarity of co-occurring words. I did not know that in autoencoders the optimization criterion is minimizing the loss over reconstructing the original input. Nice.
@Deepia-ls2fo 6 months ago
Thanks a lot !
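The reconstruction objective described above (minimize the error between the input and its reconstruction, rather than a classification loss or a co-occurrence similarity) can be sketched as gradient descent on a tiny linear autoencoder. All sizes, the learning rate, and the random data here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 samples with 8 features, bottleneck of size 2.
X = rng.random((100, 8))
W_enc = rng.normal(0, 0.1, (2, 8))
W_dec = rng.normal(0, 0.1, (8, 2))

lr = 0.1
for _ in range(500):
    Z = X @ W_enc.T         # encode: (100, 2)
    X_hat = Z @ W_dec.T     # decode: (100, 8)
    err = X_hat - X         # reconstruction error
    # Gradients of the squared reconstruction loss (up to a constant factor)
    grad_dec = err.T @ Z / len(X)
    grad_enc = (err @ W_dec).T @ X / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X - (X @ W_enc.T) @ W_dec.T) ** 2)
print(round(loss, 4))  # well below the ~0.33 loss of reconstructing all zeros
```

Nothing in the loop ever looks at a label: the input itself is the training target, which is exactly what distinguishes the autoencoder criterion from the other two.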
@GabrieleBandino 5 months ago
Great video! Are you planning on releasing the code used for it?
@Deepia-ls2fo 5 months ago
Thank you ! Yes, I'll make a GitHub page for the channel, I'll put the link in the description when it's done.
@sharjeel_mazhar 4 months ago
Can you make a video on RNNs and their variants?
@Deepia-ls2fo 4 months ago
Hi Sharjeel, thanks for your comment ! RNNs and other auto-regressive models are definitely on my to-do list. :)
@samson6707 6 months ago
8:02 Principal Component Analysis? 😉
@andywub 6 months ago
or t-SNE/UMAP
@EigenA 1 month ago
Great video
@stormaref 4 months ago
Nice video, keep it up
@Deepia-ls2fo 4 months ago
@@stormaref Thanks !
@Aften_ved 2 months ago
4:10 Latent Space.
@StreetPhilosophyTV 6 months ago
Great work!
@Deepia-ls2fo 6 months ago
Thank you !
@ashkankarimi4146 6 months ago
Please create more videos!
@Deepia-ls2fo 6 months ago
Sure will do aha
@carsx7824 6 months ago
Do you use an AI voiceover? Great video btw
@Deepia-ls2fo 6 months ago
Thank you ! Indeed the voiceover is generated by an AI, but it is my own voice that I cloned. I'm using ElevenLabs. Did that annoy you or take you out of the video ? :(
@chainetravail2439 1 day ago
DeepAI was already taken as a channel name, so you went for a "franglais" solution instead?
@Deepia-ls2fo 1 day ago
No, "Deepia" was actually already taken too, I just found it sounded better than "DeepAI" ! You can kinda pronounce it as a single word, which is easier imo