@@Moaaz-SWE Do you think someone would open a video about Variational Autoencoders if they didn't know what Autoencoders are?
@Moaaz-SWE · 3 years ago
@@selmanemohamed5146 Yeah, I did... 😂😂😂 and I was lucky he explained both 😎🙌😅 plus the difference between them, and that's the important part
@Moaaz-SWE · 3 years ago
@Otis Rohan Interested
@atticusmreynard · 6 years ago
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
@vindieu · a year ago
Except for "Gaussian", which is weirdly pronounced the Russian way, "khaussian". What?
@arkaung · 6 years ago
This guy does a genuinely good job of explaining things rather than hyping them up like "some other people".
@malharjajoo7393 · 5 years ago
are you referring to Siraj Raval? lol
@mubangansofu7469 · 2 years ago
@@malharjajoo7393 lol
@adityakapoor3237 · 2 years ago
This guy was a VAE for the VAE explanation. We really need more explanations like this as the literature grows! Thanks!
@ambujmittal6824 · 4 years ago
Your way of simplifying things is truly amazing! We really need more people like you!
@MonaJalal · 4 years ago
Hands down, this was the best autoencoder and variational autoencoder tutorial I found on the web.
@jingwangphysics · 2 years ago
The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I am glad that you mentioned 'causal', because that's probably how our brain deals with high-dimensional data. When resources are limited (corresponding to using a large beta), the best representation turns out to be a causal model. Fascinating! Thanks
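The β-weighting this comment alludes to can be sketched numerically. Below is a minimal NumPy illustration with hypothetical names (not code from the video): the KL term uses the closed form for a diagonal Gaussian posterior against a standard-normal prior, and β > 1 taxes latent capacity more heavily, which is the pressure toward sparse, disentangled codes.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # summed over the latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def beta_vae_loss(recon_error, mu, log_var, beta=4.0):
    # beta > 1 penalizes latent-code capacity more heavily than a plain VAE.
    return recon_error + beta * gaussian_kl(mu, log_var)

mu = np.array([0.0, 0.5])
log_var = np.array([0.0, -1.0])
plain = beta_vae_loss(10.0, mu, log_var, beta=1.0)        # standard VAE objective
disentangled = beta_vae_loss(10.0, mu, log_var, beta=4.0) # beta-VAE objective
print(plain, disentangled)
```

A latent dimension that matches the prior exactly (mu = 0, log_var = 0) contributes zero KL, so under a large β the model prefers to "switch off" dimensions it does not need.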
@JakubArnold · 6 years ago
Great explanation of why we actually need the reparameterization trick. Everyone just skims over it and explains that mu + sigma·N(0,1) ~ N(mu, sigma²), but ignores why you need it. Good job!
@SamWestby · 3 years ago
Three years later and this is still the best VAE video I've seen. Thanks Xander!
@akshayshrivastava97 · 4 years ago
Finally, someone who cares that their viewers actually understand VAEs.
@liyiyuan45 · 4 years ago
This is sooooooo useful at 2am when you're bogged down by all the math in the actual paper. Thanks man for the clear explanation!
@abhinavshaw9112 · 6 years ago
Hi, I am a graduate student at UMass Amherst. I really liked your video; it gave me a lot of ideas. Watching this before reading the paper really helps. Please keep them coming, I'll be waiting for more.
@moozzzmann · 11 months ago
Great video!! I just watched 4 hours' worth of lectures in which nothing really became clear to me, and while watching this video everything clicked! I'll definitely be checking out your other work
@debajyotisg · 5 years ago
I love your channel. The perfect amount of technicality so as not to scare off beginners, while keeping the intermediates/experts around. Brilliant.
@fktudiablo9579 · 4 years ago
always the best place to have a good overview before diving deeper
@isaiasprestes · 6 years ago
Great! No BS, straight and plain English! That's what I want!! :) Congratulations!
@agatinogiulianomirabella6590 · 3 years ago
Best explanation found on the internet so far. Congratulations!
@Zahlenteufel1 · 2 years ago
Bro this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!
@paradoxicallyexcellent5138 · 5 years ago
I was very interested in this topic: read the paper, watched some videos, read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)
@Maximfromparapet · 6 years ago
This is a LIT channel for watching alongside papers. Thanks
@rylaczero3740 · 6 years ago
Explained more clearly than the Stanford people. Subscribed to the channel. I remember watching your first video on Alpha but didn't subscribe then. I hope there will be more content on the channel at the same level of quality; otherwise it's hard for people to stick around when the reward is sparse.
@kiwianaDJ · 6 years ago
What a gem of a channel I have found here...
@obadajabassini3552 · 6 years ago
A really great talk! I have been reading about VAEs a lot and this video helped me understand them even better. Thanks!
@rileyrfitzpatrick · 5 years ago
I'm always intimidated when he says it is going to be technical, but then he explains it so concisely.
@ejkitchen · 6 years ago
Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.
@ArxivInsights · 6 years ago
Thank you very much for supporting me, man! A new video is in the making; I expect to upload it hopefully sometime next week :)
@TheJysN · 3 years ago
I had such a hard time understanding the reparameterization trick; now I finally get it. Thanks for the great explanation. Would love to see more videos from you.
@aryanirvaan · 3 years ago
Dude, what a next-level genius you are! You made it so easy to understand, and just look at the quality of the content. Damn bro!🎀
@515nathaniel · 4 years ago
"You cannot push gradients through a sampling node" TensorFlow: *HOLD MY BEER!*
@falsiofalsissimo5313 · 6 years ago
We needed a serious and technical channel about the latest findings in DL. That Siraj crap is useless. Keep going! Awesome
@sunnybeta_ · 6 years ago
This video suddenly popped up this morning on my home page. Now I know my Sunday will be great. :D
@antonalexandrov4159 · 2 years ago
Just found your channel and I realize how with some passion and effort you explain things better than some of my professors. Of course, you don't go into too much detail but putting together the big picture comprehensively is valuable and not everyone can do it.
@shivamutreja6427 · 2 years ago
Your videos are absolutely cracking for a quick revision before an interview!
@famouspeople3499 · 3 years ago
Great video, better than many lessons from university tutors; the animations simplify things, explained in simple words.
@abylayamanbayev8403 · 2 years ago
Finally I understood the intuition behind sampling from mu and sigma and the reparameterization trick. Thanks!
@bradknox · 4 years ago
Great video! I have a minor correction: At 6:14, calling the cursive L a "loss" might be a misnomer, since loss is something we almost always want to minimize, and the formula of (reconstruction likelihood - KL divergence) should be maximized. In fact, the Kingma and Welling paper call that term the "(variational) lower bound on the marginal likelihood of datapoint i", not a loss.
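For readers who want the quantity this comment discusses spelled out: the cursive L at 6:14 is the variational lower bound (ELBO) for datapoint i, which is maximized; frameworks typically minimize its negative as the "loss". Reconstructed here from the standard formulation rather than transcribed from the video:

```latex
\mathcal{L}\left(\theta, \phi; x^{(i)}\right)
  = \mathbb{E}_{q_\phi\left(z \mid x^{(i)}\right)}\!\left[\log p_\theta\!\left(x^{(i)} \mid z\right)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi\!\left(z \mid x^{(i)}\right) \,\middle\|\, p_\theta(z)\right)
```

The first term is the reconstruction likelihood and the second is the KL divergence the comment refers to, matching the "(reconstruction likelihood − KL divergence)" form that should be maximized.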
@robinranabhat3125 · 6 years ago
Don't you ever stop explaining papers like this. Better than Siraj's videos. Just explain the code part a bit longer, and your channel is set.
@pablonapan4698 · 6 years ago
Exactly. Show some more code please.
@shrangisoni8758 · 5 years ago
Yeah, we can't really do much until we code it up and see the results ourselves.
@pixel7038 · 5 years ago
Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)
@gagegolish9306 · 5 years ago
@@shrangisoni8758 He's explained the fundamental concepts; you can take those concepts and translate them into code. He shouldn't have to do that for you.
@dalchemistt7 · 5 years ago
@@pixel7038 Please stop spreading his name. He has faked his way through more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here: www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/ And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy showed while plagiarizing: "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".
@AjithKumar-gk7bf · 5 years ago
Just found this channel... today... one word: Brilliant...!!!
@mshonle · a month ago
I’ve come from the future of 2024 to say this is a great, comprehensive video!
@ujjalkrdutta7854 · 6 years ago
Really liked it. First giving an intuition for the concept and its applications, then moving to the objective function while explaining its individual terms, all in a way everyone can understand; it was simply professional and elegant. Nice work and thanks!
@giorgiozannini5626 · 3 years ago
Wait, how did I not know of this channel? Beautiful explanation, perfectly clear. Thanks for the awesome work!
@nabeelyoosuf · 5 years ago
Your explanations are quite insightful and flawless. You are a gifted explainer! Thanks for sharing them. Please keep sharing more.
@basedgod8097 · 6 years ago
Dang, I actually understand this stuff lol. I think I'm gonna binge watch all your videos once my exams finish. Thanks man :)
@ArxivInsights · 6 years ago
@basedgod That's the goal, making hardcore ML stuff accessible! You're very welcome :p Good luck with your exams ;)
@lisbeth04 · 5 years ago
I love you. I spent so long on this and couldn't understand the intuition behind it; with this video I understood immediately. Thanks
@DistortedV12 · 5 years ago
This was very lucid. You are gifted at explaining things!
@from-chimp-to-champ1 · 2 years ago
You helped so much with my exams. Thanks man, subscribed for more high-quality stuff!
@kalehermit · 5 years ago
Thank you very much; this is the first time I've understood the benefit of the reparameterization trick.
@nohandlepleasethanks · 6 years ago
Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.
@dimitryversteele2410 · 6 years ago
Great video! Very clear and understandable explanations of hard-to-understand topics.
@RubenNuredini · 5 years ago
@1:21 "...with two principal components: ... Encoder... Decoder..." I know that you did it without bad intention, but using this terminology may lead to confusion. PCA (Principal Component Analysis) is also used for dimensionality reduction and is often compared to autoencoders, and in the PCA world the term "principal components" has a really specific meaning. By the way, great video; keep up the outstanding work!!!
@dippatel1739 · 6 years ago
Your videos are awesome; don't lose momentum because of subscriber counts.
@double_j3867 · 6 years ago
Subscribed. Very useful. I'm an applied ML researcher (applying these techniques to real-world problems), so I need a way to quickly "scan" methods and determine what may be useful before diving in-depth. This style of video is exactly what I need.
@reinerwilhelms-tricarico344 · 5 years ago
Great! Crisply clear explanations in such a short time.
@iwanttobreakfree701 · 8 months ago
This was six years ago, and I now use this video as a guide to understanding StableDiffusion
@commenterdek3241 · 8 months ago
Can you help me out as well? I have so many questions but no one to answer them.
@ashokkannan93 · 5 years ago
Excellent video!! Probably the best VAE video I've seen. Thanks a lot :)
@ativjoshi1049 · 6 years ago
Your explanation is crisp and to the point. Thanks.
@achakraborti · 6 years ago
The first video I've seen from this channel. Immediately subscribed!
@satishbanka · 3 years ago
Very good explanation of Variational Autoencoders! Kudos!
@RANJEETTHAKUR1983 · 6 years ago
Impressive description. Great content...... Hey, Siraj, someone is here :P
@vortexZXR · 4 years ago
So many ideas come to mind after watching this video. Well done!
@MeauxTarabein · 6 years ago
Very helpful, Arxiv! Keep the good-quality videos coming
@hcgaron · 6 years ago
I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work
@leonliang9185 · 3 years ago
I rarely like videos on YouTube, but this video is so freaking good for beginners like me!
@adityamalte476 · 6 years ago
Really appreciate your effort in simplifying research papers for viewers. Keep it up. I want more such videos
@ChocolateMilkCultLeader · 2 years ago
Shared your work with my followers. Keep making amazing content
@ashokkannan93 · 6 years ago
I would like to see more videos from you. Clear explanation of the concepts and a gentle presentation of the math. Great job!
@davidm.johnston8994 · 6 years ago
Great videos man, keep them going, you're gonna find an audience!
@LeNudel · 4 years ago
Thanks for your explanation. My brain was broken after reading the paper :)
@sethagastya · 5 years ago
This was an amazing video! Thanks man. Will stay tuned for more!
@elreoldewage6674 · 6 years ago
This is really great! Thanks for sharing. I think it would be very informative if you linked a few of the papers related to the concepts in the video (for those who want to slog through dry text after being sufficiently intrigued by the video).
@ArxivInsights · 6 years ago
Elre Oldewage Really good point, I'll add the links tonight!
@elreoldewage6674 · 6 years ago
Thanks :)
@antoinesueur9289 · 6 years ago
Great content! The format and delivery are perfect; hope to see more of these videos :) Are you planning on doing a video on Capsule Networks in the future?
@ArxivInsights · 6 years ago
More videos are definitely coming, the next one will be on novel state-of-the-art methods in Reinforcement Learning! I don't plan on making a video on Capsule Nets since there is an amazingly good video by Aurélien Géron on that topic and there's no way I can explain it any better than he did, no need to reinvent the wheel :p Here is his video: kzbin.info/www/bejne/poGxaZdmephsZpI
@nildiertjimenez7486 · 4 years ago
One minute of watching this video is enough to make me a new subscriber! Awesome
@snippletrap · 6 years ago
Sublime Text is such an aesthetic editor. Anyway, yes, great point: the input dimensionality needs to be reduced. Even the original DeepMind Atari breakthrough relied on a smaller (handcrafted) representation of the pixel data. With the disentangled variational autoencoder it may be feasible, or even an improvement, to deal with the full input.
@samyogdhital · 2 months ago
Bro, please continue making these videos. We love you and we want you to return and make the same kind of videos on various research papers in AI, robotics, and more. Please reply if you see this comment. :D
@DanielWeikert · 6 years ago
Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.
@tamerius1 · 6 years ago
You're explaining this very well! Finally, an explanation of an AI technique that's easy to follow and understand. Thank you.
@SeanLGoldberg · 6 years ago
Great episode. Came here for a good explanation of VAEs, but was blown away when you dug into Beta-VAEs and the Deepmind RL paper. Have you read the group's newest paper "SCAN" on combining Beta-VAEs with symbol representation and manipulation?
@ArxivInsights · 6 years ago
Sean Goldberg Haven't had the time yet, it's somewhere in my 658 open chrome tabs though :p
@1apiano · 6 years ago
I liked arxiv.org/pdf/1709.05047.pdf more, but the SCAN paper is also cool. BTW, compliments on your channel; it's the only deep learning channel worth following.
@matthewbascom · 4 years ago
I like the subtle distinction you made between the disentangled variational auto-encoder and the normal variational auto-encoder: changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged, but changing the first dimension in the normal version not only rotates the image but changes other features as well. Thank you. I would have been unlikely to glean that distinction from the Higgins et al. Beta-VAE DeepMind paper on my own...
@jeremyuzan1169 · 3 years ago
Excellent content. I'm using VAEs for music generation. Your explanations are very interesting. Thanks again. Jeremy Uzan, IRCAM Paris
@wy2528 · 5 years ago
mapping the latent vectors is really smart
@mdjamaluddin_ntb · 5 years ago
Good explanation. Just enough relevant math to support the explanation, so we can understand the insight.
@venkatbuvana · 5 years ago
Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!
@Minitomate · 5 years ago
This technology could be really useful for the police to identify suspects. I really liked your video!
@seattle-bfg · 6 years ago
Really really awesome channel!!! Look forward to watching more of your videos!
@bernardfinucane2061 · 6 years ago
It would be interesting to apply this to word embeddings. There is the well-known example that king - queen = man - woman (so king - man + woman = queen), but the question that immediately comes up is: what are the "real" semantic dimensions? I don't think there is an answer to this in the short term, because of the homonym problem, but it is interesting to think that this kind of network could discover such abstract features.
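The analogy arithmetic in this comment can be reproduced with a toy sketch. The 3-dimensional vectors below are hand-made purely for illustration (real embeddings such as word2vec or GloVe live in hundreds of dimensions, and the analogy is recovered by nearest-neighbor search over the full vocabulary):

```python
import numpy as np

# Hypothetical toy "embeddings", constructed so that the gender and
# royalty directions are separable; not real learned vectors.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.3]),
    "woman": np.array([0.1, 0.2, 0.3]),
}

def nearest(query, vecs, exclude):
    # Return the stored word with highest cosine similarity to the query,
    # skipping the words used to build the analogy.
    return max(
        (w for w in vecs if w not in exclude),
        key=lambda w: np.dot(query, vecs[w])
        / (np.linalg.norm(query) * np.linalg.norm(vecs[w])),
    )

query = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(query, vecs, exclude={"king", "man", "woman"}))  # queen
```

With learned embeddings, whether the individual axes of such a space correspond to interpretable "semantic dimensions" is exactly the disentanglement question the video raises for images.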
@xandermay7705 · 6 years ago
Holy crap, another Xander interested in machine learning :D
@kristyleist3318 · 4 years ago
This is great! Keep going, we need you! Don't stop making amazing videos like this
@mrdbourke · 6 years ago
Epic video, Xander! I learned a lot from your explanation. Now to try and implement some code!
@davidenders9107 · a year ago
Thank you! This was comprehensive and comprehensible.
@maxhorowitz-gelb6092 · 6 years ago
Wow! Great video. Very concise, and it made something quite complex easy to understand.
@ck1847 · 2 years ago
Thanks, this video clarified many things from the original paper.
@TheRohr · 6 years ago
Many thanks for the video and the paper explanation! Really, really helpful; keep the paper-explanation content coming!
@user-or7ji5hv8y · 5 years ago
Your explanation is so clear.
@niladrishekhardutt · 6 years ago
Great tutorial. Please keep adding more content. Good to see that you are focusing on the technicalities, unlike Siraj, but I still think a little more math would be good. Keep it up!
@lingling1411 · 4 months ago
I will ace this topic in my exam thanks to this one! Thanks, man!
@HeduAI · 6 years ago
Amazing explanation of a complicated topic! Thank you so much!!!!
@emanehab510 · 6 years ago
Don't stop making amazing videos like this
@forgotaboutbre · a month ago
Thank god I got a Masters in CS. I could be wrong, but I imagine these topics are much harder to follow without decades of technical education.
@oguretsagressive · 5 years ago
6:14 - these equations are tough! It looks like mathematical jargon, as if the authors were saying "OK, we don't have time to explain, but you'll figure it out".
@p4k7 · 10 months ago
Great video, and the algorithm is finally recognizing it! Come back and produce more videos?
@phattran4858 · 5 years ago
Thank you very much. I was trying to understand this, and it became much easier when I found this video!