@@Moaaz-SWE Do you think someone would open a video about Variational Autoencoders without knowing what Autoencoders are?
@Moaaz-SWE 4 years ago
@@selmanemohamed5146 Yeah I did... 😂😂😂 and I was lucky he explained both 😎🙌😅 plus the difference between them, and that's the important part.
@Moaaz-SWE 3 years ago
@Otis Rohan Interested
@atticusmreynard 7 years ago
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
@vindieu 1 year ago
Except for "Gaussian", which is weirdly pronounced the Russian way as "khaussian". Wat?
@arkaung 7 years ago
This guy does a really good job of explaining things rather than hyping them up like "some other people".
@malharjajoo7393 5 years ago
Are you referring to Siraj Raval? lol
@mubangansofu7469 3 years ago
@@malharjajoo7393 lol
@ambujmittal6824 4 years ago
Your way of simplifying things is truly amazing! We really need more people like you!
@jingwangphysics 3 years ago
The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I am glad that you mentioned 'causal', because that's probably how our brain deals with high-dimensional data. When resources are limited (corresponding to using a large beta), the best representation turns out to be a causal model. Fascinating! Thanks.
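For context, the objective the comment above alludes to is just the standard VAE bound with the KL term reweighted by beta (written here from the Higgins et al. beta-VAE formulation, not quoted from the video); setting beta > 1 pressures the model toward a more factorized, sparser latent code at some cost in reconstruction quality:

```latex
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\left[ \log p_\theta(x \mid z) \right]
  - \beta \, D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```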
@adityakapoor3237 2 years ago
This guy was a VAE for the VAE explanation. We really need more such explanations as the literature grows! Thanks!
@515nathaniel 5 years ago
"You cannot push gradients through a sampling node" TensorFlow: *HOLD MY BEER!*
@MonaJalal 4 years ago
Hands down, this was the best autoencoder and variational autoencoder tutorial I found on the Web.
@robbertr1558 1 month ago
Hey man, years later and this is still relevant for my uni course. An amazing conceptual explanation while remaining exact in terminology. Thanks a lot!
@JakubArnold 6 years ago
Great explanation of why we actually need the reparameterization trick. Everyone just skims over it and explains the part that mu + sigma*N(0,1) ~ N(mu, sigma^2), but ignores why you need it. Good job!
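For readers who want the trick in code: a minimal NumPy sketch of what the comment describes (the function name and the log-variance parameterization are illustrative assumptions, not taken from the video). The sampling is moved into a parameter-free noise node, so the path from mu and log_var to z stays differentiable:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, sigma^2) as a deterministic function of (mu, log_var)
    plus external noise, so gradients can flow into mu and log_var."""
    eps = rng.standard_normal(np.shape(mu))  # noise comes from a parameter-free node
    sigma = np.exp(0.5 * log_var)            # assumes the encoder outputs log-variance
    return mu + sigma * eps                  # distributed as N(mu, sigma^2)

# Usage: a batch of 4 two-dimensional latent codes with mu = 0, sigma = 1.
z = reparameterize(np.zeros((4, 2)), np.zeros((4, 2)))
```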
@debajyotisg 5 years ago
I love your channel. A perfect amount of technicality, so as not to scare off beginners while also keeping the intermediates/experts around. Brilliant.
@akshayshrivastava97 4 years ago
Finally, someone who cares that their viewers actually get to understand VAEs.
@moozzzmann 1 year ago
Great video!! I just watched 4 hours' worth of lectures in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work.
@ejkitchen 7 years ago
Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.
@ArxivInsights 7 years ago
Thank you very much for supporting me, man! A new video is in the making; I expect to upload it somewhere next week, hopefully :)
@liyiyuan45 4 years ago
This is sooooooo useful at 2am when you're dragged down by all the math in the actual paper. Thanks man for the clear explanation!
@isaiasprestes 6 years ago
Great! No BS, straight and plain English! That's what I want!! :) Congratulations!
@SamWestby 3 years ago
Three years later and this is still the best VAE video I've seen. Thanks Xander!
@Zahlenteufel1 2 years ago
Bro, this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!
@agatinogiulianomirabella6590 3 years ago
Best explanation found on the internet so far. Congratulations!
@rylaczero3740 6 years ago
Bloody nicely explained, better than the Stanford people did. Subscribed to the channel. I remember watching your first video on Alpha, but didn't subscribe then. I hope there will be more content on the channel with the same level of quality; otherwise it's hard for people to stick around when the reward is sparse.
@paradoxicallyexcellent5138 5 years ago
I was very interested in this topic: read the paper, watched some videos, read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)
@fktudiablo9579 4 years ago
Always the best place to get a good overview before diving deeper.
@abhinavshaw9112 6 years ago
Hi, I am a graduate student at UMass Amherst. I really liked your video; it gave me a lot of ideas. Watching this before reading the paper would really help. Please keep them coming, I'll be waiting for more.
@life.efficient 7 years ago
This is a LIT channel for watching alongside papers. Thanks
@TheJysN 3 years ago
I had such a hard time understanding the reparameterization trick; now I finally get it. Thanks for the great explanation. Would love to see more videos from you.
@aryanirvaan 3 years ago
Dude, what a next-level genius you are! You made it so easy to understand, and just look at the quality of the content. Damn bro!🎀
@ujjalkrdutta7854 6 years ago
Really liked it. First giving an intuition for the concept and its applications, then moving to the objective function and explaining its individual terms, in a way everyone can understand: it was simply professional and elegant. Nice work and thanks!
@antonalexandrov4159 2 years ago
Just found your channel, and I realize how, with some passion and effort, you explain things better than some of my professors. Of course you don't go into too much detail, but putting together the big picture comprehensively is valuable, and not everyone can do it.
@AhladKumar 6 years ago
In this lecture we will gain insight into the workings of Variational Autoencoders (VAEs). The difference from simple autoencoders will also be explained. This video is the third part of a mini lecture series on Variational Auto-Encoders, which is divided into six lectures. kzbin.info/www/bejne/j3nPlYF5ZriNjM0
@giorgiozannini5626 3 years ago
Wait, how did I not know of this channel? Beautiful explanation, perfectly clear. Thanks for the awesome work!
@rileyrfitzpatrick 5 years ago
I'm always intimidated when he says it's going to be technical, but then he explains it so concisely.
@kiwianaDJ 6 years ago
What a gem of a channel I have found here...
@nabeelyoosuf 5 years ago
Your explanations are quite insightful and flawless. You are a gifted explainer! Thanks for sharing them. Please keep sharing more.
@famouspeople3499 3 years ago
Great video, better than many tutor lessons at university; the animation simplified things with simple words.
@robinranabhat3125 7 years ago
Don't you ever stop explaining papers like this. Better than Siraj's videos. Just explain the code part a bit longer, and your channel is set.
@pablonapan4698 6 years ago
Exactly. Show some more code please.
@shrangisoni8758 6 years ago
Yea we can't really do much until we code and see results ourselves.
@pixel7038 5 years ago
Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)
@gagegolish9306 5 years ago
@@shrangisoni8758 He's explained the fundamental concepts; you can take those concepts and translate them into code. He shouldn't have to do that for you.
@dalchemistt7 5 years ago
@@pixel7038 Please stop spreading his name. He has faked his way more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here: www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/ And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy showed while plagiarizing: "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".
@tamask001 4 years ago
If you want to dive one level deeper and understand the reparametrization trick, check out the NYU course: kzbin.info/www/bejne/bYPFZaZvrLOCo8U
@shivamutreja6427 2 years ago
Your videos are absolutely cracking for a quick revision before an interview!
@lisbeth04 5 years ago
I love you. I spent so long on this and couldn't understand the intuition behind it; with this video I understood immediately. Thanks.
@mshonle 3 months ago
I’ve come from the future of 2024 to say this is a great, comprehensive video!
@falsiofalsissimo5313 6 years ago
We needed a serious and technical channel about the latest findings in DL. That Siraj crap is useless. Keep going! Awesome.
@abylayamanbayev8403 2 years ago
Finally I understood the intuition behind sampling from mu and sigma, and the reparameterization trick. Thanks!
@bradknox 4 years ago
Great video! I have a minor correction: at 6:14, calling the cursive L a "loss" might be a misnomer, since a loss is something we almost always want to minimize, while the formula (reconstruction likelihood - KL divergence) should be maximized. In fact, the Kingma and Welling paper calls that term the "(variational) lower bound on the marginal likelihood of datapoint i", not a loss.
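For reference, the quantity this comment is describing is the variational lower bound (ELBO), written here in the standard Kingma and Welling style (reconstructed from the usual formulation, not quoted from the video); training maximizes it, or equivalently minimizes its negation as a loss:

```latex
\mathcal{L}(\theta, \phi; x^{(i)}) =
  \underbrace{\mathbb{E}_{q_\phi(z \mid x^{(i)})}\left[ \log p_\theta(x^{(i)} \mid z) \right]}_{\text{reconstruction likelihood}}
  - \underbrace{D_{\mathrm{KL}}\!\left( q_\phi(z \mid x^{(i)}) \,\|\, p_\theta(z) \right)}_{\text{KL divergence}}
```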
@iwanttobreakfree701 11 months ago
Six years later, and I now use this video as guidance for understanding Stable Diffusion.
@commenterdek3241 10 months ago
Can you help me out as well? I have so many questions but no one to answer them.
@sunnybeta_ 6 years ago
This video suddenly popped up this morning on my home page. Now I know my Sunday will be great. :D
@kalehermit 5 years ago
Thank you very much; this is the first time I've understood the benefit of the reparameterization trick.
@AjithKumar-gk7bf 5 years ago
Just found this channel... today... one word: Brilliant!!!
@hcgaron 6 years ago
I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work.
@achakraborti 6 years ago
First video I've seen from this channel. Immediately subscribed!
@ashokkannan93 6 years ago
Excellent video!! Probably the best VAE video I've seen. Thanks a lot :)
@ativjoshi1049 6 years ago
Your explanation is crisp and to the point. Thanks.
@reinerwilhelms-tricarico344 5 years ago
Great! Crisp, clear explanations in such a short time.
@from-chimp-to-champ1 2 years ago
You helped so much with my exams. Thanks man, subscribed for more high-quality stuff!
@DistortedV12 5 years ago
This was very lucid. You are gifted at explaining things!
@CodeEmporium 7 years ago
Very well made. You use a GoPro Hero 5 for recording video, correct? What mic do you use for the audio?
@ArxivInsights 7 years ago
CodeEmporium That's correct! :) The mic is a SmartLav+
@rylaczero3740 6 years ago
Arxiv Insights @CodeEmporium (above) makes nice videos as well^.^
@RubenNuredini 5 years ago
@1:21 "...with two principal components: ... Encoder... Decoder..." I know that you did it without bad intention, but using this terminology may lead to confusion: PCA (Principal Component Analysis) is also used for dimensionality reduction and is often compared to autoencoders, and in the PCA world the term "principal components" has a very specific meaning. By the way, great video; keep up the outstanding work!!
@nohandlepleasethanks 7 years ago
Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.
@dimitryversteele2410 7 years ago
Great video! Very clear and understandable explanations of hard-to-understand topics.
@davidm.johnston8994 6 years ago
Great videos man, keep them going, you're gonna find an audience!
@basedgod8097 6 years ago
Dang, I actually understand this stuff lol. I think I'm gonna binge-watch all your videos once my exams finish. Thanks man :)
@ArxivInsights 6 years ago
based god That's the goal, making hardcore ML stuff accessible! You're very welcome :p Good luck with your exams ;)
@sethagastya 5 years ago
This was an amazing video! Thanks man. Will stay tuned for more!
@adityamalte476 6 years ago
Really appreciate your effort in simplifying research papers for viewers. Keep it up. I want more such videos.
@MeauxTarabein 6 years ago
Very helpful, Arxiv! Keep the good-quality videos coming.
@joshbrenneman 5 years ago
Wow, love your videos. I have not worked with reinforcement learning, but I’d love to hear your analysis of other generative models.
@double_j3867 6 years ago
Subscribed. Very useful: I'm an applied ML researcher (applying these techniques to real-world problems), so I need a way to quickly "scan" methods and determine what may be useful before diving in depth. These styles of videos are exactly what I need.
@vortexZXR 4 years ago
So many ideas come to mind after watching this video. Well done!
@ashokkannan93 6 years ago
I would like to see more videos from you. Clear explanation of the concept and a gentle presentation of the math. Great job!
@venkatbuvana 6 years ago
Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!
@matthewbascom 4 years ago
I like the subtle distinction you made between the disentangled variational auto-encoder and the normal variational auto-encoder: changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged, but changing the first dimension in the normal version not only rotates the image, it changes other features as well. Thank you. My gleaning that distinction from the Higgins et al. beta-VAE DeepMind paper would have been unlikely...
@DanielWeikert 6 years ago
Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.
@alexandermarshall372 5 years ago
Great video! I am still a bit confused about the advantage of using a VAE over a normal (deterministic) autoencoder. As far as I understand (assuming you have 2 classes/labels for your data), your input data gets mapped to a vector in the latent space. In the deterministic case, you have one point in this space for each image; in the VAE case, you have an n-dimensional Gaussian distribution (say, an ellipse in a 2D latent space) for each image. However, in the end you want the points (or ellipses) corresponding to different classes to cluster in different regions of your latent space, so ideally you end up with 2 separate clusters. Why is it better to have 2 clusters made of ellipses than 2 clusters made of points? Is it just the area of the latent space that they cover (which is bigger for an ellipse than for a point)? Or is there a deeper meaning? Thank you!
@ansahsiddiqui1384 1 year ago
Hey, it's been 4 years, but as far as I can tell, variational AEs help resolve the discontinuity problem, and, as you mentioned, they cover a greater area, reducing bias and the problem of generating data from "holes" or empty spaces. Let me know if what I'm saying makes sense lol.
@kristyleist3318 4 years ago
This is great! Keep going, we need you! Don't stop making amazing videos like this
@mdjamaluddin_ntb 5 years ago
Good explanation. Enough relevant math to support the explanation, so we can understand the insight.
@ChocolateMilkCultLeader 2 years ago
Shared your work with my followers. Keep making amazing content!
@nildiertjimenez7486 4 years ago
One minute of watching this video is enough to become a new subscriber! Awesome.
@dippatel1739 7 years ago
Your videos are awesome; don't lose heart because of subscriber counts.
@wy2528 5 years ago
Mapping the latent vectors is really smart.
@leonliang9185 4 years ago
I rarely like videos on YouTube, but this video is so freaking good for beginners like me!
@mrdbourke 6 years ago
Epic video, Xander! I learned a lot from your explanation. Now to try and implement some code!
@snippletrap 6 years ago
The Sublime text editor is so aesthetic. Anyway, yes, great point, the input dimensionality needs to be reduced. Even the original DeepMind Atari breakthrough relied on a smaller (handcrafted) representation of the pixel data. With the disentangled variational autoencoder it may be feasible, or even an improvement, to deal with the full input.
@davidenders9107 1 year ago
Thank you! This was comprehensive and comprehensible.
@phattran4858 5 years ago
Thank you very much. I was struggling to understand this, but it got much easier when I found this video!
@user-or7ji5hv8y 5 years ago
Your explanation is so clear.
@satishbanka 4 years ago
Very good explanation of Variational Autoencoders! Kudos!
@ck1847 2 years ago
Thanks, this video clarified many things from the original paper.
@hitarthk 6 years ago
Absolutely great stuff Arxiv Insights! Subscribed to your videos for life :)
@maxhorowitz-gelb6092 6 years ago
Wow! Great video. Very concise, and it makes something quite complex easy to understand.
@tamerius1 6 years ago
You're explaining this very well! Finally, an explanation of an AI technique that's easy to follow and understand. Thank you.
@emanehab510 6 years ago
Don't stop making amazing videos like this.
@hedwinbonnavaud6998 3 years ago
Nice video, I learned a lot, ty. I have some questions about the loss, btw. When you show the code, we can see Sum( x * tf.log(y) + (1 - x) * tf.log(1 - y) ):
- Isn't this the equivalent of -CEL(x, y) - CEL(1-x, 1-y) (with CEL = CrossEntropyLoss)?
- Calling log(y) assumes that your reconstructed data y is strictly positive. Do you put a ReLU or a sigmoid function at the output of your network? What happens if the input data (x) is sometimes negative?
- Can this part of the loss be replaced by BCELoss or MSELoss?
Sorry for my English, I'm not a native speaker.
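For what it's worth, the expression the comment quotes is (up to sign) the Bernoulli log-likelihood, i.e. the negative binary cross-entropy, which is only well-defined when x and y both live in [0, 1]; a sigmoid output layer is the usual way to guarantee that for y. A minimal NumPy sketch of that reading (the function and variable names are illustrative assumptions, not from the video's code):

```python
import numpy as np

def bernoulli_log_likelihood(x, logits, eps=1e-7):
    """Reconstruction term of the VAE bound for inputs x scaled to [0, 1].
    Returns sum(x*log(y) + (1-x)*log(1-y)), i.e. the negative binary cross-entropy."""
    y = 1.0 / (1.0 + np.exp(-logits))  # sigmoid keeps y in (0, 1), so the logs are safe
    y = np.clip(y, eps, 1.0 - eps)     # numerical guard against log(0)
    return np.sum(x * np.log(y) + (1.0 - x) * np.log(1.0 - y))
```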
@LeNudel 4 years ago
Thanks for your explanation. My brain was broken after reading the paper :)
@TheRohr 6 years ago
Many thanks for the video and the paper explanation! Really, really helpful; keep the paper-explanation content coming!
@seattle-bfg 6 years ago
Really really awesome channel!!! Look forward to watching more of your videos!
@darkmath100 6 years ago
"Very clever" Yes, in fact pure genius. Who came up with the solution you're talking about at 8:30? Which paper is it in? Was it one person who thought of that?
@elreoldewage6674 7 years ago
This is really great! Thanks for sharing. I think it would be very informative if you linked to a few of the papers related to the concepts in the video (for those who want to slog through the dry text after being sufficiently intrigued by the video).
@ArxivInsights 7 years ago
Elre Oldewage Really good point, I'll add the links tonight!
@elreoldewage6674 7 years ago
Thanks :)
@bernardfinucane2061 7 years ago
It would be interesting to apply this to word embeddings. There is the well-known example that king - queen = man - woman (so king - man + woman = queen), but the question that immediately comes up is: what are the "real" semantic dimensions? I don't think there is an answer to this in the short term, because of the homonym problem, but it is interesting to think that this kind of network could discover such abstract features.
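The analogy arithmetic the comment mentions is easy to demo: a toy NumPy sketch with hand-picked 3-d vectors (real word2vec/GloVe embeddings only satisfy the analogy approximately, and these vectors are purely illustrative):

```python
import numpy as np

# Hand-picked toy "embeddings": dimension 0 ~ royalty, 1 ~ male, 2 ~ female.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = emb["king"] - emb["man"] + emb["woman"]  # "king is to man as ? is to woman"
best = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
print(best)  # -> queen for these hand-picked vectors
```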
@yanfengliu 6 years ago
This is really good. I like the way you explain things. Thank you for sharing!
@DeveloperTharun 6 years ago
Amazing! Thank God! I was researching Recurrent Variational AEs and badly wanted to understand VAEs. Thank you!! I understood a lot! Please, please, please make more videos!!!!
@dubfather521 1 year ago
Finally someone explains it well without writing in the alien math language that I don't care to learn.
@lingling1411 6 months ago
I will ace this topic in my exam with this one! Thanks, man!
@forgotaboutbre 3 months ago
Thank god I got a Master's in CS. I could be wrong, but I do imagine these topics are much harder to follow without decades of technical education.