Variational Autoencoders

528,674 views

Arxiv Insights

1 day ago

Comments: 491
@abaybektursun 7 years ago
Variational Autoencoders starts at 5:40
@pouyan74 4 years ago
You just saved five minutes of my life!
@Moaaz-SWE 4 years ago
@@pouyan74 no the first part was necessary...
@selmanemohamed5146 4 years ago
@@Moaaz-SWE Do you think someone would open a video about Variational Autoencoders if he didn't know what autoencoders are?
@Moaaz-SWE 4 years ago
@@selmanemohamed5146 Yeah, I did... 😂😂😂 And I was lucky he explained both 😎🙌😅 plus the difference between them, and that's the important part.
@Moaaz-SWE 3 years ago
@Otis Rohan Interested
@atticusmreynard 7 years ago
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
@vindieu 1 year ago
Except for "Gaussian", which is weirdly pronounced the Russian way, "khaussian". Wat?
@arkaung 7 years ago
This guy does a real job of explaining things rather than hyping them up like "some other people".
@malharjajoo7393 5 years ago
are you referring to Siraj Raval? lol
@mubangansofu7469 3 years ago
@@malharjajoo7393 lol
@ambujmittal6824 4 years ago
Your way of simplifying things is truly amazing! We really need more people like you!
@jingwangphysics 3 years ago
The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I am glad that you mentioned 'causal', because that's probably how our brain deals with high-dimensional data. When resources are limited (corresponding to a large beta), the best representation turns out to be a causal model. Fascinating! Thanks
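For reference, the beta-VAE objective from the Higgins et al. paper is the standard VAE lower bound with the KL term reweighted by a factor beta; beta = 1 recovers the plain VAE, and beta > 1 is exactly the "limited resources" pressure described above:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta \, D_{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)$$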
@adityakapoor3237 2 years ago
This guy was a VAE for the VAE explanation. We really need more explanations like this with the growing literature! Thanks!
@515nathaniel 5 years ago
"You cannot push gradients through a sampling node" TensorFlow: *HOLD MY BEER!*
@MonaJalal 4 years ago
Hands down, this was the best autoencoder and variational autoencoder tutorial I found on the web.
@robbertr1558 1 month ago
Hey man, years later and this is still relevant for my uni course. Amazing conceptual explanation while remaining exact in terminology. Thanks a lot!
@JakubArnold 6 years ago
Great explanation of why we actually need the reparameterization trick. Everyone just skims over it and explains the part that mu + sigma*N(0,1) = N(mu, sigma^2), but ignores why you need it. Good job!
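A minimal NumPy sketch of that trick (the function and variable names here are illustrative, not from the video): instead of sampling z directly from N(mu, sigma^2), sample eps from N(0, 1) and compute z = mu + sigma * eps, so the stochastic node becomes an input and gradients can flow through mu and log_var:

import numpy as np

def reparameterize(mu, log_var, rng):
    # The encoder predicts a log-variance; recover the standard deviation.
    sigma = np.exp(0.5 * log_var)
    # eps is the only stochastic node; nothing backpropagates through it.
    eps = rng.standard_normal(np.shape(mu))
    # z is distributed as N(mu, sigma^2) yet is differentiable in mu and log_var.
    return mu + sigma * eps

z = reparameterize(np.zeros(2), np.zeros(2), np.random.default_rng(0))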
@debajyotisg 5 years ago
I love your channel. A perfect amount of technicality so as not to scare off beginners, while also keeping the intermediates/experts around. Brilliant.
@akshayshrivastava97 4 years ago
Finally, someone who cares that their viewers actually understand VAEs.
@moozzzmann 1 year ago
Great video!! I just watched 4 hours' worth of lectures in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work.
@ejkitchen 7 years ago
Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.
@ArxivInsights 7 years ago
Thank you very much for supporting me man! New video is in the making, I expect to upload it hopefully somewhere next week :)
@liyiyuan45 4 years ago
This is sooooooo useful at 2 a.m., when you're getting dragged down by all the math in the actual paper. Thanks man for the clear explanation!
@isaiasprestes 6 years ago
Great! No BS, straight and plain English! That's what I want!! :) Congratulations!
@SamWestby 3 years ago
Three years later and this is still the best VAE video I've seen. Thanks Xander!
@Zahlenteufel1 2 years ago
Bro this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!
@agatinogiulianomirabella6590 3 years ago
Best explanation I've found on the internet so far. Congratulations!
@rylaczero3740 6 years ago
Bloody nicely explained, better than the Stanford people. Subscribed to the channel. I remember watching your first video on Alpha but didn't subscribe then. I hope there will be more content on the channel at the same level of quality; otherwise it's hard for people to stick around when the reward is sparse.
@paradoxicallyexcellent5138 5 years ago
I was very interested in this topic, read the paper, watched some videos, and read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)
@fktudiablo9579 4 years ago
Always the best place to get a good overview before diving deeper.
@abhinavshaw9112 6 years ago
Hi, I am a Graduate Student at UMass Amherst. I really liked your video; it gave me a lot of ideas. Watching this before reading the paper really helps. Please keep it coming, I'll be waiting for more.
@life.efficient 7 years ago
This is a LIT channel for watching alongside papers. Thanks
@TheJysN 3 years ago
I had such a hard time understanding the reparameterization trick; now I finally get it. Thanks for the great explanation. Would love to see more videos from you.
@aryanirvaan 3 years ago
Dude, what a next-level genius you are! You made them so easy to understand, and just look at the quality of the content. Damn bro!🎀
@ujjalkrdutta7854 6 years ago
Really liked it. First giving an intuition of the concept and its applications, then moving to the objective function while explaining its individual terms in a way everyone can understand: it was simply professional and elegant. Nice work and thanks!
@antonalexandrov4159 2 years ago
Just found your channel, and I realize how, with some passion and effort, you explain things better than some of my professors. Of course, you don't go into too much detail, but putting together the big picture comprehensively is valuable, and not everyone can do it.
@AhladKumar 6 years ago
In this lecture we will gain insight into the workings of variational autoencoders (VAEs). How they differ from simple autoencoders will also be explained. This video is the third part of a mini lecture series on variational autoencoders, divided into six lectures. kzbin.info/www/bejne/j3nPlYF5ZriNjM0
@giorgiozannini5626 3 years ago
Wait, how did I not know about this channel? Beautiful explanation, perfectly clear. Thanks for the awesome work!
@rileyrfitzpatrick 5 years ago
I'm always intimidated when he says it is going to be technical, but then he explains it so concisely.
@kiwianaDJ 6 years ago
What a gem of a channel I have found here...
@nabeelyoosuf 5 years ago
Your explanations are quite insightful and flawless. You are a gifted explainer! Thanks for sharing them. Please keep sharing more.
@famouspeople3499 3 years ago
Great video, better than many tutored lessons at university; the animations and the simple words really simplify things.
@robinranabhat3125 7 years ago
Don't you ever stop explaining papers like this. Better than Siraj's videos. Just explain the code part a bit longer, and your channel is set.
@pablonapan4698 6 years ago
Exactly. Show some more code, please.
@shrangisoni8758 6 years ago
Yeah, we can't really do much until we code it and see the results ourselves.
@pixel7038 5 years ago
Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)
@gagegolish9306 5 years ago
@@shrangisoni8758 He's explained the fundamental concepts; you can take those concepts and translate them into code. He shouldn't have to do that for you.
@dalchemistt7 5 years ago
@@pixel7038 Please stop spreading his name. He has faked his way more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here: www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/ And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy showed while plagiarizing: "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".
@tamask001 4 years ago
If you want to dive one level deeper and understand the reparameterization trick, check out the NYU course: kzbin.info/www/bejne/bYPFZaZvrLOCo8U
@shivamutreja6427 2 years ago
Your videos are absolutely cracking for a quick revision before an interview!
@lisbeth04 5 years ago
I love you. I spent so long on this and couldn't understand the intuition behind it; with this video I understood immediately. Thanks
@mshonle 3 months ago
I’ve come from the future of 2024 to say this is a great, comprehensive video!
@falsiofalsissimo5313 6 years ago
We needed a serious and technical channel about the latest findings in DL. That Siraj crap is useless. Keep going! Awesome
@abylayamanbayev8403 2 years ago
Finally I understood the intuition behind sampling from mu and sigma and the reparameterization trick. Thanks!
@bradknox 4 years ago
Great video! I have a minor correction: at 6:14, calling the cursive L a "loss" might be a misnomer, since a loss is something we almost always want to minimize, while the formula (reconstruction likelihood minus KL divergence) should be maximized. In fact, the Kingma and Welling paper calls that term the "(variational) lower bound on the marginal likelihood of datapoint i", not a loss.
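For reference, that term as written in the Kingma and Welling paper is the variational lower bound (ELBO), to be maximized; implementations typically minimize its negative as the training loss:

$$\mathcal{L}(\theta, \phi; x^{(i)}) = -D_{KL}\!\left(q_\phi(z \mid x^{(i)}) \,\|\, p_\theta(z)\right) + \mathbb{E}_{q_\phi(z \mid x^{(i)})}\left[\log p_\theta(x^{(i)} \mid z)\right]$$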
@iwanttobreakfree701 11 months ago
6 years on, and I now use this video as guidance for understanding Stable Diffusion
@commenterdek3241 10 months ago
Can you help me out as well? I have so many questions but no one to answer them.
@sunnybeta_ 6 years ago
This video suddenly popped up this morning on my home page. Now I know my Sunday will be great. :D
@kalehermit 5 years ago
Thank you very much; this is the first time I've understood the benefit of the reparameterization trick.
@AjithKumar-gk7bf 5 years ago
Just found this channel... today... one word: Brilliant!!!
@hcgaron 6 years ago
I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work
@achakraborti 6 years ago
First video I see from this channel. Immediately subscribed!
@ashokkannan93 6 years ago
Excellent video!! Probably the best VAE video I saw. Thanks a lot :)
@ativjoshi1049 6 years ago
Your explanation is crisp and to the point. Thanks.
@reinerwilhelms-tricarico344 5 years ago
Great! Crisply clear explanations in such a short time.
@from-chimp-to-champ1 2 years ago
You helped so much with my exams; thanks man, subscribed for more high-quality stuff!
@DistortedV12 5 years ago
This was very lucid. You are gifted at explaining things!
@CodeEmporium 7 years ago
Very well made. You use a GoPro Hero 5 for recording video, correct? What mic do you use for the audio?
@ArxivInsights 7 years ago
CodeEmporium That's correct! :) The mic is a SmartLav+
@rylaczero3740 6 years ago
Arxiv Insights @CodeEmporium (above) makes nice videos as well^.^
@RubenNuredini 5 years ago
@1:21 "...with two principal components: ... Encoder... Decoder..." I know that you did it without bad intentions, but using this terminology may lead to confusion. PCA (Principal Component Analysis) is also used for dimensionality reduction and is often compared to autoencoders. In the PCA world, the term "principal components" has a very specific meaning. By the way, great video, and keep up the outstanding work!!!
@nohandlepleasethanks 7 years ago
Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.
@dimitryversteele2410 7 years ago
Great video! Very clear and understandable explanations of hard-to-understand topics.
@davidm.johnston8994 6 years ago
Great videos man, keep them going, you're gonna find an audience!
@basedgod8097 6 years ago
Dang, I actually understand this stuff lol. I think I'm gonna binge-watch all your videos once my exams finish. Thanks man :)
@ArxivInsights 6 years ago
based god That's the goal, making hardcore ML stuff accessible! You're very welcome :p Good luck with your exams ;)
@sethagastya 5 years ago
This was an amazing video! Thanks man. Will stay tuned for more!
@adityamalte476 6 years ago
Really appreciate your effort in simplifying research papers for viewers. Keep it up; I want more such videos.
@MeauxTarabein 6 years ago
Very helpful, Arxiv! Keep the good-quality videos coming
@joshbrenneman 5 years ago
Wow, love your videos. I have not worked with reinforcement learning, but I’d love to hear your analysis of other generative models.
@double_j3867 6 years ago
Subscribed. Very useful -- I'm an applied ML researcher (applying these techniques to real-world problems), so I need a way to quickly "scan" methods and determine what may be useful before diving in-depth. This style of video is exactly what I need.
@vortexZXR 4 years ago
So many ideas come to mind after watching this video. Well done!
@ashokkannan93 6 years ago
I would like to see more videos from you. Clear explanation of concept and gentle presentation of math. Great job!
@venkatbuvana 6 years ago
Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!
@matthewbascom 4 years ago
I like the subtle distinction you made between the disentangled variational auto-encoder and the normal variational auto-encoder: changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged, but changing the first dimension in the normal version not only rotates the image but changes other features as well. Thank you. Me gleaning that distinction from the Higgins et al. beta-VAE DeepMind paper would have been unlikely...
@DanielWeikert 6 years ago
Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.
@alexandermarshall372 5 years ago
Great video! I am still a bit confused about the advantage of using a VAE over a normal (deterministic) autoencoder. As far as I understand (assuming you have 2 classes/labels for your data), your input data gets mapped to a vector in the latent space. In the deterministic case you have one point in this space for each image; in the VAE case you have an n-dimensional Gaussian distribution (say, an ellipse in a 2D latent space) for each image. However, in the end you want the points (or ellipses) corresponding to different classes to cluster in different regions of your latent space, so ideally you end up with 2 separate clusters. Why is it better to have 2 clusters made of ellipses than 2 clusters made of points? Is it just the area of the latent space that they cover (which is bigger for an ellipse than for a point)? Or is there a deeper meaning? Thank you!
@ansahsiddiqui1384 1 year ago
Hey, it's been 4 years, but as far as I can tell, variational AEs help resolve the discontinuity problem, and, as you mentioned, they cover a greater area, reducing bias and the problem of generating data from "holes" or empty spaces in the latent space. Let me know if what I'm saying makes sense lol
@kristyleist3318 4 years ago
This is great! Keep going, we need you! Don't stop making amazing videos like this
@mdjamaluddin_ntb 5 years ago
Good explanation, with enough relevant math supporting it that we can understand the insight.
@ChocolateMilkCultLeader 2 years ago
Shared your work with my followers. Keep making amazing content
@nildiertjimenez7486 4 years ago
One minute of watching this video is enough to make me a new subscriber! Awesome
@dippatel1739 7 years ago
Your videos are awesome; don't lose track because of subscriber counts.
@wy2528 5 years ago
Mapping the latent vectors is really smart.
@leonliang9185 4 years ago
I rarely like videos on YouTube, but this video is so freaking good for beginners like me!
@mrdbourke 6 years ago
Epic video Xander! I learned a lot from your explanation. Now to try and implement some code!
@snippletrap 6 years ago
Sublime text editor is so aesthetic. Anyway, yes, great point: the input dimensionality needs to be reduced. Even the original DeepMind Atari breakthrough relied on a smaller (handcrafted) representation of the pixel data. With the disentangled variational autoencoder it may be feasible, or even an improvement, to deal with the full input.
@davidenders9107 1 year ago
Thank you! This was comprehensive and comprehensible.
@phattran4858 5 years ago
Thank you very much. I was trying to understand this, and it got much easier when I found this video!
@user-or7ji5hv8y 5 years ago
Your explanation is so clear.
@satishbanka 4 years ago
Very good explanation of Variational Autoencoders! Kudos!
@ck1847 2 years ago
Thanks, this video clarified many things from the original paper.
@hitarthk 6 years ago
Absolutely great stuff Arxiv Insights! Subscribed to your videos for life :)
@maxhorowitz-gelb6092 6 years ago
Wow! Great video. Very concise, and it makes something quite complex easy to understand.
@tamerius1 6 years ago
You're explaining this very well! Finally an explanation on an AI technique that's easy to follow and understand. Thank you.
@emanehab510 6 years ago
Don't stop making amazing videos like this
@hedwinbonnavaud6998 3 years ago
Nice video, I learned a lot, ty. I have some questions about the loss btw. When you show the code, we can see Sum( x * tf.log(y) + (1 - x) * tf.log(1 - y) ):
- Isn't that the equivalent of -CEL(x, y) - CEL(1-x, 1-y) (with CEL = CrossEntropyLoss)?
- Calling log(y) assumes that your reconstructed data y is strictly positive. Do you put a ReLU or a sigmoid at the output of your network? What happens if the input data (x) is sometimes negative?
- Can this part of the loss be replaced by BCELoss or MSELoss?
Sorry for my English, I'm not a native speaker.
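A minimal NumPy sketch of the reconstruction term quoted above, under the usual assumptions (assumptions, not something stated in the video): the decoder ends in a sigmoid so y lies in (0, 1), and the targets x lie in [0, 1], e.g. MNIST pixel intensities, which is what keeps the logs safe. Summed this way it is exactly the negative binary cross-entropy; for real-valued or negative x one would typically swap in a Gaussian likelihood (an MSE-style term) instead:

import numpy as np

def bernoulli_recon_log_lik(x, y, eps=1e-8):
    # x: targets in [0, 1]; y: decoder outputs after a sigmoid, in (0, 1).
    # eps guards against log(0); the sum is the negative binary cross-entropy.
    return np.sum(x * np.log(y + eps) + (1.0 - x) * np.log(1.0 - y + eps))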
@LeNudel 4 years ago
Thanks for your explanation. My brain was broken after reading the paper :)
@TheRohr 6 years ago
Great, thanks for the video and the paper explanation! Really, really helpful; keep the paper-explanation content coming!
@seattle-bfg 6 years ago
Really really awesome channel!!! Look forward to watching more of your videos!
@darkmath100 6 years ago
"Very clever" Yes, in fact pure genius. Who came up with the solution you're talking about at 8:30? Which paper is it in? Was it one person who thought of that?
@elreoldewage6674 7 years ago
This is really great! Thanks for sharing. I think it would be very informative if you linked a few of the papers related to the concepts in the video (for those who want to slog through dry text after being sufficiently intrigued by the video).
@ArxivInsights 7 years ago
Elre Oldewage Really good point, I'll add the links tonight!
@elreoldewage6674 7 years ago
Thanks :)
@bernardfinucane2061 7 years ago
It would be interesting to apply this to word embeddings. There is the well-known example that king - queen = man - woman (so king - man + woman = queen), but the question that immediately comes up is: what are the "real" semantic dimensions? I don't think there is an answer to this in the short term, because of the homonym problem, but it is interesting to think that this kind of network could discover such abstract features.
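A tiny sketch of that arithmetic, with made-up 3-dimensional vectors standing in for real embeddings (the values are hand-picked so the analogy works out; real embeddings would come from a trained model):

import numpy as np

# Hypothetical embeddings; purely illustrative values.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cos(emb[w], target))
print(best)  # -> queen, with these made-up vectors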
@yanfengliu 6 years ago
This is really good. I like the way you explain things. Thank you for sharing!
@DeveloperTharun 6 years ago
Amazing! Thank God! I was researching recurrent variational AEs and badly wanted to understand VAEs. Thank you!! I understood a lot! Please, please, please make more videos!!!!
@dubfather521 1 year ago
Finally someone explains it well without writing in the alien math language that I don't care to learn.
@lingling1411 6 months ago
I will ace this topic in my exam with this one! Thanks, man!
@forgotaboutbre 3 months ago
Thank god I got a Master's in CS. I could be wrong, but I imagine these topics are much harder to follow without decades of technical education.