Variational Autoencoders

515,046 views

Arxiv Insights

1 day ago

Comments: 489
@abaybektursun 6 years ago
Variational Autoencoders starts at 5:40
@pouyan74 4 years ago
You just saved five minutes of my life!
@Moaaz-SWE 3 years ago
@@pouyan74 no the first part was necessary...
@selmanemohamed5146 3 years ago
@@Moaaz-SWE you think someone would enter a video about Variational Autoencoders if he doesn't know what Autoencoders are
@Moaaz-SWE 3 years ago
@@selmanemohamed5146 yeah i did... 😂😂😂 and i was lucky he explained both 😎🙌😅 + the difference between them and that's the important part
@Moaaz-SWE 3 years ago
@Otis Rohan Interested
@atticusmreynard 6 years ago
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
@vindieu 1 year ago
Except for "Gaussian", which is weirdly pronounced the Russian way as "khaussian". What?
@arkaung 6 years ago
This guy does a real job of explaining things rather than hyping up things like "some other people".
@malharjajoo7393 5 years ago
are you referring to Siraj Raval? lol
@mubangansofu7469 2 years ago
@@malharjajoo7393 lol
@adityakapoor3237 2 years ago
This guy was a VAE to the VAE explanation. Really need more of such explanations with the growing literature! Thanks!
@ambujmittal6824 4 years ago
Your way of simplifying things is truly amazing! We really need more people like you!
@MonaJalal 4 years ago
hands down this was the best autoencoder and variational autoencoder tutorial I found on Web.
@jingwangphysics 2 years ago
The beta-VAE seems to enforce a sparse representation. It magically picks the most relevant latent variables. I am glad that you mentioned 'causal', because that's probably how our brain deals with high-dimensional data. When resources are limited (corresponding to using a large beta), the best representation turns out to be a causal model. Fascinating! Thanks
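The beta weighting discussed above is just one extra multiplier in the VAE objective. A minimal sketch (my own toy code, not from the video; the numeric values are made up for illustration):

```python
# Toy sketch of the beta-VAE objective (Higgins et al.): the usual VAE loss,
# but with the KL term scaled by a constant beta.
def beta_vae_loss(recon_error: float, kl: float, beta: float = 4.0) -> float:
    """Return reconstruction error + beta * KL divergence.

    beta > 1 puts extra pressure on the approximate posterior to match the
    factorized N(0, 1) prior, which empirically encourages disentangled
    (and effectively sparser) latent variables.
    """
    return recon_error + beta * kl

# Same (made-up) reconstruction/KL values under increasingly strict beta:
for beta in (1.0, 4.0, 10.0):
    print(beta, beta_vae_loss(12.5, 3.0, beta))
```

With beta = 1 this reduces to the ordinary VAE objective; larger beta trades reconstruction quality for a more factorized latent code.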
@JakubArnold 6 years ago
Great explanation of why we actually need the reparameterization trick. Everyone just skims over that and explains the part that mu + sigma*N(0,1) = N(mu, sigma^2), but ignores why you need it. Good job!
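The trick this comment praises can be sketched in a few lines of NumPy (a toy illustration of z = mu + sigma * eps, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, eps=None):
    """Sample z ~ N(mu, sigma^2) as a deterministic function of (mu, log_var).

    All randomness lives in eps ~ N(0, 1), so during training gradients can
    flow through mu and log_var even though z is a random sample.
    """
    sigma = np.exp(0.5 * log_var)
    if eps is None:
        eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps

mu = np.array([0.0, 2.0])
log_var = np.array([0.0, 0.0])  # i.e. sigma = 1 in both dimensions
samples = np.stack([reparameterize(mu, log_var) for _ in range(100_000)])
print(samples.mean(axis=0))  # close to [0, 2]
print(samples.std(axis=0))   # close to [1, 1]
```

The empirical mean and standard deviation of the samples match (mu, sigma), confirming that the deterministic rewrite produces the same distribution as sampling from N(mu, sigma^2) directly.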
@SamWestby 3 years ago
Three years later and this is still the best VAE video I've seen. Thanks Xander!
@akshayshrivastava97 4 years ago
Finally, someone who cares that their viewers actually get to understand VAEs.
@liyiyuan45 4 years ago
This is sooooooo useful at 2am when you're dragged down by all the math in the actual paper. Thanks man for the clear explanation!
@abhinavshaw9112 6 years ago
Hi, I am a graduate student at UMass Amherst. I really liked your video; it gave me a lot of ideas. Watching this before reading the paper would really help. Please keep it coming, I'll be waiting for more.
@moozzzmann 11 months ago
Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work
@debajyotisg 5 years ago
I love your channel. A perfect amount of technicality so as to not scare off beginners, and also keep the intermediates/ experts around. Brilliant.
@fktudiablo9579 4 years ago
always the best place to have a good overview before diving deeper
@isaiasprestes 6 years ago
Great! No BS, straight and plain English! That's what I want!! :) Congratulations!
@agatinogiulianomirabella6590 3 years ago
Best explanation found on the internet so far. Congratulations!
@Zahlenteufel1 2 years ago
Bro this was insanely helpful! I'm writing my thesis and am missing a lot of the basics in a lot of relevant areas. Great summary!
@paradoxicallyexcellent5138 5 years ago
I was very interested in this topic, read the paper, watched some videos, read some blogs. This is by far the best explanation I've come across. You add a lot of value here to the original paper's contribution. It could even be said you auto-encoded it for my consumption ;)
@Maximfromparapet 6 years ago
This is a LIT channel for watching alongside papers. Thanks
@rylaczero3740 6 years ago
Explained more nicely than the Stanford people did. Subscribed to the channel; I remember watching your first video on Alpha, but didn't subscribe then. I hope there will be more content on the channel with the same level of quality, otherwise it's hard for people to stick around when the reward is sparse.
@kiwianaDJ 6 years ago
what a gem of a channel I have found here...
@obadajabassini3552 6 years ago
A really great talk! I have been reading about VAE a lot and this video helps me to understand it even better. Thanks!
@rileyrfitzpatrick 5 years ago
I'm always intimidated when he says it's going to be technical, but then he explains it so concisely.
@ejkitchen 6 years ago
Your videos are quite good. I am sure you will get an audience in no time if you continue. Thank you so much for making these videos. I like the style you use a lot and love the time format (not too short and long enough to do a good overview dive). Well done.
@ArxivInsights 6 years ago
Thank you very much for supporting me man! New video is in the making, I expect to upload it hopefully somewhere next week :)
@TheJysN 3 years ago
I had such a hard time understanding the reparameterization trick; now I finally get it. Thanks for the great explanation. Would love to see more videos from you.
@aryanirvaan 3 years ago
Dude, what a next-level genius you are! You made it all so easy to understand, and just look at the quality of the content. Damn bro!🎀
@515nathaniel 4 years ago
"You cannot push gradients through a sampling node" TensorFlow: *HOLD MY BEER!*
@falsiofalsissimo5313 6 years ago
We needed a serious and technical channel about latest findings in DL. That siraj crap is useless. Keep going! Awesome
@sunnybeta_ 6 years ago
This video suddenly popped up today morning on my home page. Now I know my Sunday will be great. :D
@antonalexandrov4159 2 years ago
Just found your channel and I realize how with some passion and effort you explain things better than some of my professors. Of course, you don't go into too much detail but putting together the big picture comprehensively is valuable and not everyone can do it.
@shivamutreja6427 2 years ago
Your videos are absolutely cracking for a quick revision before an interview!
@famouspeople3499 3 years ago
Great video, better than many tutor lessons at university; the animations simplify things with simple words.
@abylayamanbayev8403 2 years ago
Finally I understood the intuition of sampling from mu and sigma and reparameterization trick. Thanks!
@bradknox 4 years ago
Great video! I have a minor correction: At 6:14, calling the cursive L a "loss" might be a misnomer, since loss is something we almost always want to minimize, and the formula of (reconstruction likelihood - KL divergence) should be maximized. In fact, the Kingma and Welling paper call that term the "(variational) lower bound on the marginal likelihood of datapoint i", not a loss.
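The sign convention this comment points out is why most implementations minimize the *negative* of that lower bound. A toy NumPy sketch (my own code; the squared-error term is a stand-in for the true reconstruction log-likelihood):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.

    Closed form: 0.5 * sum(sigma^2 + mu^2 - 1 - log(sigma^2)).
    It is zero exactly when mu = 0 and sigma = 1.
    """
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))

def neg_elbo(x, x_recon, mu, log_var):
    """Negative of the variational lower bound: a quantity to *minimize*.

    Squared error stands in for -log p(x|z), which is valid up to constants
    for a Gaussian decoder with fixed variance.
    """
    recon_error = float(np.sum((x - x_recon) ** 2))
    return recon_error + gaussian_kl(mu, log_var)

print(gaussian_kl(np.zeros(4), np.zeros(4)))  # 0.0: posterior already N(0, 1)
```

Maximizing (reconstruction likelihood - KL) and minimizing (reconstruction error + KL) are the same optimization, which is why "loss" is loose but common shorthand.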
@robinranabhat3125 6 years ago
Don't you ever stop explaining papers like this. Better than Siraj's video. Just explain the code part a bit longer. And your channel is set.
@pablonapan4698 6 years ago
exactly. show some more code please.
@shrangisoni8758 5 years ago
Yea we can't really do much until we code and see results ourselves.
@pixel7038 5 years ago
Siraj has improved his videos and provides more content. Don’t be stuck in the past ;)
@gagegolish9306 5 years ago
@@shrangisoni8758 He's explained the fundamental concepts, you can take those concepts and translate them to code. He shouldn't have to do that for you.
@dalchemistt7 5 years ago
@@pixel7038 Please stop spreading his name. He has faked his way more than enough already. Read more here: twitter.com/AndrewM_Webb/status/1183150368945049605 and here: www.reddit.com/r/learnmachinelearning/comments/dheo88/siraj_raval_admits_to_the_plagiarism_claims/ And what really bugs me is not the plagiarism (that's bad and shameful in itself) but the level of stupidity this guy showed while plagiarizing: "gates" to "doors" and "complex Hilbert space" to "complicated Hilbert space".
@AjithKumar-gk7bf 5 years ago
Just found this channel ... today... one word Brilliant...!!!
@mshonle 1 month ago
I’ve come from the future of 2024 to say this is a great, comprehensive video!
@ujjalkrdutta7854 6 years ago
Really liked it. Firstly giving an intuition of the concept, its application and then to the objective function while explaining its individual terms, in a way everyone can understand, it was simply professional and elegant. Nice work and thanks!
@giorgiozannini5626 3 years ago
wait how did I not know of this channel. Beautiful explanation, perfectly clear. Thanks for the awesome work!
@nabeelyoosuf 5 years ago
Your explanations are quite insightful and flawless. You are a gifted explainer! Thanks for sharing them. Please keep sharing more.
@basedgod8097 6 years ago
Dang, I actually understand this stuff lol. I think I'm gonna binge watch all your videos once my exams finish. Thanks man :)
@ArxivInsights 6 years ago
based god That's the goal, making hardcore ML stuff accessible! You're very welcome :p Good luck with your exams ;)
@lisbeth04 5 years ago
I love you. I spent so long on this and couldn't understand the intuition behind it, with this video I understood immediately. Thanks
@DistortedV12 5 years ago
This was very lucid. You are gifted at explaining things!
@from-chimp-to-champ1 2 years ago
You help so much with my exams, thanks man, subscribed for more high quality stuff!
@kalehermit 5 years ago
Thank you very much, this is the first time I understand the benefit of reparameterization trick.
@nohandlepleasethanks 6 years ago
Great explanations. This filled two crucial gaps in my understanding of VAEs, and introduced me to beta-VAEs.
@dimitryversteele2410 6 years ago
Great video! Very clear and understandable explanations of hard-to-understand topics.
@RubenNuredini 5 years ago
@1:21 "...with two principal components: ... Encoder... Decoder..." I know that you did it without bad intention but using this terminology may lead to confusion. PCA (Principal Component Analysis) is also used for dimensionality reduction and often compared to autoencoders. In PCA world the term "principal components" has really significant meaning. By the way, great video and keep up with the outstanding work!!!
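For anyone curious about the PCA side of that comparison, here is a toy NumPy sketch (my own illustration, not from the video): projecting onto the first principal component acts like a linear "encoder", and mapping back acts like a linear "decoder".

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 two-dimensional points that mostly vary along the direction (3, 1),
# plus a little isotropic noise.
X = rng.standard_normal((200, 1)) @ np.array([[3.0, 1.0]])
X += 0.1 * rng.standard_normal((200, 2))
Xc = X - X.mean(axis=0)  # PCA assumes centered data

# Principal components via SVD of the centered data matrix.
_, s, vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ vt[0]                 # 1-d code: like a linear "encoder"
X_recon = np.outer(z, vt[0])   # back to 2-d: like a linear "decoder"

print(s)                             # first singular value dominates
print(np.mean((Xc - X_recon) ** 2))  # small reconstruction error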
@dippatel1739 6 years ago
Your videos are awesome; don't lose track because of subscriber counts.
@double_j3867 6 years ago
Subscribed. Very useful -- i'm an applied ML researcher (applying these techniques to real-world problems) so I need a way to quickly "scan" methods and determine what may be useful before diving in-depth. These styles of videos are exactly what I need.
@reinerwilhelms-tricarico344 5 years ago
Great! Crisply clear explanations in such a short time.
@iwanttobreakfree701 8 months ago
6 years ago and I now use this video as a guidance to understanding StableDiffusion
@commenterdek3241 8 months ago
Can you help me out as well? I have so many questions but no one to answer them.
@ashokkannan93 5 years ago
Excellent video!! Probably the best VAE video I saw. Thanks a lot :)
@ativjoshi1049 6 years ago
Your explanation is crisp and to the point. Thanks.
@achakraborti 6 years ago
First video I see from this channel. Immediately subscribed!
@satishbanka 3 years ago
Very good explanation of Variational Autoencoders! Kudos!
@RANJEETTHAKUR1983 6 years ago
Impressive description. Great content...... Hey, Siraj, someone is here :P
@vortexZXR 4 years ago
So many ideas come to mind after watching this video. Well done!
@MeauxTarabein 6 years ago
Very helpful, Arxiv! Keep the good quality videos coming.
@hcgaron 6 years ago
I discovered your channel today and I'm hooked! Excellent work. Thank you so much for your hard work
@leonliang9185 3 years ago
I rarely like videos on youtube but this video is so freaking good for beginners like me!
@adityamalte476 6 years ago
Really appreciate your effort of simplifying research papers for viewers. Keep it up. I want more such videos.
@ChocolateMilkCultLeader 2 years ago
Shared your work with my followers. Keep making amazing content
@ashokkannan93 6 years ago
I would like to see more videos from you. Clear explanation of concept and gentle presentation of math. Great job!
@davidm.johnston8994 6 years ago
Great videos man, keep them going, you're gonna find an audience!
@LeNudel 4 years ago
Thanks for your explanation. My brain was broken after reading the Paper :)
@sethagastya 5 years ago
This was an amazing video! Thanks man. Will stay tuned for more!
@elreoldewage6674 6 years ago
This is really great! Thanks for sharing. I think it would be very informative if you linked to a few of the papers related to the concepts in the video (for those who want to slog through dry text after being sufficiently intrigued by the video).
@ArxivInsights 6 years ago
Elre Oldewage Really good point, I'll add the links tonight!
@elreoldewage6674 6 years ago
Thanks :)
@antoinesueur9289 6 years ago
Great content! The format and delivery are perfect; hope to see more of these videos :) Are you planning on doing a video on Capsule Networks in the future?
@ArxivInsights 6 years ago
More videos are definitely coming, the next one will be on novel state-of-the-art methods in Reinforcement Learning! I don't plan on making a video on Capsule Nets since there is an amazingly good video by Aurélien Géron on that topic and there's no way I can explain it any better than he did, no need to reinvent the wheel :p Here is his video: kzbin.info/www/bejne/poGxaZdmephsZpI
@nildiertjimenez7486 4 years ago
One minute watching this video is enough to become a new subscriber! Awesome
@snippletrap 6 years ago
Sublime text editor is so aesthetic. Anyway, yes, great point, the input dimensionality needs to be reduced. Even the original Atari DeepMind breakthrough relied on a smaller (handcrafted) representation of the pixel data. With the disentangled variational autoencoder it may be feasible or even an improvement to deal with the full input.
@samyogdhital 2 months ago
bro please continue making these videos. we love you and we want you to return and make same kind of video on various research papers on ai, robotics and all. Please reply if you see this comment. :D
@DanielWeikert 6 years ago
Great work. Thanks a lot! Highly appreciate your effort. Creating these videos takes time but I still hope you will continue.
@tamerius1 6 years ago
You're explaining this very well! Finally an explanation on an AI technique that's easy to follow and understand. Thank you.
@SeanLGoldberg 6 years ago
Great episode. Came here for a good explanation of VAEs, but was blown away when you dug into Beta-VAEs and the Deepmind RL paper. Have you read the group's newest paper "SCAN" on combining Beta-VAEs with symbol representation and manipulation?
@ArxivInsights 6 years ago
Sean Goldberg Haven't had the time yet, it's somewhere in my 658 open chrome tabs though :p
@1apiano 6 years ago
I liked arxiv.org/pdf/1709.05047.pdf more, but the SCAN paper is also cool. BTW compliments for your channel, it's the only Deep Learning channel which is worth following.
@matthewbascom 4 years ago
I like the subtle distinction you made between the disentangled variational auto-encoder versus the normal variational auto-encoder: Changing the first dimension in the latent space of the disentangled version rotates the face while leaving everything else in the image unchanged. But changing the first dimension in the normal version not only rotates the image, but changes other features as well. Thank you. Me gleaning that distinction from Higgins, et al. Beta-VAE Deepmind paper would be unlikely...
@jeremyuzan1169 3 years ago
Excellent Content. I'm using VAE for music generation. Your explainations are very interesting. Thanks again Jeremy Uzan IRCAM Paris
@wy2528 5 years ago
mapping the latent vectors is really smart
@mdjamaluddin_ntb 5 years ago
Good explanation. Enough relevant math to support the explanation so that we can understand the insight.
@venkatbuvana 5 years ago
Thanks a lot for sharing such a succinct summarization of VAEs. Very helpful!
@Minitomate 5 years ago
This technology could be really useful for the police to identify suspects. I really liked your video!
@seattle-bfg 6 years ago
Really really awesome channel!!! Look forward to watching more of your videos!
@bernardfinucane2061 6 years ago
It would be interesting to apply this to word embeddings. There is the well-known example that king - queen = man - woman (so king - man + woman = queen), but the question that immediately comes up is: what are the "real" semantic dimensions? I don't think there is an answer to this in the short term, because of the homonym problem, but it is interesting to think that this kind of network could discover such abstract features.
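The analogy arithmetic in this comment can be checked mechanically. The 3-d "embeddings" below are hand-made hypothetical values, chosen only to illustrate the idea; real vectors would come from a trained model such as word2vec or GloVe:

```python
import numpy as np

# Hand-made, hypothetical 3-d "embeddings": the second coordinate is set up
# to act like a gender dimension (0.8 vs 0.2).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.3]),
    "woman": np.array([0.1, 0.2, 0.3]),
}

def nearest(vec, table):
    """Return the word whose embedding is most cosine-similar to vec."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(table, key=lambda w: cos(vec, table[w]))

analogy = emb["king"] - emb["man"] + emb["woman"]
print(nearest(analogy, emb))  # "queen"
```

A disentangled latent space is the same wish stated for images: one coordinate per interpretable factor, so that vector arithmetic along a single axis changes exactly one property.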
@xandermay7705 6 years ago
Holy crap, another Xander interested in machine learning :D
@kristyleist3318 4 years ago
This is great! Keep going, we need you! Don't stop making amazing videos like this
@mrdbourke 6 years ago
Epic video Xander! I learned a lot from your explanation. Now to try and implement some code!
@davidenders9107 1 year ago
Thank you! This was comprehensive and comprehensible.
@maxhorowitz-gelb6092 6 years ago
Wow! Great video. Very concise and easy to understand something quite complex.
@ck1847 2 years ago
Thanks, this video clarified many things from the original paper.
@TheRohr 6 years ago
Many thanks for the video and the paper explanation! Really, really helpful; keep that paper-explanation content coming!
@user-or7ji5hv8y 5 years ago
Your explanation is so clear.
@niladrishekhardutt 6 years ago
Great tutorial. Please keep adding more content. Good to see that you are focusing on the technicalities unlike Siraj but I still find that a little more Math would be good. Keep it up!
@lingling1411 4 months ago
i will ace this topic in my exam with this one! thanks, man!
@HeduAI 6 years ago
Amazing explanation of a complicated topic! Thank you so much!!!!
@emanehab510 6 years ago
Don't stop making amazing videos like this
@forgotaboutbre 1 month ago
Thank god I got a Masters in CS. I could be wrong, but I imagine these topics are much harder to follow without decades of technical education.
@oguretsagressive 5 years ago
6:14 - these equations are tough! Looks like mathematical jargon, as if the guys were saying "ok, we don't have time to explain, but you'll figure out".
@p4k7 10 months ago
Great video, and the algorithm is finally recognizing it! Come back and produce more videos?
@phattran4858 5 years ago
Thank you very much, I was trying to understand it, but it's much easier when I found this video!