Variational Autoencoders | Generative AI Animated

57,772 views

Deepia

1 day ago

Comments: 193
@essentiallifter 4 months ago
this account is seriously underrated, will definitely blow up soon
@melanynadine972 4 months ago
Agree, I just subscribed ❤
@joshuat6124 3 months ago
Agreed 💯
@authenticallysuperficial9874 2 months ago
💯
@Higgsinophysics 4 months ago
I can't believe how calmly and clearly you explain difficult topics!
@Deepia-ls2fo 4 months ago
Well that's the magic of text to speech 😁
@viewer8221 4 months ago
haha I knew it was AI voice
@christaylor-gz6mi 1 day ago
Love this. You're going to blow up! Content severely under the radar at the moment
@AtiqurRahman-gj6mg 4 months ago
Finally, the concept of VAE is clear. Thanks a ton.
@Deepia-ls2fo 4 months ago
You're welcome, thanks for the comment!
@xichengwang1319 3 months ago
I first saw this on Chinese media with Chinese subtitles, then came back to subscribe to the original author. The clearest introduction I have ever seen, with such nice, proper animation. It will blow up for sure.
@Deepia-ls2fo 3 months ago
Well thank you! Can you send me more info about that through my email? ytdeepia@gmail.com
@anthonyortega358 4 months ago
This channel is amazing, you should be very proud of what you have produced Thibaut!!
@Deepia-ls2fo 4 months ago
Thanks!
@valentinfontanger4962 11 days ago
The animation on the Bayesian stats almost made me cry. It was beautiful.
@Pokojsamobojcow 4 months ago
Wow, it must be a lot of work to obtain such amazing animations! The video is really dynamic and easy to follow, congratulations
@julius4858 4 months ago
The program is called Manim, it's from 3blue1brown
@yibozhong9557 2 days ago
This video is so well explained for a Gen AI beginner! Love it!
@ashutoshpadhi2782 3 months ago
The amount of effort you put into these works is really commendable. You are a blessing to humanity.
@yusufakmalov-l5n 2 days ago
Such a great video! I'm always looking forward to your uploads.
@Tunadorable 4 months ago
3blue1brown specifically for AI models??? sign me up!!! I'll for sure be linking to you in my own vids whenever relevant, this was great
@BenjaminEvans316 4 months ago
Great video. Looking forward to your video on contrastive learning, it is my favourite subject in deep learning. Your videos combine great production skills (animations, colour selection, movement between frames) with in-depth understanding of complex concepts.
@Deepia-ls2fo 4 months ago
Thanks for the kind words!
@Koosx2BestofYoutubeCoc 28 days ago
Literally one of the best channels to learn deep learning
@diabl2master 7 days ago
Brilliant video!! Can't wait for the videos on extensions of VAE!
@babatundeonabajo 3 months ago
This is a really good video, and the animations are top-notch. I feel this video is good not just for those learning about AI but also those learning statistics.
@AICoffeeBreak 4 months ago
Love the clarification at 00:13, because I've also felt that the misconception is widespread. I've heard people say: I am not using GANs anymore, I am using Generative AI. The word "Generative" is literally what the G in GAN stands for. 😂😂
@Deepia-ls2fo 4 months ago
I made this intro because I couldn't stand the number of "GenAI experts" on my LinkedIn feed :(
@muhammed2174 3 months ago
You're a master at your craft, it is a testament to your studies!
@Bwaaz 4 months ago
Amazing quality, hope the channel takes off! Great use of Manim
@Deepia-ls2fo 4 months ago
Thanks!
@BeeepBo0op 4 months ago
Thank you for finally making me understand the reparametrization trick!! It was thrown at me several times during a DRL class I took last year and I never really understood what we did. This made it much, much clearer, thank you! Also: great video overall!
@Deepia-ls2fo 4 months ago
Glad it helped!
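The reparametrization trick this exchange refers to can be sketched in a few lines of NumPy; a minimal sketch, where the 16-dimensional latent and the shapes of `mu` and `log_var` are illustrative assumptions, not values from the video:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparametrization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    All randomness is pushed into the parameter-free noise eps, so
    gradients can flow through mu and log_var during backpropagation.
    """
    sigma = np.exp(0.5 * log_var)        # log-variance -> standard deviation
    eps = rng.standard_normal(mu.shape)  # the randomness lives only here
    return mu + sigma * eps

mu = np.zeros(16)       # hypothetical encoder outputs for one image
log_var = np.zeros(16)  # log sigma^2 = 0  ->  sigma = 1
z = sample_latent(mu, log_var)
print(z.shape)  # (16,)
```

Without this rewrite, sampling `z` directly from N(mu, sigma^2) would be a non-differentiable operation with respect to the encoder's outputs.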
@MutigerBriefkasten 4 months ago
Thank you again for the great content and the amazing animations 🎉💪👍 Keep going, hopefully your channel explodes with more subscribers... I will recommend it for sure to other people
@Deepia-ls2fo 4 months ago
Thank you :)
@jiechenhuang9019 27 days ago
This video is pure gold. You introduced VAE in such a clear and intuitive way, especially by contrasting the latent space between VAE and normal autoencoders. A question: why does the VAE's latent space have features like smooth interpolation, which other autoencoders fail to learn? Thank you😊
@Deepia-ls2fo 27 days ago
Thanks, the sort of "continuous" latent space is due to the regularization we impose during training. The latent space is supposed to fit a Gaussian with mean 0 and variance 1, while in regular autoencoders no structure is imposed on the latent space except for its dimension.
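The regularization mentioned in this reply has a closed form when the encoder outputs a diagonal Gaussian; a minimal NumPy sketch, where the 4-dimensional latent is an illustrative assumption:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A latent distribution that already matches the prior has zero penalty...
assert kl_to_standard_normal(np.zeros(4), np.zeros(4)) == 0.0
# ...and the penalty grows as the encoder drifts away from N(0, I).
print(kl_to_standard_normal(np.ones(4), np.zeros(4)))  # 2.0
```

A plain autoencoder trains without this term, which is why nothing pulls its latent codes toward a shared, densely populated region.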
@jiechenhuang9019 26 days ago
@@Deepia-ls2fo Thanks! I'm also a PhD student in CS. I'll definitely recommend this channel to my friends. BTW, is your next video about diffusion models? Can't wait to see!
@Deepia-ls2fo 26 days ago
@@jiechenhuang9019 Yes, the next video will be on diffusion models indeed
@kacperzaleski8125 4 months ago
can't wait for the next video! this was great!!!!!
@Deepia-ls2fo 4 months ago
Thank you!
@vasoyarutvik2897 20 days ago
The video quality is very high and the video is very informative. Please upload more videos related to machine learning, deep learning and generative AI. Keep it up, love from India
@ahmedoumar3741 1 month ago
Your videos and explanations are really excellent, please keep doing this.
@杨卢老 1 month ago
Thank you so much for this video, it solved a problem that had been bothering me for a long time!!! Amazing! 😍
@Deepia-ls2fo 1 month ago
You're welcome!
@StratosFair 1 month ago
Masterpiece of a video
@TimMattison 1 month ago
I'm still a bit too much of a n00b to get all of the math involved, but the videos you're creating are great, please keep at it!
@juantarazona1533 15 days ago
Absolute godsend of a video
@drannoc9812 4 months ago
Amazing content :D I hope you'll do your next videos on VQ-VAE and VQ-VAE 2, I enjoyed reading those papers so much!
@Deepia-ls2fo 4 months ago
Thanks, I really gotta take another look at the paper
@mostafasayahkarajy508 4 months ago
Thank you very much for providing and sharing the lecture. Excellent explanation and such a high-quality video!
@Deepia-ls2fo 4 months ago
Thank you!
@azharafridi9619 18 days ago
Really helpful, thank you so much... If possible, please make a complete playlist like this... it would be a great work for society. Once again, thank you so much
@shivakumarkannan9526 27 days ago
Excellent video. Colored diagrams help a lot.
@shangliu6285 4 months ago
Perfect video, it makes the VAE easy to understand. Subscribed!
@Deepia-ls2fo 4 months ago
Thanks!
@abelhutten4532 4 months ago
Great visualizations, and good explanation! Congratulations and thanks for the nice video :)
@Deepia-ls2fo 4 months ago
Thank you!
@essentiallifter 2 months ago
Just a tip: can you make it super clear that the reason we sample in the middle is to produce a nice continuous latent space where the different dimensions encode different meanings?
@diabl2master 7 days ago
10:10 Excellent visualisation. I really understood it here.
@flueepwrien6587 4 months ago
Finally I understand this concept.
@abdelmananabdelrahman4099 4 months ago
Great video 🎉. I've never had such a great explanation of VAE. Waiting for VQ-VAE.....
@Deepia-ls2fo 4 months ago
Thank you!
@Dylan-du1uu 28 days ago
Such a great video that helps me a lot. Thanks!
@carlaconti1075 3 months ago
Thanks to you everything is clear now, thank you Deepia
@Deepia-ls2fo 3 months ago
thanks miss conti
@vitorfranca80 4 months ago
Incredible explanation!! Thank you for sharing your knowledge! 😁😁
@Deepia-ls2fo 4 months ago
Thanks!
@deepniba 2 months ago
Thank you, still looking forward to the VAE variants videos
@authenticallysuperficial9874 2 months ago
The audio sounds like it comes from under water at 2:09, btw
@Deepia-ls2fo 2 months ago
Thank you, I had some issues with copyrighted music which led to YouTube removing it but also degrading the audio...
@hnull99 14 days ago
I thought my headset was bugging xD
@EkalabyaGhosh-g3i 4 months ago
This channel needs more subscribers
@Deepia-ls2fo 4 months ago
Thank you!
@lucianovidal8721 3 months ago
Great content! It was a really good explanation
@HenrikVendelbo 4 months ago
Thanks
@Deepia-ls2fo 4 months ago
Oh my, thank you so much!
@HenrikVendelbo 4 months ago
I find math speak very hard to grok. I was always good at math, but always got turned off by the navel gazing and geekery. You do a great job keeping it engaging without assuming that I am a math geek
@Deepia-ls2fo 4 months ago
@@HenrikVendelbo Yeah, sometimes it do be like that in math classes. I think it's important to look at equations when they tell us something about the models, but computational tricks or complex equations are not that interesting.
@cs-cs4mj 3 months ago
Hey, so well explained, thanks for the video!! You really nailed those animations as well. It would be cool to make a video on Adam/RMSProp too, I have a hard time properly understanding why they work. Anyway, much love to you my friend
@samuelschonenberger 2 months ago
Watching this while my first VAE is training
@JONK4635 4 months ago
Really amazing content, thank you for spreading knowledge! Thanks a lot :)
@Deepia-ls2fo 4 months ago
Thanks for the comment, it keeps me motivated :)
@samcoding 7 days ago
This is so clear! Can you please create a video on Sparse Autoencoders?
@Deepia-ls2fo 7 days ago
It's somewhere on the to-do list!
@KennethTrinh-cm6cp 4 months ago
thanks for the wonderful animations and explanation
@Deepia-ls2fo 4 months ago
Thanks
@griterjaden 3 months ago
Wowowowowowow 🎉🎉🎉 amazing video for VAE. Pls ~ make more videos
@Deepia-ls2fo 3 months ago
@@griterjaden Thanks, I'm on it :)
@authenticallysuperficial9874 2 months ago
Thanks!
@EdeYOlorDSZs 2 months ago
top tier video!
@chriskamaris1372 4 months ago
Also, at 11:39 and 12:39 you refer to σ as the variance. But isn't σ the standard deviation and σ^2 the variance? (Nevertheless, the video is perfect. Excellent work!)
@Deepia-ls2fo 4 months ago
Thanks, indeed there might be some mistakes!
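For reference, the commenter is right about the convention: in the usual VAE setup with a univariate latent dimension, σ denotes the standard deviation and σ² the variance, and both the KL term and the reparametrized sample use them as follows:

```latex
\mathrm{KL}\!\left(\mathcal{N}(\mu,\sigma^{2})\,\middle\|\,\mathcal{N}(0,1)\right)
  = \tfrac{1}{2}\left(\sigma^{2} + \mu^{2} - 1 - \log \sigma^{2}\right),
\qquad
z = \mu + \sigma\,\varepsilon, \quad \varepsilon \sim \mathcal{N}(0,1).
```

So the sampled noise is scaled by σ, the standard deviation, not by the variance σ².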
@hannes7218 2 months ago
good stuff! keep it going
@nihilsson 4 months ago
Great vid! Commenting for algorithmic reasons
@Deepia-ls2fo 4 months ago
Thanks!
@chcyzh 2 months ago
Thank you very much! It's pretty clear
@chnchn-z9m 13 days ago
Thank you for your explanation
@5h4n6 1 month ago
Thank you so much for the video!
@잇준-v7m 14 days ago
Such a helpful video, I want to see more about advanced VAEs
@Deepia-ls2fo 14 days ago
Thanks, I might revisit VAEs when I talk about latent diffusion :)
@Jay_Tau 4 months ago
This is excellent. Thank you!
@Deepia-ls2fo 4 months ago
Thanks!
@metarestephanois3262 10 days ago
You said that in the beginning the idea is that sampling from p(z|x) gives a latent vector z that has likely come from the original distribution p(x). What does it mean exactly that a value can "likely come from" a distribution? Thanks in advance for an answer.
@Deepia-ls2fo 7 days ago
Well, it's a good question; this is somewhat of a misuse of language. What it means is that if we take a latent z, it should have a high probability with respect to this conditional probability distribution. Knowing how to sample from this distribution means that if we have an image from p(x), we should be able to find latents that are probable with respect to this image. Hope this clears things up :)
@metarestephanois3262 3 days ago
@ Ok, I think that helped, thank you
@michael91703 4 months ago
Is this manim?!!! Nice work dude!
@Deepia-ls2fo 4 months ago
It is indeed Manim, thank you!
@guilhermegomes4517 4 months ago
Great video!
@Deepia-ls2fo 4 months ago
Thanks!
@woowooNeedsFaith 1 month ago
15:49 - What is convex interpolation?
@Deepia-ls2fo 1 month ago
Basically a linear interpolation between two points, with "t" in front of one of the points and "(1-t)" in front of the other. The set of all these points is convex, hence "convex interpolation" :)
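The convex interpolation described in this reply can be sketched in NumPy; the two latent vectors here are illustrative placeholders, and decoding each interpolant with a trained decoder (as in the video) is only indicated in a comment since no model is assumed:

```python
import numpy as np

def convex_interp(z1, z2, t):
    """Convex (linear) interpolation: (1 - t) * z1 + t * z2 for t in [0, 1]."""
    return (1.0 - t) * z1 + t * z2

z1, z2 = np.zeros(3), np.ones(3)  # two hypothetical latent vectors
for t in np.linspace(0.0, 1.0, 5):
    z = convex_interp(z1, z2, t)
    # decoder(z) would produce the intermediate images shown at 15:49
    print(t, z)
```

Because every interpolant stays on the segment between z1 and z2, a "continuous" VAE latent space decodes these points into a smooth morph between the two endpoint images.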
@markbuckler4793 4 months ago
Excellent video, I subscribed because of it :)
@Deepia-ls2fo 4 months ago
thanks!
@rishidixit7939 2 months ago
At 7:45, why is the assumption that p(z) is a normal distribution important? Are further calculations not possible without it? And at 8:01, why is the posterior assumed to be Gaussian?
@Deepia-ls2fo 2 months ago
@@rishidixit7939 Hi again, indeed further calculations are intractable without assuming both the prior and the posterior to be Gaussian. Some other research works have replaced these assumptions with other well-known distributions, such as mixtures of Gaussians, which results in another training objective.
@notu483 4 months ago
12:40 we scale by the standard deviation, not the variance
@diabl2master 7 days ago
4:50 Wouldn't you use subscripts to show they are different functions?
@Deepia-ls2fo 7 days ago
@@diabl2master Yes, you can! There are a lot of different conventions in stats
@hnull99 14 days ago
07:05 I don't really get the reasoning here. How is it that an image (x) generates the latent variables (z)? I would have thought that the latent variables generate the images instead.
@Deepia-ls2fo 14 days ago
Well, you can generate z from x using the encoder!
@Chadpritai 4 months ago
Next video on diffusion models please, thanks in advance ❤
@Deepia-ls2fo 4 months ago
It's on the to-do list, but the next 3 videos will be about self-supervised learning!
@user-wr4yl7tx3w 1 month ago
Great content
@sillasrocha9623 4 months ago
Hey, could you make a video talking about SwAV in unsupervised learning?
@nicolastortora7356 2 days ago
Hi sir, at 7:54 you said something like: "the knowledge of the prior P(z) allows us to compute the likelihood P(x|z)". This is, in general, not true. But in this scenario it seems to be the case, and you explain it at 9:40. Am I correct? Actually, we compute the EXPECTATION[log P(x|z)], not really P(x|z) itself, and we do that by just pixel-wise comparison. 1st question: Are we fine with this difference? Furthermore, the EXPECTATION of a (log-)probability..? This confuses me a bit. 2nd question: What really is the expectation of a probability? By the way, really big compliments for the video, the animation and the quality of your work, thanks a lot!! Nicolas
@Bikameral 4 months ago
Great content! What software are you using to animate?
@Deepia-ls2fo 4 months ago
Thanks! For most animations I use Manim, a Python module originally made by Grant Sanderson from 3blue1brown.
@Bikameral 4 months ago
@@Deepia-ls2fo thank you
@HosseinKhosravipour 1 month ago
So great. Thanks
@metarestephanois3262 13 days ago
Thank you for the video, but why is the data reconstruction term an L2? I thought this was maximizing log-likelihood.
@Deepia-ls2fo 13 days ago
Thanks for your comment, for Gaussian noise this is equivalent :)
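The equivalence in this reply comes from modelling the decoder output as a Gaussian with fixed variance around its reconstruction x̂(z):

```latex
\log p(x \mid z)
  = -\frac{\lVert x - \hat{x}(z)\rVert^{2}}{2\sigma^{2}} + \text{const},
```

so maximizing the Gaussian log-likelihood is the same as minimizing the L2 reconstruction error, up to a constant scale.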
@maths.visualization 1 month ago
Can you share the video code?
@Deepia-ls2fo 1 month ago
The link is in the description!
@English-bh1ng 4 months ago
It is the best VAE visualization.
@prabaldutta1935 3 months ago
Amazing graphics and explanation. I have one question: if we use the MNIST dataset (like what is shown in the video), does it mean that mu and sigma are vectors of dimension 10x1? What if we use a dataset where the number of different classes is unknown? What will be the dimension of mu and sigma in that case?
@Deepia-ls2fo 3 months ago
Thank you, the latent dimension is not directly related to the number of classes in your dataset. In fact, a very good encoder could classify the 10 classes perfectly on a single dimension, but that makes things way harder for the decoder to reconstruct. As you mention, in most datasets we don't even know the number of classes or the number of relevant features, so we just take ad hoc latent dimensions (16, 32) and see if that's enough for the encoder to produce a useful representation, and for the decoder to reconstruct correctly.
@prabaldutta1935 3 months ago
@@Deepia-ls2fo Thanks a lot for your response. Can't wait for your next video.
@neerajsingh-xf3rp 1 month ago
0:23 does it create data from scratch?
@Deepia-ls2fo 1 month ago
Yep, basically modern image generation techniques (diffusion models / flow matching) create new data starting from pure noise!
@neerajsingh-xf3rp 1 month ago
@@Deepia-ls2fo does it learn from existing data? If yes, how does it generate data from scratch? Denoising involves learning the state and adding some randomness in that state only 🤔
@cupatelj52 4 months ago
great content bro.
@Deepia-ls2fo 4 months ago
Thanks
@aregpetrosyan465 3 months ago
This question came to my mind: What would happen if we ignored the encoder part and tried to train only the decoder? For example, by sampling from a standard Gaussian vector and attempting to reconstruct a digit. I don't really understand the purpose of the encoder.
@Deepia-ls2fo 3 months ago
If you don't condition at all the latent space from which you are sampling, I'm not sure the model will be able to learn anything. Here the encoder explicitly approximates the posterior distribution in order for us to then sample from the distribution of images. This is all a theoretical interpretation of course, but learning to reconstruct any digit from pure unconditioned noise seems a bit hard! Diffusion models kind of do it (in image space), but this usually takes a lot of steps. Anyway, the experiment you describe would be very easy to implement, if you want to try it out. :D
@gw1284 12 days ago
Hey, cool animated lecturing, can you share how it was made and what software was used? Thanks
@Deepia-ls2fo 12 days ago
Thanks, I use DaVinci Resolve for the video editing, the Manim Python library for the animations and ElevenLabs for the voice!
@gw1284 12 days ago
@Deepia-ls2fo Thank you. Both the content and format of your videos are of high quality. I will look into those tools.
@notu483 4 months ago
Thanks for the video ❤😊
@Deepia-ls2fo 4 months ago
Thank you!
@i2c_jason 4 months ago
Is there a statistical property or proof that might show a GraphRAG "transfer function" to be the same as a VAE or maybe a CVAE? Perhaps in terms of entropy? It would be interesting to make two identical systems, one using a VAE and one using GraphRAG, and see if they match up statistically. I can't shake the idea that software 3.0 might be the more sound approach for developing new GenAI tools vs software 2.0.
@Deepia-ls2fo 4 months ago
Hi Jason! Unfortunately I know close to nothing about RAG, so I have no idea if what you describe might be feasible. I hear about RAG everywhere these days, I should get up to date on that.
@i2c_jason 4 months ago
@@Deepia-ls2fo I'd love to hear your take on it if you ever do a deep dive.
@EigenA 2 months ago
Great video. What is your educational background?
@Deepia-ls2fo 2 months ago
Thanks! Bachelor in math, bachelor in computer science, master in AI/ML, currently doing a PhD in applied maths and deep learning
@EigenA 2 months ago
@ Legendary. Good luck on the PhD! I'm a 3rd-year EE PhD student; you have phenomenal content. Looking forward to watching your channel grow.
@3B1bIQ 3 months ago
🤍Please, can you create a course to learn the Manim library from scratch to professionalism? I need it very much. Please reply ❤😊
@Deepia-ls2fo 3 months ago
Thanks for your comment, I would love to but I have many other topics I want to talk about first, and not much time on my hands! There are very good resources on YouTube though, if you want to start to learn Manim. :)
@3B1bIQ 3 months ago
@@Deepia-ls2fo Thank you, but I hope that you find enough time to create a course to learn Manim, even if it is one video every week; it would contribute to increasing your views, because your explanations are very beautiful and clear, and I can understand them easily even though I am an Arab 🤍☺️
@awsaf49 3 months ago
Hey, nice video!
@shashankjha8454 2 months ago
do u use manim for animations ?
@Deepia-ls2fo 2 months ago
@@shashankjha8454 Yes indeed!
@syc52 4 months ago
Could you please make a video talking about why diffusion models, GANs, and VQ-VAE can make images sharper?
@NguyenAn-kf9ho 3 months ago
Are there videos with the same approach for Reinforcement Learning :D ????!
@Deepia-ls2fo 3 months ago
Hi, unfortunately I don't know anything about Reinforcement Learning, so I don't think I'll be able to make videos about that any time soon. However, I believe Steve Brunton has very good videos on the topic :)
@rrttttj 4 months ago
Great video! However, I am slightly confused: for your loss function you are subtracting the KL divergence rather than adding it. Wouldn't you want to add it, to penalize the difference between the latent distribution and the standard normal distribution? At least, all implementations I have seen add the KL divergence rather than subtract it. Edit: I understand my mistake now!
@Deepia-ls2fo 4 months ago
Hi! Thanks for the comment, I'm afraid I might have flipped a sign at one point. When you derive the ELBO (which you then maximize via training), there is a minus sign appearing in front of the KL. But in practice you minimize the opposite of this quantity, which is equivalent to minimizing the L2 plus the KL. I hope it's not too confusing. :)
@rrttttj 4 months ago
@@Deepia-ls2fo Oooooh I understand, so the ELBO is the quantity that should be maximized, and you were denoting the ELBO with L(x), not the loss itself. I understand now, thanks!
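The sign convention resolved in this exchange can be made concrete with a short NumPy sketch; the toy numbers are illustrative, and the constants of the Gaussian log-likelihood are dropped:

```python
import numpy as np

def elbo_terms(x, x_hat, mu, log_var):
    """ELBO L(x) = reconstruction term - KL term, which training MAXIMIZES.

    Implementations usually minimize the negative ELBO instead, which is
    the L2 reconstruction error PLUS the KL penalty -- hence the "added"
    KL seen in most code bases.
    """
    recon = -np.sum((x - x_hat) ** 2)  # Gaussian log-likelihood up to constants
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    elbo = recon - kl   # minus sign in front of the KL, as in the derivation
    loss = -elbo        # = squared error + KL, as in most implementations
    return elbo, loss

x = np.array([1.0, 0.0])       # toy "image"
x_hat = np.array([0.5, 0.5])   # toy reconstruction
elbo, loss = elbo_terms(x, x_hat, np.zeros(2), np.zeros(2))
assert loss == -elbo
print(loss)  # 0.5
```

Both conventions describe the same objective; the minus sign only moves depending on whether you write the quantity being maximized or the one being minimized.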
@homakashefiamiri3749 1 month ago
it was great!
@arno7198 4 months ago
DeepIA absolutely killed it with this video on Variational Autoencoders. As a government official, medical doctor, and law PhD, it's not often I come across something that genuinely teaches me something new. But this video? Wow. The way Variational Autoencoders map data to a latent distribution instead of a fixed point, and the balance between reconstruction loss and Kullback-Leibler divergence, was explained so clearly that I picked it up right away. Whether I'm shaping policies, treating patients, or analyzing legal cases, this video added value in ways I didn't expect. Props to DeepIA for delivering content that even someone as busy (and brilliant) as me can appreciate! And let's not forget the genius behind it all. Honestly, the mind that creates content like this is nothing short of extraordinary. I don't say this lightly, but DeepIA might just be the most insightful, brilliant, and generous creator on YouTube. The precision, the depth, the clarity: it's rare to find someone who can not only understand such complex topics but also make them accessible to mere mortals like us. It's an honor to witness this level of mastery. Truly, we're not worthy.
@Deepia-ls2fo 4 months ago
thx 🤖
@martinferrari2903 4 months ago
No less 🤣
@이승민-o7m 20 days ago
awesome
@frommarkham424 1 month ago
This vid fye
@PascalYamlome 14 days ago
Wao!! Just Wao!!!
@Deepia-ls2fo 14 days ago
thanks 😁