Variational Auto Encoder (VAE) - Theory

21,246 views

Meerkat Statistics

A day ago

VAEs are a mix of Variational Inference (VI) and autoencoder neural networks, used mainly for generating new data. In this video we outline the theory behind the original paper, including a look at regular autoencoders, variational inference, and how the two combine to form the VAE.
Original Paper (Kingma & Welling 2014): arxiv.org/pdf/...
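For readers who want to see the theory as code, below is a minimal VAE sketch in PyTorch. It is not the implementation from the video or the paper; the architecture, layer sizes, and names are illustrative assumptions.

```python
# Minimal VAE sketch (illustrative assumptions, not the video's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        # Encoder q_phi(z|x): maps x to the mean and log-variance of a Gaussian.
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p_theta(x|z): maps z back to pixel probabilities.
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, eps ~ N(0, I): the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def loss_fn(x_hat, x, mu, logvar):
    # Negative ELBO = reconstruction loss + KL(q_phi(z|x) || N(0, I)).
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```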
The first and only Variational Inference (VI) course online!
Become a member and get full access to this online course:
meerkatstatist...
** 🎉 Special YouTube 60% discount on the yearly plan - valid for the first 100 subscribers; voucher code: First100 🎉 **
“VI in R” Course Outline:
Administration
  Administration
Intro
  Intuition - what is VI?
  Notebook - Intuition
  Origin, Outline, Context
KL Divergence
  KL Introduction
  KL - Extra Intuition
  Notebook - KL - Exercises
  Notebook - KL - Additional Topics
  KL vs. Other Metrics
VI vs. ML
  VI (using KL) vs. Maximum Likelihood
ELBO & “Mean Field”
  ELBO
  “Mean Field” Approximation
Coordinate Ascent VI (CAVI)
  Coordinate Ascent VI (CAVI)
  Functional Derivative & Euler-Lagrange Equation
  CAVI - Toy Example
  CAVI - Bayesian GMM Example
  Notebook - Normal-Gamma Conjugate Prior
  Notebook - Bayesian GMM - Unknown Precision
  Notebook - Image Denoising (Ising Model)
Exponential Family
  CAVI for the Exponential Family
  Conjugacy in the Exponential Family
  Notebook - Latent Dirichlet Allocation Example
VI vs. EM
  VI vs. EM
Stochastic VI / Advanced VI
  SVI - Review
  SVI for Exponential Family
  Automatic Differentiation VI (ADVI)
  Notebook - ADVI Example (using STAN)
  Black Box VI (BBVI)
  Notebook - BBVI Example
Expectation Propagation
  Forward vs. Reverse KL
  Expectation Propagation
Variational Auto Encoder
Why become a member?
All video content
Extra material (notebooks)
Access to code and notes
Community Discussion
No Ads
Support the Creator ❤️
VI (restricted) playlist: bit.ly/389QSm1
If you’re looking for statistical consultation, someone to work on interesting projects, or training workshops, visit my website meerkatstatist... or contact me directly at david@meerkatstatistics.com
~~~~~ SUPPORT ~~~~~
Paypal me: paypal.me/Meer...
~~~~~~~~~~~~~~~~~
Intro/Outro Music: Dreamer - by Johny Grimes
• Johny Grimes - Dreamer

Comments: 22
@Omsip123 29 days ago
Thanks for your efforts, very well explained
@paedrufernando2351 9 months ago
@6:10 VI starts. The rundown was awesome, puts everything into perspective.
@evgeniyazarov4230 10 months ago
Great explanation! The two ways of looking at the loss function are insightful.
@YICHAOCAI 9 months ago
Fantastic video! This effectively resolved my queries.
@123sendodo4 A year ago
Very clear and useful information!
@tassangherman 2 months ago
You're awesome!
@shounakdesai4283 8 months ago
awesome video.
@minuklee6735 6 months ago
Thank you for the awesome video! I have a question @11:35. I don't clearly understand why g_\theta takes x. Am I correct that it does not take x if g_\theta is a Gaussian distribution, as it would just be g_\theta(\epsilon) = \sigma \cdot \epsilon + \mu (where \sigma and \mu come from \theta)? Again, I appreciate your video a lot!
@MeerkatStatistics 5 months ago
Although not explicitly denoted, q(z) is also dependent on the data. This is why g_\theta will usually also depend on x. I didn't want to write q(z|x) as in the paper, because it is not a posterior, but rather a distribution whose parameters you tweak until it reaches the true posterior p(z|x). I have a simple example (for the CAVI algorithm) on my website (for members) meerkatstatistics.com/courses/variational-inference-in-r/lessons/cavi-toy-example/ and also a somewhat more elaborate example free on YouTube kzbin.info/www/bejne/bnXdeoOQo79kaM0si=8Un505QqOEtij9XV - in both cases you'll see a q(z) that is a Gaussian, but whose parameters depend on the data x.
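To make the point in this reply concrete, here is a small sketch (not from the video; the encoder architecture, sizes, and names are assumptions) showing why g_\theta takes x: the Gaussian's \mu and \sigma are themselves outputs of a network fed with x, so z = g_\theta(\epsilon, x) = \mu(x) + \sigma(x) \cdot \epsilon.

```python
# Sketch: the reparameterization g depends on x because mu and sigma are
# produced by a network that is fed x. Sizes and names are assumptions.
import torch
import torch.nn as nn

z_dim = 20
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))

def g(eps, x):
    # q's Gaussian parameters are functions of the data x via the encoder,
    # so z = g(eps, x) = mu(x) + sigma(x) * eps.
    mu, log_sigma = encoder(x).chunk(2, dim=-1)
    return mu + log_sigma.exp() * eps

x = torch.randn(1, 784)      # a stand-in data point
eps = torch.randn(1, z_dim)  # noise from a fixed, parameter-free N(0, I)
z = g(eps, x)                # differentiable w.r.t. the encoder's parameters
```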
@evaggelosantypas5139 A year ago
Hey, great video, thank you for your efforts. Is it possible to get your slides?
@MeerkatStatistics A year ago
Thanks. The slides are offered on my website meerkatstatistics.com/courses/variational-inference-in-r/lessons/variational-auto-encoder-theory/ for members. Please consider subscribing to also support this channel.
@evaggelosantypas5139 A year ago
@MeerkatStatistics ok thnx
@marcospiotto9755 4 months ago
What is the difference between denoting p_\theta(x|z) vs p(x|z, \theta)?
@MeerkatStatistics 4 months ago
I think subscript theta is just the standard way of denoting that we are optimizing theta, i.e. that theta is being changed, while "conditioned on" theta is usually used when the thetas are given. Also note that the subscript theta refers to the NN parameters, while often the "conditioned on" notation refers to distributional parameters. I don't think these are rules set in stone, though, and I'm not an expert on notation. As long as you understand what's going on - that's the important part.
@marcospiotto9755 4 months ago
@MeerkatStatistics got it, thanks!
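For reference, here is how this subscript convention appears in the standard VAE objective (the ELBO from Kingma & Welling): \theta (decoder) and \phi (encoder) are subscripted because they are the parameters being optimized, while random variables like z and x sit behind conditioning bars.

```latex
% VAE objective (ELBO), Kingma & Welling notation:
% theta = decoder/NN parameters, phi = encoder parameters (subscripted
% because they are optimized); z and x are conditioned on with a bar.
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```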
@stazizov 5 months ago
Could you please tell me if there is a mistake in the notation @8:26? Should z_{i} be z_{l}?
@MeerkatStatistics 5 months ago
Hey, yes of course. Sorry for the typo.
@stazizov 5 months ago
@MeerkatStatistics Thank you so much) Great video!!! 🔥
Variational Inference | Evidence Lower Bound (ELBO) | Intuition & Visualization
25:06
Machine Learning & Simulation
69K views
Simple Explanation of AutoEncoders
10:31
WelcomeAIOverlords
106K views
The Reparameterization Trick
17:35
ML & DL Explained
20K views
Variational Autoencoders
15:05
Arxiv Insights
502K views
Variational Autoencoders | Generative AI Animated
20:09
Deepia
16K views
Evidence Lower Bound (ELBO) - CLEARLY EXPLAINED!
11:33
Kapil Sachdeva
28K views
VQ-VAEs: Neural Discrete Representation Learning | Paper + PyTorch Code Explained
34:38
Aleksa Gordić - The AI Epiphany
45K views
Lecture 21: Variational Autoencoders
1:21:56
Carnegie Mellon University Deep Learning
13K views
Autoencoder In PyTorch - Theory & Implementation
30:00
Patrick Loeber
69K views
The Most Important Algorithm in Machine Learning
40:08
Artem Kirsanov
449K views
How AI 'Understands' Images (CLIP) - Computerphile
18:05
Computerphile
203K views