Variational Auto Encoder (VAE) - Theory

22,596 views

Meerkat Statistics


A day ago

VAEs are a mix of Variational Inference (VI) and autoencoder neural networks. They are used mainly for generating new data. In this video we outline the theory behind the original paper, including a look at regular autoencoders, Variational Inference, and how the two combine to create the VAE.
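As a rough numerical illustration of the objective outlined in the video, here is a minimal sketch of the VAE training loss. This is an assumption-laden toy, not the video's own code: the variable names, the squared-error reconstruction term, and the Gaussian encoder with a standard-normal prior are all illustrative choices.

```python
import numpy as np

# Minimal sketch (illustrative, not from the video): the VAE minimizes the
# negative ELBO, which splits into a reconstruction term and a KL term that
# pulls the Gaussian encoder q(z|x) = N(mu, sigma^2) toward the prior N(0, I).

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def neg_elbo(x, x_recon, mu, log_var):
    """Negative ELBO = reconstruction error + KL regularizer."""
    recon = np.sum((x - x_recon) ** 2)  # Gaussian decoder -> squared error
    return recon + kl_to_standard_normal(mu, log_var)

x = np.array([0.5, -1.0])
x_recon = np.array([0.4, -0.9])
mu = np.array([0.1, 0.2])
log_var = np.zeros(2)  # sigma = 1 in both latent dims
loss = neg_elbo(x, x_recon, mu, log_var)  # ≈ 0.02 reconstruction + 0.025 KL
```

When q(z|x) exactly matches the prior (mu = 0, log_var = 0), the KL term vanishes and only the reconstruction error remains, which is why the KL acts as a regularizer on the latent code.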
Original Paper (Kingma & Welling 2014): arxiv.org/pdf/...
The first and only Variational Inference (VI) course online!
Become a member and get full access to this online course:
meerkatstatist...
** 🎉 Special KZbin 60% Discount on Yearly Plan - valid for the 1st 100 subscribers; Voucher code: First100 🎉 **
“VI in R” Course Outline:
Administration
Intro
Intuition - what is VI?
Notebook - Intuition
Origin, Outline, Context
KL Divergence
KL Introduction
KL - Extra Intuition
Notebook - KL - Exercises
Notebook - KL - Additional Topics
KL vs. Other Metrics
VI vs. ML
VI (using KL) vs. Maximum Likelihood
ELBO & “Mean Field”
ELBO
“Mean Field” Approximation
Coordinate Ascent VI (CAVI)
Functional Derivative & Euler-Lagrange Equation
CAVI - Toy Example
CAVI - Bayesian GMM Example
Notebook - Normal-Gamma Conjugate Prior
Notebook - Bayesian GMM - Unknown Precision
Notebook - Image Denoising (Ising Model)
Exponential Family
CAVI for the Exponential Family
Conjugacy in the Exponential Family
Notebook - Latent Dirichlet Allocation Example
VI vs. EM
Stochastic VI / Advanced VI
SVI - Review
SVI for Exponential Family
Automatic Differentiation VI (ADVI)
Notebook - ADVI Example (using STAN)
Black Box VI (BBVI)
Notebook - BBVI Example
Expectation Propagation
Forward vs. Reverse KL
Expectation Propagation
Variational Auto Encoder
Why become a member?
All video content
Extra material (notebooks)
Access to code and notes
Community Discussion
No Ads
Support the Creator ❤️
VI (restricted) playlist: bit.ly/389QSm1
If you’re looking for statistical consultation, someone to work on interesting projects, or training workshops, visit my website meerkatstatist... or contact me directly at david@meerkatstatistics.com
~~~~~ SUPPORT ~~~~~
Paypal me: paypal.me/Meer...
~~~~~~~~~~~~~~~~~
Intro/Outro Music: Dreamer - by Johny Grimes

Comments: 23
@paedrufernando2351 · 11 months ago
@6:10 VI starts. The rundown was awesome... puts everything into perspective.
@evgeniyazarov4230 · a year ago
Great explanation! The two ways of looking at the loss function are insightful.
@YICHAOCAI · 11 months ago
Fantastic video! This effectively resolved my queries.
@Omsip123 · 2 months ago
Thanks for your efforts, very well explained
@nabinbk1065 · a month ago
great, thank you so much
@123sendodo4 · a year ago
Very clear and useful information!
@stazizov · 7 months ago
Could you please tell me if there is a mistake in the notation? @8:26 z_{i} = z_{l}?
@MeerkatStatistics · 7 months ago
Hey, yes of course. Sorry for the typo.
@stazizov · 7 months ago
@@MeerkatStatistics Thank you so much! Great video!!! 🔥
@minuklee6735 · 8 months ago
Thank you for the awesome video! I have a question about @11:35: I don't clearly understand why g_\theta takes x. Am I correct that it does not take x if g_\theta is a Gaussian, since it would just be g_\theta(\epsilon) = \sigma * \epsilon + \mu (where \sigma and \mu come from \theta)? Again, I appreciate your video a lot!
@MeerkatStatistics · 7 months ago
Although not explicitly denoted, q(z) also depends on the data, which is why g(theta) will usually also depend on x. I didn't want to write q(z|x) as in the paper, because it is not a posterior, but rather a distribution whose parameters you tweak until it reaches the true posterior p(z|x). I have a simple example (for the CAVI algorithm) on my website (for members) meerkatstatistics.com/courses/variational-inference-in-r/lessons/cavi-toy-example/ and also a more elaborate example free on KZbin kzbin.info/www/bejne/bnXdeoOQo79kaM0si=8Un505QqOEtij9XV - in both cases you'll see a q(z) that is a Gaussian, but whose parameters depend on the data x.
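The reparameterization trick discussed in this exchange can be sketched in code. This is an illustrative toy under stated assumptions (the function and variable names are mine, and in a real VAE mu and sigma would be the encoder network's outputs for a given x): drawing eps ~ N(0, I) and computing z = mu + sigma * eps yields the same distribution as sampling N(mu, sigma^2) directly, while keeping z a deterministic, differentiable function of mu and sigma.

```python
import numpy as np

# Illustrative toy of the reparameterization trick (names are assumptions):
# instead of sampling z ~ N(mu, sigma^2) directly, draw parameter-free noise
# eps ~ N(0, I) and set z = mu + sigma * eps. The transform is deterministic
# in (mu, sigma), so gradients can flow through them during training.

rng = np.random.default_rng(0)

def reparameterize(mu, sigma, n_samples, rng):
    eps = rng.standard_normal((n_samples, mu.shape[0]))  # noise, no parameters
    return mu + sigma * eps                              # z ~ N(mu, sigma^2)

mu = np.array([2.0, -1.0])
sigma = np.array([0.5, 0.1])
z = reparameterize(mu, sigma, 100_000, rng)
# Empirically, z has mean ≈ mu and std ≈ sigma in each dimension.
```

The point of moving the randomness into eps is that the Monte Carlo estimate of the ELBO gradient can be differentiated with respect to mu and sigma directly, which is what makes backpropagation through the sampling step possible.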
@tassangherman · 4 months ago
You're awesome!
@shounakdesai4283 · 10 months ago
awesome video.
@evaggelosantypas5139 · a year ago
Hey, great video, thank you for your efforts. Is it possible to get your slides?
@MeerkatStatistics · a year ago
Thanks. The slides are offered on my website meerkatstatistics.com/courses/variational-inference-in-r/lessons/variational-auto-encoder-theory/ for members. Please consider subscribing to also support this channel.
@evaggelosantypas5139 · a year ago
@@MeerkatStatistics ok thnx
@marcospiotto9755 · 6 months ago
What is the difference between denoting p_theta (x|z) vs p(x|z,theta) ?
@MeerkatStatistics · 6 months ago
I think "subscript" theta is just the standard way of denoting that we are optimizing theta, i.e. that theta is being changed, while "conditioned on" theta is usually used when the thetas are given. Also note that the subscript theta refers to the NN parameters, while the "conditioned on" form often refers to distributional parameters. I don't think these are rules set in stone, though, and I'm not an expert on notation. As long as you understand what's going on - that's the important part.
@marcospiotto9755 · 6 months ago
@@MeerkatStatistics got it, thanks!