L17.4 Variational Autoencoder Loss Function

9,727 views

Sebastian Raschka

3 years ago

Slides: sebastianraschka.com/pdf/lect...
-------
This video is part of my Introduction to Deep Learning course.
Next video: • L17.5 A Variational Au...
The complete playlist: • Intro to Deep Learning...
A handy overview page with links to the materials: sebastianraschka.com/blog/202...
-------
If you want to be notified about future videos, please consider subscribing to my channel: / sebastianraschka

Comments: 14
@desucam7717 · 2 years ago
The best lecture on VAEs I've heard. Thank you very much!!!
@SebastianRaschka · 2 years ago
Thanks so much for the kind words! Feels really good to hear this :)
@satadrudas3675 · 2 years ago
The explanation and reasoning behind the reconstruction loss were simple and concise. Thanks!
@SebastianRaschka · 2 years ago
Glad to hear!
@mustafabuyuk6425 · 2 years ago
Thank you for a very clear explanation.
@SebastianRaschka · 2 years ago
Glad to hear it was useful!
@hophouse8426 · 6 months ago
Thank you for the good lecture! But I have one question: the reconstruction term in the ELBO is an expectation over z ~ q(z|x), but when the MSE loss is used, that expectation does not seem to be taken into account. Is there a reason for that?
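A short note on common practice here: the expectation over z ~ q(z|x) is typically approximated with a single reparameterized sample per training example,

\[
\mathbb{E}_{z \sim q(z \mid x)}\bigl[\log p(x \mid z)\bigr] \approx \log p(x \mid z),
\qquad z = \mu + \sigma \odot \epsilon, \; \epsilon \sim \mathcal{N}(0, I),
\]

so the per-example MSE acts as a one-sample Monte Carlo estimate of the reconstruction term.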
@miguelduqueb7065 · 1 year ago
Thank you for the explanation. At minute 7:54, I think the formula for the squared error loss (the L2 norm) should not contain the square root.
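For reference, the distinction the comment points to, written out:

\[
\|\mathbf{x} - \hat{\mathbf{x}}\|_2 = \sqrt{\textstyle\sum_i (x_i - \hat{x}_i)^2},
\qquad
\|\mathbf{x} - \hat{\mathbf{x}}\|_2^2 = \textstyle\sum_i (x_i - \hat{x}_i)^2,
\]

where the squared error loss corresponds to the second expression, without the square root.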
@jiangshaowen1149 · 1 year ago
Great lecture! I have a naive question: what's the difference between q and p in the encoder and decoder? Do they refer to conditional probabilities? Thanks
@klindatv · 1 year ago
Is it correct to say that with binary cross-entropy the decoder will generate slightly different samples than it would with MSE? So the decoder output is different from the original input?
@SebastianRaschka · 1 year ago
Yes, the loss function is a hyperparameter, and the results will be slightly different.
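To make the point that the loss function is a hyperparameter concrete, here is a minimal PyTorch-style sketch of a VAE loss in which MSE or binary cross-entropy can be swapped in (the names decoder_out, z_mean, and z_log_var are illustrative, not taken from the lecture code):

    import torch
    import torch.nn.functional as F

    def vae_loss(decoder_out, x, z_mean, z_log_var, reconstruction="mse"):
        # Reconstruction term: pixel-wise MSE or binary cross-entropy.
        # BCE assumes the decoder outputs values in [0, 1], e.g. after a sigmoid.
        if reconstruction == "mse":
            rec = F.mse_loss(decoder_out, x, reduction="sum")
        else:
            rec = F.binary_cross_entropy(decoder_out, x, reduction="sum")

        # KL divergence between q(z|x) = N(z_mean, diag(exp(z_log_var)))
        # and the standard normal prior N(0, I), in closed form.
        kl = -0.5 * torch.sum(1 + z_log_var - z_mean.pow(2) - z_log_var.exp())

        return rec + kl

Either choice trains a VAE; the reconstructions and generated samples will simply look slightly different.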
@maulberto3 · 1 year ago
Hi, why is the KL divergence taken between z and a Normal(0, 1) distribution?
@maulberto3 · 1 year ago
I think it's a way to tell the NN to produce nice, "normal" images and not weird ones, since the normal distribution doesn't have heavy tails. So I guess that if I chose, say, the t-distribution, I would let the NN produce weirder images than with the normal distribution...
@SebastianRaschka · 1 year ago
For a normal distribution, we know how to sample from it, and it's easy to work with mathematically (via the log-var trick). But you could also consider other distributions, of course.
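As a rough sketch of what sampling with the log-var trick looks like in code (names are illustrative; the encoder is assumed to output a mean and a log-variance per latent dimension):

    import torch

    def reparameterize(z_mean, z_log_var):
        # Reparameterization: z = mu + sigma * eps with eps ~ N(0, I).
        # exp(0.5 * log_var) converts the log-variance back to a standard deviation.
        eps = torch.randn_like(z_mean)
        return z_mean + torch.exp(0.5 * z_log_var) * eps

Parameterizing the variance on a log scale keeps it positive without any explicit constraint and makes the closed-form KL term against N(0, I) straightforward to compute.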