Variational Autoencoders Theory Explained | Generative AI Course

6,418 views

Datafuse Analytics

1 day ago

Comments: 18
@jayeshkurdekar126 · 8 months ago
The main takeaway is sensitising the decoder to the joint distribution... cool. Great video, thanks!
@datafuseanalytics · 8 months ago
Thanks a lot brother. I am very happy that you liked it. 👍
@sekhardhana3453 · 11 months ago
Thanks for the video bro, much appreciated
@datafuseanalytics · 11 months ago
Hey Sekhar, thanks a lot. Do share with other data science enthusiasts... 😃
@asheeshmathur · 9 months ago
Good video clarifying VAEs.
@datafuseanalytics · 9 months ago
Thanks a lot, Asheesh. Glad it helped you.
@datafuseanalytics · 11 months ago
Hello AI enthusiasts! In this video I tried to simplify VAEs for you. Do tell me your feedback.
0:00 Introduction to Variational Autoencoders
0:20 Problems with Autoencoders
1:39 Variational Autoencoders to the RESCUE
2:41 What are the mean vector (μ) and standard deviation (σ)?
3:09 VAE Model Architecture
4:43 How does this change in the encoder of the VAE help?
6:27 What we expect vs. what we get from a VAE
6:41 Let's welcome a new LOSS function
7:13 Why we need both the Reconstruction and KL Loss
8:20 THE REPARAMETERIZATION TRICK
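For readers following the chapter on the loss function: the two terms mentioned there (reconstruction loss plus KL divergence) can be sketched numerically as below. This is an illustrative NumPy sketch, not code from the video; the function name and the choice of mean squared error for the reconstruction term are assumptions (binary cross-entropy is also common).

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term: how well the decoder rebuilt the input
    # (mean squared error here as one common choice).
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence between N(mu, sigma^2) and the standard normal N(0, I),
    # in closed form: -1/2 * sum(1 + log sigma^2 - mu^2 - sigma^2).
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl

x = np.array([0.2, 0.8])          # original input
x_recon = np.array([0.25, 0.75])  # decoder output
mu = np.zeros(2)                  # encoder mean
log_var = np.zeros(2)             # encoder log-variance
# With mu = 0 and log_var = 0 the KL term is exactly 0 (the latent
# distribution already matches N(0, I)), so only reconstruction error remains.
loss = vae_loss(x, x_recon, mu, log_var)
```

The KL term pulls each latent dimension toward the standard normal, which is what keeps the latent space smooth and sampleable; the reconstruction term alone would let the encoder collapse back to an ordinary autoencoder.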
@theadchannel1168 · 8 months ago
Really good explanation.
@datafuseanalytics · 8 months ago
Thank you. I am glad that you liked the video.
@asheeshmathur · 9 months ago
As usual, the best description of VAEs. At 04:26 you said it produces a mean and variance (its log) for each dimension. Does that refer to each channel of the image? Also, what is the need for z_sigma and epsilon? Are the mean and variance not enough?
@datafuseanalytics · 9 months ago
Hello. In a Variational Autoencoder (VAE), the mean and variance define the parameters of a distribution in the latent space. However, z_sigma and ϵ (random noise) are essential for the reparameterization trick. This trick enables the network to sample latent representations by decoupling the sampling process from the network's parameters during training. z_sigma scales ϵ to match the target distribution N(μ, σ²), making the sampling process differentiable and allowing the network to learn meaningful latent representations while reconstructing the input data.
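To make the trick concrete, here is a minimal NumPy sketch (variable names are illustrative, not taken from the video) of how μ, σ, and ε combine into a differentiable sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the encoder outputs, for one input, a mean vector and the
# log-variance of the latent distribution (log-variance is commonly
# used for numerical stability).
mu = np.array([0.5, -1.0])       # mean vector μ
log_var = np.array([0.0, -2.0])  # log σ² for each latent dimension

# Reparameterization trick: instead of sampling z ~ N(μ, σ²) directly
# (which is not differentiable w.r.t. μ and σ), sample ε ~ N(0, I) and
# compute z = μ + σ · ε. Gradients can then flow through μ and σ.
sigma = np.exp(0.5 * log_var)            # σ = exp(½ · log σ²)
epsilon = rng.standard_normal(mu.shape)  # ε ~ N(0, I): the only randomness
z = mu + sigma * epsilon

# z is a sample from N(μ, σ²), expressed as a deterministic function
# of (μ, σ) plus independent noise ε.
```

Because the randomness lives entirely in ε, backpropagation treats z as an ordinary differentiable function of the encoder's outputs, which is exactly why the trick makes training possible.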
@Zyx2003 · 9 months ago
Your video is wonderful! You saved my life, thanks! 😀
@datafuseanalytics · 9 months ago
Hey, thanks a lot. I am glad that this video was helpful. 👍 😃
@pvtgcn8752 · 7 months ago
@datafuseanalytics · 6 months ago
Thanks a lot ❤️
@flakky626 · 6 months ago
Go into more depth lol, VAEs are not something you can just learn from a 9-minute video.. Can you recommend the best resource to learn VAEs from, please?
@mthornit · 6 months ago
Jakub Tomczak's "Deep Generative Modeling" is pretty good. Also Umar Jamil's VAE video.
@datafuseanalytics · 6 months ago
Hello, if you want to learn in depth, I would recommend this beautiful book: "Generative Deep Learning" by David Foster. (www.oreilly.com/library/view/generative-deep-learning/9781492041931/)
Variational Autoencoders
15:05
Arxiv Insights
500K views
Variational Autoencoders | Generative AI Animated
20:09
Deepia
13K views
The Reparameterization Trick
17:35
ML & DL Explained
20K views
Generative AI in a Nutshell - how to survive and thrive in the age of AI
17:57
VQ-VAEs: Neural Discrete Representation Learning | Paper + PyTorch Code Explained
34:38
Aleksa Gordić - The AI Epiphany
45K views
Variational Autoencoders - EXPLAINED!
17:36
CodeEmporium
138K views
Denoising and Variational Autoencoders
31:46
Serrano.Academy
24K views