*DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks (GANs)*
*My takeaways:*
*1. Overview: why are we interested in GANs 0:25*
1.1 GAN advances 4:22
1.2 Learning an implicit model through a two-player game: discriminator and generator 5:28
- Generator 6:38
- Discriminator 8:03
1.3 Training GANs 9:02
1.4 Unconditional and conditional generative models 41:18
*2. Evaluating GANs 43:52*
*3. The GAN Zoo 50:55*
3.1 Image synthesis with GANs: MNIST to ImageNet 51:46
- The original GANs 52:02
- Conditional GANs 53:16
- Laplacian GANs 54:08
- Deep convolutional GANs 57:30
- Spectrally normalised GANs 1:00:20
- Projection discriminator 1:01:54
- Self-attention GANs 1:03:12
- BigGANs 1:04:49
- BigGAN-deep 1:11:24
- LOGAN 1:14:12
- Progressive GANs 1:15:38
- StyleGANs 1:16:58
- Summary: from simple images to large-scale databases of high-resolution images 1:19:23
3.2 GANs for representation learning 1:21:05
- Why GANs?
-- Motivating example 1: semantics in the DCGAN latent space 1:21:28
-- Motivating example 2: unsupervised category discovery with BigGANs 1:22:16
- InfoGANs 1:23:59
- ALI/bidirectional GANs 1:25:54
- BigBiGAN 1:29:28
*3.3 GANs for other modalities and problems 1:33:05*
- Pix2Pix: translating images between two different domains 1:33:18
- CycleGANs: translating images between two different domains 1:34:48
- GANs for audio synthesis: WaveGAN, MelGAN, GAN-TTS 1:36:19
- GANs for video synthesis and prediction: TGANv2, DVD-GAN, TriVD-GAN 1:37:19
- GANs are everywhere 1:39:10
-- Imitation learning: GAIL
-- Image editing: GauGAN
-- Program synthesis: SPIRAL
-- Motion transfer: Everybody Dance Now
-- Domain adaptation: DANN
-- Art: Learning to See
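For reference while following 1.2 and 1.3: the two-player game is the standard GAN minimax objective from Goodfellow et al. (2014), written out here in its usual formulation (not copied from the slides):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator D is trained to tell real samples from generated ones (maximising V), while the generator G is trained to fool D (minimising V).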
@harshvardhangoyal5362 3 years ago
mvp
@leixun 3 years ago
@@harshvardhangoyal5362 You're welcome to check out my research on my channel.
@shivtavker 4 years ago
At 17:48, why does KL(p, p^*) look like that? The divergence would be minimised by making p(x) as low as possible, so p could be a distribution that does very badly on both Gaussians.
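Not from the lecture, but a quick numeric sketch of why minimising KL(p ∥ p*) cannot simply push p(x) low everywhere: p must remain a normalised density, so the minimiser instead concentrates on one mode (the mode-seeking behaviour discussed around 17:48), rather than spreading into regions where p* is near zero:

```python
import numpy as np
from scipy.stats import norm

# Target p*: a mixture of two well-separated Gaussians, as in the lecture's example.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
p_star = 0.5 * norm.pdf(xs, -4, 1) + 0.5 * norm.pdf(xs, 4, 1)

def kl(p, q):
    """Discretised KL(p || q) = sum_x p(x) log(p(x)/q(x)) dx on the grid."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]) * dx)

p_one_mode = norm.pdf(xs, 4, 1)  # model sitting on a single mode
p_broad = norm.pdf(xs, 0, 4)     # model smeared across both modes

print(kl(p_one_mode, p_star))  # ~log 2: covering one mode well is cheap
print(kl(p_broad, p_star))     # much larger: p puts mass where p* is ~0
```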
@CSEAsapannaRakeshRakesh 4 years ago
@10:58 "We only do a few steps of SGD for the discriminator": is that one k-sized step per epoch (iteration)?
@CSEAsapannaRakeshRakesh 4 years ago
@9:17 Why does the binary cross-entropy function have no negative sign in it?
@CSEAsapannaRakeshRakesh 4 years ago
@10:12 Is it because we are "maximizing" D's prediction accuracy, i.e. cost(D) = -cost(G)?
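On the three questions above: a minimal, self-contained PyTorch sketch of the alternating updates discussed around 9:17-10:58 (the toy models and data are my own, not the lecture's code). Note that `binary_cross_entropy_with_logits` already carries the negative sign internally, so minimising the loss maximises the log-likelihood objective, and the discriminator takes only K small gradient steps rather than being trained to convergence:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy architectures and data, just to make the sketch runnable end to end.
latent_dim, data_dim, batch = 16, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))  # raw logit out
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
K = 1  # discriminator steps per generator step: "a few steps of SGD", not convergence

for step in range(1000):
    real = torch.randn(batch, data_dim) + 3.0  # stand-in "real" data: a shifted Gaussian

    # Discriminator: K small gradient steps. BCE-with-logits already contains
    # the negative sign, so minimising this loss is the same as maximising
    # D's classification objective (hence no visible minus sign in the slide).
    for _ in range(K):
        fake = G(torch.randn(batch, latent_dim)).detach()  # no gradients into G
        d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(batch, 1))
                  + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(batch, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

    # Generator: in the pure minimax game cost(G) = -cost(D); in practice the
    # non-saturating variant below is used, maximising log D(G(z)) by
    # minimising BCE against the "real" label.
    g_loss = F.binary_cross_entropy_with_logits(
        D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```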
@kirtipandya4618 3 years ago
Can we access the code exercises?
@agamemnonc 1 year ago
Great lecture, thank you! One small note: I believe the terminology "distance between two probability distributions" is not quite rigorous, since even the KL divergence is not really a distance metric, as it is not symmetric.
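To make the asymmetry concrete, a small snippet (my own illustration) using the closed-form KL divergence between two univariate Gaussians:

```python
import numpy as np

# Closed form: KL(N(mu1, s1^2) || N(mu2, s2^2))
#   = log(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 1/2
def kl_gauss(mu1, s1, mu2, s2):
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(kl_gauss(0, 1, 1, 2))  # KL(N(0,1) || N(1,2)) ~ 0.44
print(kl_gauss(1, 2, 0, 1))  # KL(N(1,2) || N(0,1)) ~ 1.31, not equal
```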
@robertfoertsch 4 years ago
Excellent. Added to my research library; sharing through TheTRUTH Network...
@lukn4100 3 years ago
Great lecture and big thanks to DeepMind for sharing this great content.
@awadelrahman 4 years ago
Apart from the extremely wonderful lecture!!!!! I am always wondering why GAN people have such a similar "talking" style and tone to Goodfellow!! @Jeff :D ... Thanks a lot ;)
@mathavraj9378 4 years ago
Could someone tell me why we call it "latent" noise? Latent means something hidden, right? So what is being hidden in the input noise?
@haejinsong1835 4 years ago
The idea is that the latent noise (the input to the generator) is not an observable variable. People often use "unobservable"/"hidden"/"latent" to refer to variables that are not observed in the dataset. Cf. if we have a collection of images, the images are the observable variables.
@mohitpilkhan7003 4 years ago
It's an amazing overview; loved it very much. Thank you, DeepMind, and love you.
@pervezbhan1708 2 years ago
kzbin.info/www/bejne/qJC0YmWLfsuAoqc
@GeneralKenobi69420 4 years ago
1:31:10 Lol are we just gonna ignore the pic of a woman wearing black latex pants? 👀 (Also do NOT zoom in on that picture in the bottom left... It's like some of the worst nightmare fuel I've ever seen in my life. JFC)
@quosswimblik4489 3 years ago
GANs are cool, but what can you do with CIANs (clown-and-identifier adversarial networks)? You have one AI trying to identify things and another network trying to fool the identifying AI into making a mistake. The clown AI tries to find holes in the identifier's mindset so as to give the identifier a more general fit, and it is for training identification, whereas the GAN is the other way round, training the generator on a specific imitation task.
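What this describes sounds close to adversarial examples and adversarial training rather than any named architecture I know of. A minimal FGSM-style sketch (toy classifier, my own illustration, after Goodfellow et al., 2015) of a "clown" perturbing an input to fool an "identifier":

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "identifier" (a linear classifier); the "clown" is a gradient-based
# attack: perturb the input in the direction that increases the loss.
torch.manual_seed(0)
identifier = nn.Linear(10, 2)
x = torch.randn(1, 10, requires_grad=True)  # a random input
y = torch.tensor([0])                       # its assumed true label

loss = F.cross_entropy(identifier(x), y)
loss.backward()
x_adv = (x + 0.5 * x.grad.sign()).detach()  # the "clown's" fooling attempt

print(identifier(x).argmax().item(), identifier(x_adv).argmax().item())
# Retraining the identifier on such x_adv (adversarial training) is the
# "find holes, then patch them" loop the comment sketches.
```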
@jayanthkumar9637 3 years ago
I just loved her voice
@sanjeevi567 4 years ago
Wonderful, thanks guys... GANs (wow!)
@Daniel-mj8jt 2 years ago
Excellent lecture!
@luksdoc 4 years ago
A wonderful lecture.
@lizgichora6472 3 years ago
Thank you, very interesting work on CycleGAN translating between domains.
@myoneuralnetwork3188 4 years ago
If you'd like a beginner-friendly, easy-to-read guide to GANs and building them with PyTorch, you might find "Make Your First GAN With PyTorch" useful: www.amazon.com/dp/B085RNKXPD. All the code is open source on GitHub: github.com/makeyourownneuralnetwork/gan
@iinarrab19 4 years ago
Great. My only feedback is that she needs to master how to speak effectively, as in when to properly pause and breathe.