A quick annotation of chapters after first viewing:
00:00:00 - Summary
00:01:00 - GANs
00:17:10 - How do Humans and Animals Learn Quickly
00:28:05 - Self-Supervised Learning
00:32:00 - Sparse Coding / Sparse Modeling
01:07:45 - Regularization Through Temporal Consistency
01:12:05 - Variational AE
@alfcnz 3 years ago
Thanks. I haven't had the chance to create the chapter markers yet. I'll do it next week, perhaps.
@ShihgianLee 3 years ago
Thank you, Alf, for uploading the new lecture! I finished the 2020 lectures and started reviewing the 2021 lectures. I find a different take helps me understand the topics better!
@alfcnz 3 years ago
Yay! 🥳🥳🥳
@ShihgianLee 3 years ago
@@alfcnz Hi Alf, at 57:27 Yann mentioned there is a dataset that the NYU students can use for their SSL project. I was wondering if it is possible to release it to students outside of NYU so that we can try it out as well? 🤔
@alfcnz 3 years ago
It's just a public dataset we've reduced in size (image size and number of images). You can use any publicly available dataset to run your experiments.
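In case anyone wants to build a similar reduced dataset themselves, here is a minimal sketch (assuming torchvision; STL-10 is an arbitrary choice of public dataset, not necessarily the one used at NYU, and all sizes are made up):

```python
# Shrink a public dataset in both image size and number of images
# (illustrative only).
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(32),   # reduce image size
    transforms.ToTensor(),
])

full = datasets.STL10(root="./data", split="train",
                      download=True, transform=transform)

# Reduce the number of images by keeping a fixed random subset.
g = torch.Generator().manual_seed(0)
idx = torch.randperm(len(full), generator=g)[:1000]
small = Subset(full, idx.tolist())
```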
An interesting question about the variational approximation: what's inside the "log" is an average (an expectation). Expectations can be approximated by sampling from the distribution, in this case from q. So why do we need a bound? Why can't we just approximate the integral inside the log by sampling, and then take the log?
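For concreteness, here is a toy numerical check of the issue (my own sketch, not from the lecture; the model and all names are made up for illustration). By Jensen's inequality, E_q[log w] ≤ log E_q[w], so a Monte Carlo average placed inside the log gives a biased (downward) estimate of log p(x) at small sample sizes, whereas a single sample of log w is already an unbiased estimate of the bound itself:

```python
# Toy model: z ~ N(0,1), x|z ~ N(z,1), so the marginal is p(x) = N(x; 0, 2)
# and log p(x) is known exactly. We take q(z) = p(z), the prior.
import numpy as np

rng = np.random.default_rng(0)
x = 1.0
true_log_px = -0.5 * np.log(2 * np.pi * 2.0) - x**2 / (2 * 2.0)

def log_w(z):
    # log importance weight: log p(x,z) - log q(z); with q = prior,
    # the prior terms cancel and only log p(x|z) remains.
    return -0.5 * np.log(2 * np.pi) - (x - z) ** 2 / 2

for n in [1, 10, 1000]:
    z = rng.standard_normal((10_000, n))         # 10k independent estimates
    est = np.log(np.exp(log_w(z)).mean(axis=1))  # log of an n-sample average
    print(f"n={n:5d}  mean estimate = {est.mean():+.4f}"
          f"  true log p(x) = {true_log_px:+.4f}")
```

With n = 1 the average printed estimate is exactly the ELBO, a strict lower bound; the bias only vanishes as n grows. That, roughly, is why one optimizes the bound instead of naively sampling inside the log.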
@prof_shixo 3 years ago
Thanks for this very informative lecture. Great effort and it is very much appreciated.
@alfcnz 3 years ago
💪🏻💪🏻💪🏻
@petrdvoracek670 1 year ago
Hello, thank you for sharing such insightful material! Yann frequently points out that pretraining an image classification model on an unsupervised task using GANs doesn't yield the best results (around the 14:15 mark). Could you recommend any scholarly articles that delve into this subject, particularly ones that compare the effectiveness of pretraining with GANs versus other methods, like the Siamese training scheme? Thank you!
@my_master55 2 years ago
If this way of making features (58:55, 1:12:06) is so cool and more "natural" (roughly the way the brain works with visual features), why wasn't research turned in that direction starting from 2010, when it was proposed? 🤔 I suspect there are some limitations Yann didn't mention? Or is the reason that the topic is still more complex than the usual convolutions? Thanks for the vid, Alfredo and Yann 🤗
@bmahlbrand 3 years ago
Suppose you take the GAN example and make it conditional. Do you sample the noise tensors with the same dimensions as before and concatenate (or otherwise condition the model on) a real condition tensor, or do you sample across the channels of the condition as well?
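For what it's worth, here is a minimal sketch of the usual convention (my own illustration in PyTorch, not from the lecture; all shapes are made up). The noise keeps the same dimensions as in the unconditional case, while the condition is a real tensor taken from the data and concatenated along the channel axis, never sampled:

```python
# Conditional-GAN generator input via channel concatenation (illustrative).
import torch

batch, z_ch, n_classes, h, w = 8, 100, 10, 4, 4

z = torch.randn(batch, z_ch, h, w)               # noise: same dims as before
labels = torch.randint(0, n_classes, (batch,))   # real conditions from data

# Broadcast one-hot labels to spatial maps, then concat along channels.
cond = torch.nn.functional.one_hot(labels, n_classes).float()
cond = cond[:, :, None, None].expand(-1, -1, h, w)

gen_input = torch.cat([z, cond], dim=1)  # (batch, z_ch + n_classes, h, w)
```

Other conditioning schemes (projection, FiLM, an embedding concatenated to a flat z) exist, but in all of them the condition comes from the data rather than being sampled with the noise.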
@jadtawil6143 3 years ago
At 1:11:40, how do you know which parts of z to allow to vary, and which not to, exactly? How do you know which parts represent the "objects", and which parts represent the things that are changing, like the locations of the objects?
@alfcnz 3 years ago
Hi Jad, that's a good question! You don't 🤷🏼‍♂️ If you add more inductive bias (enforce partial invariance and partial equivariance of the representation), learning will determine which part of the hidden representation represents _what_ and which _where_. Yann has a few papers on this topic. You should be able to find them online.
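To make the inductive-bias idea a bit more concrete, here is a toy sketch (my own illustration, not Yann's exact formulation; the split point and loss are arbitrary). The latent is chunked into a "what" part penalized for changing between temporally adjacent frames and a "where" part left free to vary:

```python
# Split a latent into an invariant "what" chunk and a free "where" chunk
# (illustrative only).
import torch

def split_latent(z, n_what):
    """z: (batch, d). Return the (what, where) chunks of the latent."""
    return z[:, :n_what], z[:, n_what:]

def temporal_consistency_loss(z_t, z_t1, n_what):
    what_t, _ = split_latent(z_t, n_what)
    what_t1, _ = split_latent(z_t1, n_what)
    # Only the "what" chunk is penalized for changing across frames;
    # the "where" chunk may track, e.g., object locations.
    return ((what_t - what_t1) ** 2).mean()
```

Training with such a loss is what lets learning sort content into the invariant chunk and pose/location into the free one.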
@jadtawil6143 3 years ago
@@alfcnz Thank you, Alfredo, and lots of gratitude for this great series.
@НиколайНовичков-е1э 3 years ago
Thank you, Alfredo :) This video is very helpful for me.
@alfcnz 3 years ago
🥳🥳🥳
@reinerwilhelms-tricarico344 9 months ago
Looks a bit like a course on alchemy - but I still feel I learned a lot, especially great tricks and acronyms. The big picture is still a bit in the dark, but I'm getting there. ;-)
@alfcnz 9 months ago
Hahaha 😅😅😅
@khoaguin 3 years ago
Thank you very much, Alfredo!
@alfcnz 3 years ago
You're very welcome ☺️☺️☺️
@bmahlbrand 3 years ago
Another question: is there a corresponding practicum for the sparse coding portion (LISTA in particular)?
@alfcnz 3 years ago
No. I mostly failed in my only attempt to train a sparse AE, even with target prop. I'm open to supervising anyone interested in giving it a try, though. Feel free to reach out on Discord.
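For anyone who would like to try anyway, here is a minimal LISTA-style encoder sketch (my own toy version in PyTorch, following the structure of Gregor & LeCun's 2010 paper "Learning Fast Approximations of Sparse Coding"; the sizes and step count are arbitrary):

```python
# LISTA: a small, fixed number of unrolled ISTA steps with learned
# encoder (We) and lateral (S) matrices; soft-thresholding keeps the
# code sparse at every step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LISTA(nn.Module):
    def __init__(self, input_dim, code_dim, n_steps=3, lam=0.1):
        super().__init__()
        self.We = nn.Linear(input_dim, code_dim, bias=False)  # encoder
        self.S = nn.Linear(code_dim, code_dim, bias=False)    # lateral
        self.n_steps = n_steps
        self.lam = lam

    def forward(self, x):
        b = self.We(x)
        z = torch.zeros_like(b)
        for _ in range(self.n_steps):
            z = F.softshrink(b + self.S(z), self.lam)
        return z
```

Turning this into a sparse AE additionally needs a decoder and a reconstruction-plus-sparsity objective, which is the part that is hard to get to converge.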