08L - Self-supervised learning and variational inference

9,409 views

Alfredo Canziani (冷在)

A day ago

Comments: 29
@dialloibu 3 years ago
A quick annotation of chapters after first viewing:
00:00:00 - Summary
00:01:00 - GANs
00:17:10 - How do Humans and Animals learn quickly
00:28:05 - Self-Supervised Learning
00:32:00 - Sparse Coding / Sparse Modeling
01:07:45 - Regularization Through Temporal Consistency
01:12:05 - Variational AE
@alfcnz 3 years ago
Thanks. I haven't had the chance to create the chapter markers yet. I'll do it next week, perhaps.
@ShihgianLee 3 years ago
Thank you, Alf, for uploading the new lecture! I finished the 2020 lectures and started reviewing the 2021 lectures. I find that a different take helps me understand the topics better!
@alfcnz 3 years ago
Yay! 🥳🥳🥳
@ShihgianLee 3 years ago
@alfcnz Hi Alf, at 57:27, Yann mentioned there are datasets that the NYU students can use for their SSL project. I was wondering if it is possible to release those to students outside of NYU so that we can try them out as well? 🤔
@alfcnz 3 years ago
It's just a public data set we've reduced in size (image size and number of images). You can get any publicly available data set to run your experiments.
@anondoggo 2 years ago
Timestamps:
00:00:45 - GANs revisited
00:17:07 - Self-supervised learning: a broader purpose
00:31:59 - Sparse modeling
00:43:25 - Amortized inference
00:51:21 - Convolutional sparse modeling (with group sparsity)
00:55:12 - Discriminant recurrent sparse AE
00:57:26 - Other self-supervised learning techniques
00:58:45 - Group sparsity
01:07:47 - Regularization through temporal consistency
01:12:09 - VAE: intuitive interpretation
01:26:13 - VAE: probabilistic variational approximation-based interpretation
@alfcnz 1 year ago
Thanks! ❤️
@alexsht2 1 year ago
An interesting question about the variational approximation: what's inside the "log" is an average (an expectation). Expectations can be approximated by sampling from the distribution, in this case from q. So why do we need a bound? Why can't we just approximate the integral inside the log by sampling, and then take the log?
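(A quick numeric way to see the issue, under a toy 1-D Gaussian model that is entirely my own assumption: by Jensen's inequality, the log of a sample average is a downward-biased estimator of the log of the expectation, and the bias and variance blow up when q matches the posterior poorly. Sampling inside the log therefore gives a stochastic lower bound rather than an unbiased estimate, which is exactly the role the ELBO formalizes.)

# Toy check, assuming a made-up model where everything is known in closed
# form: p(z) = N(0, 1), p(x|z) = N(z, 1), hence p(x) = N(0, 2).
import numpy as np

rng = np.random.default_rng(0)
x = 1.5  # a fixed "observed" data point

def log_w(z, mu_q, sig_q):
    # log importance weight: log p(x, z) - log q(z)
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))
    log_pxz = -0.5 * ((x - z)**2 + np.log(2 * np.pi))
    log_qz = -0.5 * (((z - mu_q) / sig_q)**2 + np.log(2 * np.pi * sig_q**2))
    return log_pz + log_pxz - log_qz

true_log_px = -0.5 * (x**2 / 2 + np.log(2 * np.pi * 2))  # log N(x; 0, 2)

mu_q, sig_q = 0.0, 1.0  # a deliberately crude q (here: the prior)
for n in [1, 10, 1000]:
    z = rng.normal(mu_q, sig_q, size=(10_000, n))
    # plug-in estimator: log of a sample mean of the importance weights
    est = np.log(np.exp(log_w(z, mu_q, sig_q)).mean(axis=1))
    print(f"n={n:5d}  mean estimate = {est.mean():+.4f}  true log p(x) = {true_log_px:+.4f}")
# n = 1 reproduces the ELBO on average; the gap shrinks as n grows
# (these are the importance-weighted, IWAE-style, bounds).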
@prof_shixo 3 years ago
Thanks for this very informative lecture. Great effort and it is very much appreciated.
@alfcnz 3 years ago
💪🏻💪🏻💪🏻
@petrdvoracek670 1 year ago
Hello, thank you for sharing such insightful material! Yann frequently points out that pretraining an image classification model on an unsupervised task using GANs doesn't yield the best results (around the 14:15 mark). Could you recommend any scholarly articles that delve into this subject, particularly ones that compare the effectiveness of pretraining with GANs versus other methods, like the Siamese training scheme? Thank you!
@my_master55 2 years ago
If this way of making features (58:55, 1:12:06) is so cool and more "natural" (roughly the way the brain works with visual features), why wasn't research turned in that direction starting from 2010, when it was proposed? 🤔 I suspect there are some limitations Yann didn't mention? Or is the reason that the topic is still more complex than the usual convolutions? Thanks for the vid, Alfredo and Yann 🤗
@bmahlbrand 3 years ago
Suppose you take the GAN example and make it conditional: do you sample the noise tensors with the same dimensions as before and concatenate (or otherwise condition the model on) a real condition tensor, or do you sample across the channels of the condition as well?
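(For what it's worth, the usual recipe, in the spirit of Mirza & Osindero's conditional GAN, is the former: z keeps its original dimensionality and distribution, while the condition is given rather than sampled, and gets embedded and concatenated. A minimal PyTorch sketch; every size and module choice below is an illustrative assumption.)

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    # Condition by concatenation: z stays random, y is given by the data.
    def __init__(self, z_dim=100, n_classes=10, emb_dim=16, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, emb_dim)  # learned condition embedding
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # z: (B, z_dim) sampled noise; y: (B,) integer labels, NOT sampled
        return self.net(torch.cat([z, self.embed(y)], dim=1))

G = ConditionalGenerator()
z = torch.randn(8, 100)         # noise sampled exactly as in the unconditional case
y = torch.randint(0, 10, (8,))  # condition comes from the real data / the user
x_fake = G(z, y)                # (8, 784)

For image-shaped conditions the same idea applies along the channel axis: project or tile the condition to the feature map's spatial size and concatenate it, still drawing only z from the noise distribution.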
@jadtawil6143 3 years ago
At 1:11:40, how do you know which parts of z to allow to vary and which not to, exactly? How do you know which parts represent the "objects", and which parts represent the things that are changing, like the location of the objects?
@alfcnz 3 years ago
Hi Jad, that's a good question! You don't 🤷🏼‍♂️ If you add more inductive bias (enforce partial invariance and partial equivariance of the representation), learning will determine which part of the hidden representation represents _what_ and which _where_. Yann has a few papers on this topic. You should be able to find them online.
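(A crude sketch of such an inductive bias, purely illustrative and not taken from any specific paper: split the code into a "what" half that is pushed to stay constant across consecutive frames and a "where" half that is pushed to keep changing. The encoder, split sizes, and hinge margin are all made-up choices.)

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(  # any trunk would do; this MLP is a placeholder
    nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64)
)

def split_code(z, n_what=32):
    return z[:, :n_what], z[:, n_what:]  # ("what", "where")

def temporal_loss(x_t, x_tp1):
    what_t, where_t = split_code(encoder(x_t))
    what_t1, where_t1 = split_code(encoder(x_tp1))
    invariance = F.mse_loss(what_t, what_t1)  # "what" should not move between frames
    # push successive "where" codes apart so they, not "what", absorb the motion;
    # a hinge with margin 1 is one crude way to avoid collapse
    equivariance = F.relu(1.0 - (where_t - where_t1).norm(dim=1)).mean()
    return invariance + equivariance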
@jadtawil6143 3 years ago
@alfcnz Thank you, Alfredo, and lots of gratitude for this great series.
@НиколайНовичков-е1э 3 years ago
Thank you, Alfredo :) This video is very helpful for me.
@alfcnz 3 years ago
🥳🥳🥳
@reinerwilhelms-tricarico344 9 months ago
Looks a bit like a course on alchemy, but I still feel I learned a lot, especially great tricks and acronyms. The big picture is still a bit in the dark, but I'm getting there. ;-)
@alfcnz 9 months ago
Hahaha 😅😅😅
@khoaguin 3 years ago
Thank you very much, Alfredo!
@alfcnz 3 years ago
You're very welcome ☺️☺️☺️
@bmahlbrand 3 years ago
Another question: is there a corresponding practicum for the sparse coding portion (LISTA in particular)?
@alfcnz 3 years ago
No. I mostly failed my only attempt to train a sparse AE, even with target prop. I'm open to supervising anyone interested in giving it a try, though. Feel free to reach out on Discord.
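(For anyone looking for a starting point: a minimal PyTorch sketch of LISTA as described by Gregor & LeCun (2010), where ISTA's update z ← shrink(W_e x + S z) is unrolled for a fixed number of steps and W_e, S, and the thresholds are learned. All sizes and the training hint are assumptions, not course material.)

import torch
import torch.nn as nn

def shrink(v, theta):
    # soft-thresholding, the proximal operator of the L1 penalty
    return torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)

class LISTA(nn.Module):
    # Unrolled ISTA with learned parameters (Gregor & LeCun, 2010).
    def __init__(self, x_dim=256, z_dim=512, n_steps=3):
        super().__init__()
        self.We = nn.Linear(x_dim, z_dim, bias=False)  # learned encoder W_e
        self.S = nn.Linear(z_dim, z_dim, bias=False)   # learned "explaining-away" matrix
        self.theta = nn.Parameter(torch.full((z_dim,), 0.1))  # learned thresholds
        self.n_steps = n_steps

    def forward(self, x):
        b = self.We(x)
        z = shrink(b, self.theta)
        for _ in range(self.n_steps):
            z = shrink(b + self.S(z), self.theta)
        return z  # fast approximation of the optimal sparse code

model = LISTA()
z = model(torch.randn(32, 256))  # (32, 512); typically trained by regressing
                                 # toward codes from an exact sparse-coding solver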
@buoyrina9669 2 years ago
I wonder how to get as smart as Yann.
@alfcnz 2 years ago
By gradient descent, of course.
@robinranabhat3125 1 year ago
01:35:00 is beautiful.
@alfcnz 1 year ago
🤩🤩🤩