Few-Shot Learning (3/3): Pretraining + Fine-tuning

30,899 views

Shusen Wang

A day ago

Comments: 50
@ammarulhassan5851 • 3 years ago
What an amazing series on few-shot learning! I've seen some papers, such as FUNIT, that discuss few-shot image generation, but this is the first time I've looked into few-shot image classification. I honestly believe there is no better explanation than this series. I've just subscribed and look forward to seeing more series like this. Many thanks.
@zain-ul-abideenbaggera9504 • 1 month ago
You are very good at explaining concepts. Absolutely great work. Please expand your videos to cover more tough concepts.
@MicheleMaestrini • 9 months ago
Thank you so much for this series of lectures and slides. I am doing a thesis on few-shot learning and this has really helped me understand the fundamentals of this algorithm.
@hjiang3456 • 3 years ago
For sure the best intro to few-shot learning one can find on KZbin. Thank you for the great content. Hope to see more content like this in English.
@adityasrivastava8903 • 4 years ago
I want to seriously thank you for making videos on few-shot learning. Please try making some videos on few-shot generative modelling if possible.
@marooncabbagemaroon6532 • 3 years ago
One of the most explicit few-shot learning lessons!
@jcrobin1991 • 3 years ago
Thank you for the amazing lecture series! I've learnt so much from them. Just one suggestion: it would be very nice if you could point us to some simple code examples that apply the fine-tuning techniques mentioned in the lecture, which we could play around with to get a better understanding of those concepts. Thanks again!
@gabiryoussef9194 • 4 years ago
Many thanks for your great work. Please extend to other machine-learning-related topics.
@impulse1712 • 6 months ago
What a detailed explanation. I loved the way you explain things, thank you very much, sir.
@dnnsdd5418 • 3 years ago
Hi! Thank you for a great video, a very interesting topic! I have some questions. When you say that we could train the "pretrained CNN" using the Siamese network, what is meant by that? Isn't the Siamese network made for embedding and then evaluating? Why would we need to pretrain another CNN using this, when the SNN is already doing it? Or is the "pretrained CNN" another name for the two "twin" CNN models used in the SNN? Thanks in advance!
@alphonseinbaraj7602 • 4 years ago
Really wonderful and great explanation. I used to study from books to learn, but here all the references are available. Will you please give some tips and steps for transfer learning techniques, and a few real-time projects as well? That is my request. Thanks.
@norman9174 • 1 year ago
I am from India. Such an amazing lecture; it blows my mind how everything works. Now I understand it's not something very high-class... it's just a bunch of vectors that we have to deal with.
@banalasaritha570 • 2 years ago
Amazing explanation... bow to you 👏
@stracci_5698 • 6 months ago
Aren't the Siamese networks performing fine-tuning, since the model weights are learned to perform the task?
@fionamukimba1956 • 3 years ago
This is a great lecture, prof. Please do a review lecture on Siamese networks, trends, and areas of application. I hope my request and others in the comments will be attended to.
@lchunleo • 2 years ago
Can few-shot classification replace supervised classification even if there is data available?
@8eck • 2 years ago
What if there are 10,000 classes and we need to predict which class a query belongs to? Would we have to create a matrix of all 10,000 classes the same way as you have shown in your video?
@sheikhshafayat6984 • 2 years ago
This video is really, really good; I wish you would make more like this!
@MrSupermonkeyman34 • 3 years ago
Does anyone know what the difference in accuracy is if you train the network as a Siamese network compared to training it in the standard way?
@RyanMcCoppin • 1 year ago
Dude, you are a boss teacher. Thanks for sharing.
@AbhishekSinghSambyal • 6 months ago
Which app do you use to make presentations? How do you hide some images/arrows in the slides like an animation? Thanks.
@stewartmuchuchuti20 • 1 year ago
Awesome. Well explained. Well simplified.
@chanramouliseshadri512 • 2 years ago
Thank you Shusen. Great explanation
@AjinkyaGorad • 11 months ago
Softmax associates while learning, and identifies during inference.
@santanubanerjee5479 • 4 months ago
What does it mean when the gradient propagates back to the CNN as well? What is changed in the CNN?
@santanubanerjee5479 • 4 months ago
I think I need to take another look at the CNN parameters!
@8eck • 2 years ago
Can we represent the 1,000 classes by the mean vectors of their 1,000 images? So 1,000 mean vectors for 1,000 classes, with 1,000 images per class.
@EranM • 2 years ago
Yes, you can.
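A rough sketch of that idea (my own illustration, not from the lecture): keep one mean embedding per class and classify a query by cosine similarity to the class means, so only one vector per class needs to be stored, however many classes there are. All sizes below are made up.

import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

num_classes, shots, dim = 1000, 5, 128           # toy sizes, chosen arbitrarily
rng = np.random.default_rng(0)

# embeddings of the support images, shape (num_classes, shots, dim);
# in practice these come from the pretrained CNN
support_emb = rng.normal(size=(num_classes, shots, dim))

class_means = l2_normalize(support_emb.mean(axis=1))   # one mean vector per class

query_emb = l2_normalize(rng.normal(size=dim))          # embedding of one query image
scores = class_means @ query_emb                        # cosine similarity to each class
predicted_class = int(np.argmax(scores))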
@jasonwang9990 • 2 years ago
Amazing tutorials! Absolutely great job!
@t.pranav2834 • 3 years ago
Great explanation. Thanks for this series.
@nacho7953 • 3 years ago
Do you have any code examples?
@impactguide • 2 years ago
Thanks for this great lecture! I really like having a "real-world" example before going into the mathematical details. I was wondering something, though... Is taking the mean of the support-set vectors always justified or a good idea, or is there some possible generalization? As an example, for some bird species the males are brightly colored, while the females are more plainly colored. If I have both a female and a male bird in my support set, the average of these two might be a "strange" vector which is relatively distant from either a male or a female bird. I suspect the discussed method would probably still work, as far as classifying birds goes, and of course you could work around this by having a female_bird and a male_bird class in the support set and adding the p_j values up. But on the other hand, they are still examples of the same "thing", i.e. a bird; it's just that the "thing" in question has several forms. Is there some smart way of approaching such a problem?
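A hypothetical sketch of the workaround this comment itself suggests (not from the video; all names and numbers are made up): keep several prototypes for a multi-modal class, take the softmax over all prototypes, then add up the p_j values that belong to the same underlying class.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

dim = 64
rng = np.random.default_rng(1)

# several prototypes may map to the same class label (e.g. male and female birds)
prototypes = {
    "bird/male":   rng.normal(size=dim),
    "bird/female": rng.normal(size=dim),
    "cat":         rng.normal(size=dim),
}
class_of = {"bird/male": "bird", "bird/female": "bird", "cat": "cat"}

P = np.stack([v / np.linalg.norm(v) for v in prototypes.values()])
query = rng.normal(size=dim)
query /= np.linalg.norm(query)

p = softmax(P @ query)                     # one probability per prototype
class_prob = {}
for name, prob in zip(prototypes, p):      # sum the p_j of prototypes sharing a class
    cls = class_of[name]
    class_prob[cls] = class_prob.get(cls, 0.0) + float(prob)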
@haoyin3366 • 4 years ago
Thank you very much!! I'm curious: when fine-tuning, does the newly built classifier affect the embedding through W, or through the weights of the CNN? And to implement this, do we build two networks? Thanks!
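On that question, a minimal PyTorch sketch (my own, with made-up sizes) of the fine-tuning setup: a single network, and one optimizer that holds both the new classifier (W, b) and the pretrained CNN, so the cross-entropy gradient can change both; freezing the CNN simply means leaving its parameters out of the optimizer.

import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, num_classes = 128, 5

# stand-in for the pretrained embedding network f(x)
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, embed_dim),
)
classifier = nn.Linear(embed_dim, num_classes)   # the new softmax classifier: W, b

# both parameter groups in one optimizer => updates reach W, b AND the CNN weights
optimizer = torch.optim.Adam(
    list(cnn.parameters()) + list(classifier.parameters()), lr=1e-4
)

support_x = torch.randn(10, 3, 32, 32)                       # toy support set: 5 classes x 2 shots
support_y = torch.arange(num_classes).repeat_interleave(2)   # [0,0,1,1,...,4,4]

for _ in range(20):
    logits = classifier(cnn(support_x))
    loss = F.cross_entropy(logits, support_y)
    optimizer.zero_grad()
    loss.backward()                                  # gradient flows back into the CNN too
    optimizer.step()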
@thesisTk • 1 year ago
Great series!
@JuanMiguelMoralesOliva • 3 years ago
This excellent and instructive explanation has been really useful to me, thank you very much!!
@woddenhorse • 3 years ago
Amazing Playlist 🔥🔥
@amoldumrewal • 3 years ago
Hey, really nice series. Great work, man! I have one question: since we are now using cosine similarity, the range of the inputs to the softmax is [-1, 1]. This might limit the maximum probability for the correct class, as it can only go up to ~89% in the best-case scenario. Are you aware of a way we can make the probability for a 100%-sure prediction equal to ~1.0?
@EranM • 2 years ago
take absolute value of cosine similarity
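For what it's worth, a tiny numeric sketch of another common workaround (not mentioned in the lecture): multiply the cosine similarities by a scale factor, possibly learnable, before the softmax, so a confident match can approach probability 1. The numbers below are made up.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

cos_sims = np.array([0.95, 0.10, -0.20])    # query vs. three class prototypes

print(softmax(cos_sims))                    # top class gets only ~0.57
print(softmax(10.0 * cos_sims))             # with scale 10 the top class gets ~0.9998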
@essuanlive • 3 years ago
Thank you so much for a wonderful explanation.
@zhalehmanbari6172 • 2 years ago
Fantastic 🌸
@williamberriosrojas595 • 3 years ago
Great videos!! Thanks a lot :)
@felixschmid7849 • 3 years ago
Great explanations! Thanks!
@sonninh8987 • 3 years ago
Great explanation
@himalayasinghsheoran1255 • 4 years ago
Great explanation.
@serviofernandolimareina5365 • 2 years ago
Excellent!
@feidu11 • 2 years ago
Many thanks
@EranM • 2 years ago
Has anyone here implemented this?
@Amir-tg9nf • 3 years ago
Thanks a lot
@EranM • 2 years ago
Sorry, but the fine-tuning approach just degraded my model's accuracy.