What an amazing series on few-shot learning! I've seen some papers, such as FUNIT, that discuss few-shot image generation, but this is the first time I've looked into few-shot image classification. I honestly believe there is no better explanation than this series. I've just subscribed and look forward to seeing more series like it. Many thanks!
@zain-ul-abideenbaggera9504 · 1 month ago
You are very good at explaining concepts. Absolutely great work. Please expand your videos to cover more tough concepts.
@MicheleMaestrini · 9 months ago
Thank you so much for this series of lectures and slides. I am doing a thesis on few-shot learning, and this has really helped me understand the fundamentals of these algorithms.
@hjiang3456 · 3 years ago
For sure the best intro to few-shot learning one can find on YouTube. Thank you for the great content. Hope to see more content like this in English.
@adityasrivastava8903 · 4 years ago
I seriously want to thank you for making videos on few-shot learning. Please try making some videos on few-shot generative modelling if possible.
@marooncabbagemaroon6532 · 3 years ago
One of the clearest few-shot learning lessons!
@jcrobin1991 · 3 years ago
Thank you for the amazing lecture series! I've learnt so much from it. Just one suggestion: it would be very nice if you could point us to some simple code examples that apply the fine-tuning techniques mentioned in the lecture, which we could play around with to better understand those concepts. Thanks again!
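There doesn't seem to be an official code release for the series, but here is a minimal PyTorch sketch of the fine-tuning recipe the lecture describes: initialize the classifier weights from the normalized per-class mean embeddings of the support set, then fine-tune with cosine-similarity logits and softmax cross-entropy. `cnn`, the shapes, and the hyperparameters are illustrative assumptions, not the author's actual code.

```python
# Hedged sketch (not the author's code) of the lecture's fine-tuning idea:
# classifier weights start as per-class mean embeddings, bias starts at zero,
# both are then fine-tuned on the support set with cosine-similarity logits.
import torch
import torch.nn.functional as F

def finetune_head(cnn, support_x, support_y, num_classes, steps=100, lr=1e-3):
    """support_x: (N, C, H, W) support images; support_y: (N,) int64 labels."""
    with torch.no_grad():
        feats = F.normalize(cnn(support_x), dim=1)        # (N, d) unit vectors
    # Initialize W with the per-class mean embeddings (assumes every class
    # appears in the support set) and b with zeros.
    W = torch.stack([feats[support_y == j].mean(0) for j in range(num_classes)])
    W = W.clone().requires_grad_(True)                    # (num_classes, d)
    b = torch.zeros(num_classes, requires_grad=True)
    opt = torch.optim.Adam([W, b], lr=lr)
    for _ in range(steps):
        logits = feats @ F.normalize(W, dim=1).T + b      # cosine similarity + bias
        loss = F.cross_entropy(logits, support_y)
        opt.zero_grad(); loss.backward(); opt.step()
    return W, b
```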
@gabiryoussef9194 · 4 years ago
Many thanks for your great work. Please extend it to other machine-learning topics.
@impulse1712 · 6 months ago
What a detailed explanation! Loved the way you explain things. Thank you very much, sir.
@dnnsdd5418 · 3 years ago
Hi! Thank you for a great video, very interesting topic! I have some questions. When you say that we could train the "pretrained CNN" using the Siamese network, what is meant by that? Isn't the Siamese network made for embedding and then evaluating? Why would we need to pretrain another CNN this way, when the Siamese network is already doing it? Or is the "pretrained CNN" just another name for the two "twin" CNN models used in the Siamese network? Thanks in advance!
@alphonseinbaraj7602 · 4 years ago
Really wonderful and great explanation. I used to study from books, but here all the references are available. Could you please share some tips and steps for transfer learning techniques, and a few real-time projects as well? Thanks!
@norman9174 · 1 year ago
I am from India. Such an amazing lecture; it blows my mind how everything works. Now I understand it's not something out of reach... it's just a bunch of vectors we have to deal with.
@banalasaritha570 · 2 years ago
Amazing explanation... bow to you 👏
@stracci_5698 · 6 months ago
Aren't the Siamese networks already performing a kind of fine-tuning, since the model weights are learned to perform the task?
@fionamukimba1956 · 3 years ago
This is a great lecture, prof. Please do a review lecture on Siamese networks, trends, and areas of application. I hope my request and the others in the comments will be attended to.
@lchunleo · 2 years ago
Can few-shot classification replace supervised classification even if there is data available?
@8eck · 2 years ago
What if there are 10,000 classes and we need to predict which class a query belongs to? Would we have to create a matrix of all 10,000 classes the same way as you have shown in your video?
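In principle yes: one normalized mean embedding per class, stacked into a matrix, and classifying a query is a single matrix-vector product. A rough sketch (all names and sizes here are illustrative, not from the video):

```python
# Rough illustrative sketch: with K classes, keep one normalized mean
# embedding per class, stacked into a (K, d) matrix M. Classification of
# an embedded query is then just softmax(M @ query).
import torch
import torch.nn.functional as F

K, d = 10_000, 512                            # e.g. 10,000 classes, 512-dim features
M = F.normalize(torch.randn(K, d), dim=1)     # stand-in for the real class means
query = F.normalize(torch.randn(d), dim=0)    # stand-in for an embedded query image
probs = torch.softmax(M @ query, dim=0)       # (K,) class probabilities
pred = probs.argmax().item()                  # predicted class index
```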
@sheikhshafayat6984 · 2 years ago
This video is really, really good; I wish you'd make more like it!
@MrSupermonkeyman34 · 3 years ago
Does anyone know what the difference in accuracy is if you train the network as a Siamese network compared to training it the standard way?
@RyanMcCoppin · 1 year ago
Dude, you are a boss teacher. Thanks for sharing.
@AbhishekSinghSambyal · 6 months ago
Which app do you use to make presentations? How do you hide some images/arrows in the slides like an animation? Thanks.
@stewartmuchuchuti20 · 1 year ago
Awesome. Well explained. Well simplified.
@chanramouliseshadri512 · 2 years ago
Thank you, Shusen. Great explanation.
@AjinkyaGorad · 11 months ago
Softmax associates during learning and identifies during inference.
@santanubanerjee5479 · 4 months ago
What does it mean when the gradient propagates back to the CNN as well? What is changed in the CNN?
@santanubanerjee5479 · 4 months ago
I think I need to take another look at the CNN parameters!
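One way to see this concretely (a hedged toy sketch, not the lecture's code): "the gradient propagates back to the CNN" means the CNN's own parameters, i.e. its convolution filters and biases, receive gradients and get updated by the optimizer, so the embedding itself shifts. If the CNN is frozen instead, only the classifier head changes. All names below are illustrative.

```python
# Illustrative toy: backprop through head AND feature extractor.
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                    nn.LazyLinear(16))           # toy feature extractor
head = nn.Linear(16, 5)                          # toy 5-way classifier head

x = torch.randn(4, 3, 16, 16)
y = torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(head(cnn(x)), y)
loss.backward()                                  # gradient flows through the head
                                                 # *and* back into the CNN
print(cnn[0].weight.grad.shape)  # torch.Size([8, 3, 3, 3]): the conv filters
# received gradients, so an optimizer step over cnn.parameters() would change
# them (i.e. change the embedding itself), not just the classifier head.
```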
@8eck · 2 years ago
Can we represent 1,000 classes by the mean vectors of their images? So 1,000 mean vectors for 1,000 classes, with 1,000 images per class.
@EranM · 2 years ago
Yes, you can.
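For what it's worth, a minimal sketch of that idea (names are illustrative; this is essentially mean-vector / prototype classification as in the lecture):

```python
# Minimal illustrative sketch: represent each class by the normalized mean
# of its normalized image embeddings; classify new images against these means.
import torch
import torch.nn.functional as F

def class_means(cnn, images_per_class):
    """images_per_class: list of (n_j, C, H, W) tensors, one tensor per class."""
    means = []
    with torch.no_grad():
        for imgs in images_per_class:
            emb = F.normalize(cnn(imgs), dim=1)           # (n_j, d) unit vectors
            means.append(F.normalize(emb.mean(0), dim=0)) # re-normalize the mean
    return torch.stack(means)                             # (num_classes, d)
```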
@jasonwang9990 · 2 years ago
Amazing tutorials! Absolutely great job!
@t.pranav2834 · 3 years ago
Great explanation. Thanks for this series.
@nacho7953 · 3 years ago
Do you have any code examples?
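No official repo seems to be linked, but the pairwise Siamese setup from the earlier lectures could be sketched roughly like this (the architecture and sizes are assumptions, chosen only to make the example self-contained):

```python
# Hedged sketch of pairwise Siamese training: one shared CNN embeds both
# images, a small head scores |f(x1) - f(x2)|, and a binary label says
# whether the pair comes from the same class (1) or different classes (0).
import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, 1)    # scores the embedding difference

    def forward(self, x1, x2):
        z = torch.abs(self.cnn(x1) - self.cnn(x2))   # shared ("twin") weights
        return self.head(z).squeeze(1)               # logit for "same class"

model = Siamese()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
same = torch.randint(0, 2, (8,)).float()             # 1 = same-class pair
loss = nn.functional.binary_cross_entropy_with_logits(model(x1, x2), same)
opt.zero_grad(); loss.backward(); opt.step()
```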
@impactguide · 2 years ago
Thanks for this great lecture! I really like having a "real-world" example before going into the mathematical details. I was wondering something, though... Is taking the mean of the support-set vectors always justified or a good idea, or is there some possible generalization? As an example, for some bird species the males are brightly colored while the females are more plainly colored. If I have both a female and a male bird in my support set, the average of the two might be a "strange" vector that is relatively distant from either a male or a female bird. I suspect the discussed method would probably still work as far as classifying birds goes, and of course you could work around this by having a female_bird and a male_bird class in the support set and adding the p_j values up. But on the other hand, they are still examples of the same "thing", i.e. a bird; it's just that the "thing" in question has several forms. Is there some smart way of approaching such a problem?
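One possible generalization (an assumption on the editor's part, not something covered in the lecture): keep several prototypes per class, one per mode of appearance, and score a class by its best-matching prototype instead of a single averaged one. A toy sketch:

```python
# Hedged sketch of a multi-prototype variant (not from the lecture): keep a
# few prototypes per class (e.g. male/female plumage modes) and score each
# class by its closest prototype, so a bimodal class isn't averaged away.
import torch
import torch.nn.functional as F

def class_score(query_emb, prototypes):
    """query_emb: (d,) unit vector; prototypes: (m, d) unit vectors of one class."""
    return (prototypes @ query_emb).max()    # best-matching mode wins

# Toy example: a "bird" class with two modes versus a single-mode class.
bird = F.normalize(torch.randn(2, 64), dim=1)   # male + female prototypes
other = F.normalize(torch.randn(1, 64), dim=1)
q = F.normalize(torch.randn(64), dim=0)
scores = torch.stack([class_score(q, bird), class_score(q, other)])
probs = torch.softmax(scores, dim=0)
```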
This excellent and instructive explanation has been really useful to me, thank you very much!!
@woddenhorse · 3 years ago
Amazing Playlist 🔥🔥
@amoldumrewal · 3 years ago
Hey, really nice series. Great work, man! I have one question: since we are now using cosine similarity, the inputs to softmax are in the range [-1, 1]. This might cap the maximum probability for the correct class, since it can only go up to ~89% in the best-case scenario. Are you aware of a way to make the probability of a 100%-sure prediction equal to ~1.0?
@EranM · 2 years ago
Take the absolute value of the cosine similarity.
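Note that the absolute value only changes the range to [0, 1], so the softmax is still capped. Another common trick, standard in cosine-softmax classifiers though not necessarily what this lecture uses, is to multiply the cosine logits by a scale (inverse temperature), fixed or learnable, so the softmax can saturate toward 1.0:

```python
# Hedged sketch: scaling cosine logits by s lets softmax saturate near 1.0
# even though each individual cosine is confined to [-1, 1].
import torch

cosines = torch.tensor([0.9, -0.2, -0.5])    # query vs. 3 class mean vectors
print(torch.softmax(cosines, dim=0))         # max prob only ~0.63 here
s = torch.nn.Parameter(torch.tensor(10.0))   # learnable scale (temperature)
print(torch.softmax(s * cosines, dim=0))     # -> approximately [1.0, 0, 0]
```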
@essuanlive · 3 years ago
Thank you so much for a wonderful explanation.
@zhalehmanbari6172 · 2 years ago
Fantastic 🌸
@williamberriosrojas595 · 3 years ago
Great videos!! Thanks a lot :)
@felixschmid7849 · 3 years ago
Great explanations! Thanks!
@sonninh8987 · 3 years ago
Great explanation
@himalayasinghsheoran1255 · 4 years ago
Great explanation.
@serviofernandolimareina5365 · 2 years ago
Excellent!
@feidu11 · 2 years ago
Many thanks
@EranM · 2 years ago
Did anyone in here implement this?
@Amir-tg9nf · 3 years ago
Thanks a lot
@EranM · 2 years ago
Sorry, but the fine-tuning approach just degraded my model's accuracy.