Self-training with Noisy Student improves ImageNet classification (Paper Explained)

17,159 views

Yannic Kilcher

1 day ago

Comments: 46
@bluel1ng · 4 years ago
24:15 I think it is very important that they reject images with high-entropy soft pseudo-labels (= low model confidence) and only use the most confident images per class (>0.3 probability). The images the model is confident about increase generalization the most, since they get classified correctly and then extend the class region through noise and augmentation, especially when previously unseen images lie at the "fringe" of the existing training set or closer to the decision boundary than other samples. Since the whole input space is always mapped to class probabilities, a region can be mapped to a different/wrong class even though the model has seen little evidence there. Through new examples this space can be "conquered" by the correct class. And of course, each correctly classified new image also yields new augmented views, which increases this effect.
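A minimal sketch of the confidence-based filtering described above (assuming PyTorch; the 0.3 threshold mirrors the comment, while the function name and shapes are illustrative rather than the paper's actual code):

```python
import torch
import torch.nn.functional as F

def filter_pseudo_labels(teacher_logits, min_confidence=0.3):
    """Keep only unlabeled images whose soft pseudo-label is confident enough.

    teacher_logits: (N, num_classes) raw teacher outputs for a batch of unlabeled images.
    Returns the accepted soft pseudo-labels and a boolean mask over the batch.
    """
    probs = F.softmax(teacher_logits, dim=1)   # soft pseudo-labels
    max_prob, _ = probs.max(dim=1)             # confidence = highest class probability
    keep = max_prob > min_confidence           # low max-prob ~ high entropy -> reject
    return probs[keep], keep
```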
@emilzakirov5173 · 4 years ago
I think the problem here is that they use softmax. If you used sigmoid, then for unconfident predictions the model would simply output near-zero probabilities for all classes. That would remove any need for rejecting images.
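A tiny illustration of the softmax-vs-sigmoid point (hypothetical logits, only to show the difference in behaviour):

```python
import torch

logits = torch.tensor([[-2.0, -1.5, -1.8]])   # model unsure about all three classes

softmax_probs = torch.softmax(logits, dim=1)  # forced to sum to 1: ~[0.26, 0.43, 0.32]
sigmoid_probs = torch.sigmoid(logits)         # independent per class: ~[0.12, 0.18, 0.14]

# With softmax some class always looks "likely"; with per-class sigmoids an
# unconfident prediction can simply stay low for every class.
print(softmax_probs)
print(sigmoid_probs)
```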
@mohamedbahabenticha8624 · 2 years ago
Your explanation is amazing and very clear for a very interesting work! Inspiring for my own work!
@omarsilva924 · 4 years ago
Wow! What a great analysis. Thank you
@AdamRaudonis · 4 years ago
Super great explanation!!!
@sanderbos4243 · 4 years ago
39:12 I'd love to see a video on minima distributions :)
@MrjbushM · 4 years ago
Crystal clear explanation, thanks!!!
@alceubissoto · 4 years ago
Thanks for the amazing explanation!
@kanakraj3198 · 3 years ago
During the first round of training, the "real" teacher model (EfficientNet-B5) was trained with augmentations, dropout, and stochastic depth, so it becomes "noisy". But for inference it was mentioned to use a "clean", not noised, teacher. Then why was it trained with noise in the first place?
@herp_derpingson · 4 years ago
11:56 Never heard of stochastic depth before. Interesting. After the pandemic is over, have you considered giving talks at conferences to gain popularity?
@YannicKilcher · 4 years ago
Yea I don't think conferences will have me :D
@herp_derpingson · 4 years ago
@YannicKilcher It's a numbers game. Keep swiping right.
@mehribaniasadi6027 · 3 years ago
Thanks, great explanation. I have a question though. At minute 14:40, when the steps of Algorithm 1 (the Noisy Student method) are explained, it goes like this: step 1 is to train a noised teacher, but then in step 2, to label the unlabeled data, they use a not-noised teacher for inference. So I don't get why they train a noised teacher in step 1 when they eventually use a not-noised teacher for inference. I get that in the end the final network is noised, but during the intermediate steps (iterations) they use not-noised teachers for inference, so how are the noised teachers trained in those steps actually used?
@YannicKilcher · 3 years ago
It's only used via the labels it outputs.
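For readers following this exchange, a minimal sketch of one Noisy Student iteration (an illustrative outline in PyTorch, not the paper's code; data augmentation is assumed to happen inside the loaders, and pseudo-labels are kept in memory for brevity). The point is that the teacher runs clean, in eval mode, only to produce labels, while the student is trained noised:

```python
import torch
import torch.nn.functional as F

def noisy_student_iteration(teacher, student, labeled_loader, unlabeled_loader, optimizer, epochs=1):
    # Pseudo-label unlabeled images with a CLEAN teacher (eval mode: no dropout, no stochastic depth).
    teacher.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            pseudo.append((images, torch.softmax(teacher(images), dim=1)))  # soft pseudo-labels

    # Train a NOISED student (dropout / stochastic depth active, augmented inputs)
    # on labeled data plus the pseudo-labeled data.
    student.train()
    for _ in range(epochs):
        for images, targets in labeled_loader:
            loss = F.cross_entropy(student(images), targets)
            optimizer.zero_grad(); loss.backward(); optimizer.step()
        for images, soft_targets in pseudo:
            log_probs = torch.log_softmax(student(images), dim=1)
            loss = -(soft_targets * log_probs).sum(dim=1).mean()  # cross-entropy with soft labels
            optimizer.zero_grad(); loss.backward(); optimizer.step()

    return student  # the trained student becomes the (clean) teacher of the next iteration
```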
@BanditZA · 3 years ago
If it’s just due to augmentation and model size why not just augment the data the teacher trains on and increase the size of the teacher model? Is there a need to introduce the “student”?
@YannicKilcher · 3 years ago
It seems like the distillation itself is important, too
@blanamaxima · 4 years ago
I would not say I am surprised after the double descent paper... I would have thought someone did this already.
@karanjeswani21 · 4 years ago
With a PGD attack, the model is not dead. It's still better than random: random classification accuracy for 1000 classes would be 0.1%.
@aa-xn5hc · 4 years ago
Great great channel....
@roohollakhorrambakht8104 · 4 years ago
Filtering the labels based on the confidence level of the model is a good idea, but the entropy of the predicted distribution is not necessarily a good indicator of that. This is because the probability outputs of the classifier are not calibrated and only express relative confidence (relative to the other labels). There are many papers on ANN uncertainty estimation, but I find this one from Kendall to be a good sample: arxiv.org/pdf/1703.04977.pdf
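For concreteness, this is the kind of entropy-based confidence score being debated (a small sketch; as the comment notes, without calibration a low entropy need not match a low error rate):

```python
import torch

def prediction_entropy(logits):
    """Entropy of the softmax distribution: low entropy = the model looks confident."""
    probs = torch.softmax(logits, dim=1)
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)

# Two hypothetical 3-class predictions: one peaked, one nearly flat.
logits = torch.tensor([[4.0, 0.0, 0.0],
                       [0.1, 0.0, -0.1]])
print(prediction_entropy(logits))  # ~tensor([0.18, 1.10]); the flat prediction has higher entropy
```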
@Fortnite_king954 · 4 years ago
Amazing review. Keep going
@hafezfarazi5513 · 4 years ago
@11:22 You explained DropConnect instead of Dropout!
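For anyone unsure about the distinction raised here, a small illustrative sketch (not from the paper): dropout randomly zeroes activations, DropConnect randomly zeroes individual weights.

```python
import torch

def dropout(x, p=0.5):
    # Dropout: zero whole activations at random, rescale to keep the expectation.
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1 - p)

def drop_connect_linear(x, weight, p=0.5):
    # DropConnect: zero individual WEIGHTS of a linear layer instead of its outputs.
    mask = (torch.rand_like(weight) > p).float()
    return x @ (weight * mask).t() / (1 - p)

x = torch.randn(2, 4)
w = torch.randn(3, 4)
print(dropout(x).shape, drop_connect_linear(x, w).shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```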
@samanthaqiu3416 · 4 years ago
@Yannic please consider making a video on RealNVP/NICE and generative flows, and on why there is such a fetish for tractable log-likelihoods
@veedrac · 4 years ago
This is one of those papers that makes so much sense they could tell you the method and the results might as well be implicit.
@JoaoVitor-mf8iq · 4 years ago
That deep-ensemble paper could be used here (38:40), for the multiple local minima that are almost as good as the global minimum
@MrjbushM · 4 years ago
Cool video!!!!!!
@cameron4814 · 4 years ago
@11:40 "depth dropout"? I think this paper describes it: users.cecs.anu.edu.au/~sgould/papers/dicta16-depthdropout.pdf
@muzammilaziz9979 · 4 years ago
I personally think this paper has more hacking than actual novel contribution. It's researcher bias that made them push the idea more and more. It seems like the hacks had more to do with getting the SOTA than with the main idea of the paper.
@pranshurastogi1130 · 4 years ago
Thanks, now I have some new tricks up my sleeve
@dmitrysamoylenko6775 · 4 years ago
Basically they achieve more precise learning on smaller data, and without labels, only from the teacher. Interesting
@tripzero0 · 4 years ago
Trying this now: resnet50 -> efficientnetB2 -> efficientnetB7. The only problem is that it's difficult to increase the batch size as the model size increases :(.
@mkamp · 4 years ago
Because of your GPU memory limitations? Have you considered gradient accumulation?
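A minimal sketch of the gradient-accumulation trick suggested here (a generic PyTorch training-loop pattern; names are illustrative): gradients from several small batches are summed before a single optimizer step, emulating a larger batch within the same GPU memory.

```python
import torch
import torch.nn.functional as F

def train_with_accumulation(model, loader, optimizer, accum_steps=4):
    """Emulate a batch accum_steps times larger than what fits in memory."""
    model.train()
    optimizer.zero_grad()
    for step, (images, targets) in enumerate(loader):
        loss = F.cross_entropy(model(images), targets)
        (loss / accum_steps).backward()       # scale so the summed gradient matches the big batch
        if (step + 1) % accum_steps == 0:     # one optimizer step per accum_steps mini-batches
            optimizer.step()
            optimizer.zero_grad()
```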
@tripzero0 · 4 years ago
@mkamp Didn't know about that until now. Thanks!
@tripzero0 · 4 years ago
I think this method somewhat depends on having a large-ish "good" initial dataset for the first teacher. I got my resnet50 network to 0.64 recall and 0.84 precision on a multilabel dataset, and the results were still very poor. Relabeling at a 0.8 threshold produces only one or two labels per image to train students from, so a lot of labels get missed from there on. The certainty of getting those few labels right increases, but I'm not sure the trade-off is worth it.
@thuyennguyenhoang9473 · 4 years ago
It's now top 2 in classification; top 1 is FixEfficientNet-L2
@Alex-ms1yd · 4 years ago
At first it sounds quite counter-intuitive that this might work; I would expect the student to become more confident about the teacher's mistakes. But thinking it over, maybe the idea is that by using soft pseudo-labels with big batch sizes we are kind of bumping the student's top-1 closer to the teacher's top-5, and the teacher's mistakes are balanced out by other valid data points. The paper itself gives mixed feelings: on one side all those tricks distract from the main idea and its evaluation; on the other side that's what they need to do to beat SOTA, because everyone else does it too. But they tried their best to minimize this effect with many baseline comparisons.
@shivamjalotra7919 · 4 years ago
Great
@michamartyniak9255 · 4 years ago
Isn't it already known as Active Learning?
@arkasaha4412 · 4 years ago
Active learning involves a human in the loop, doesn't it?
@48956l · 3 years ago
This seems insanely resource-intensive lol
@impolitevegan3179 · 4 years ago
Correct me if I'm wrong, but if you trained a bigger model with the same augmentation techniques on ImageNet and performed the same trick described in the paper, you probably wouldn't get a much better model than the original, right? I feel like it's unfair to have a non-noised teacher and then say the student outperformed the teacher.
@YannicKilcher · 4 years ago
Maybe. It's worth a try
@impolitevegan3179 · 4 years ago
@YannicKilcher Sure, just need to get a few dozen GPUs to train on 130M images
@mehermanoj45 · 4 years ago
1st, thanks
@Ruhgtfo · 3 years ago
M pretty sure m silent