24:15 I think it is very important that they reject images with high-entropy soft pseudo-labels (= low model confidence) and only use the most confident images per class (>0.3 probability). Images the model is confident about improve generalization the most: they get classified correctly and then extend the class region through noise and augmentation, especially when previously unseen images lie at the "fringe" of the existing training set or closer to the decision boundary than other samples. Since the whole input space is always mapped to class probabilities, a region can be mapped to a different/wrong class even though the model has seen little evidence there. Through new examples this space can be "conquered" by the correct class. And of course, each correctly classified new image also yields new augmented views, which amplifies the effect.
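The filtering step this comment describes can be sketched in a few lines (the 0.3 top-probability threshold is the paper's value as quoted above; the entropy cutoff is an illustrative assumption, not the paper's exact rule):

```python
import numpy as np

def filter_confident(probs, min_top_prob=0.3, max_entropy=0.5):
    """Keep pseudo-labeled images whose teacher output is confident:
    high top-class probability and low entropy of the soft label."""
    probs = np.asarray(probs)
    top = probs.max(axis=1)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.nonzero((top > min_top_prob) & (entropy < max_entropy))[0]

teacher_probs = [
    [0.90, 0.05, 0.05],  # confident, low entropy -> kept
    [0.34, 0.33, 0.33],  # near-uniform, high entropy -> rejected
]
kept = filter_confident(teacher_probs)
```

Note that with few classes the top-probability test alone is weak (a uniform 3-class output already exceeds 0.3), which is why the entropy check does the real work in this toy example.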
@emilzakirov5173 · 4 years ago
I think the problem here is that they use softmax. If you used sigmoid, then for unconfident predictions the model would simply output near-zero class probabilities. That would alleviate any need for rejecting images.
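A toy sketch of this point (illustrative, not the paper's setup): softmax renormalizes, so even uniformly low logits yield probabilities that sum to 1 and cannot all be near zero, whereas independent sigmoids can all stay low, signalling "no confident class":

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid(logits):
    return 1.0 / (1.0 + np.exp(-logits))

# Uniformly low logits: the model has little evidence for any class.
logits = np.array([-2.0, -2.0, -2.0])

soft = softmax(logits)  # renormalized to sum to 1: uniform, can't say "none"
sig = sigmoid(logits)   # each independently ~0.12: every class can be "off"
```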
@mohamedbahabenticha8624 · 2 years ago
Your explanation is Amazing and very clear for a very interesting work! Inspiring for my work!!!
@omarsilva924 · 4 years ago
Wow! What a great analysis. Thank you
@AdamRaudonis · 4 years ago
Super great explanation!!!
@sanderbos4243 · 4 years ago
39:12 I'd love to see a video on minima distributions :)
@MrjbushM · 4 years ago
Crystal clear explanation, thanks!!!
@alceubissoto · 4 years ago
Thanks for the amazing explanation!
@kanakraj3198 · 3 years ago
During the first training, the "real" teacher model, EfficientNet-B5, was trained with augmentations, dropout, and stochastic depth, so the model becomes "noisy". But it was mentioned that inference uses a "clean", un-noised teacher. Then why did we train with noise in the first place?
@herp_derpingson · 4 years ago
11:56 Never heard of stochastic depth before. Interesting. After the pandemic is over, have you considered giving talks at conferences to gain popularity?
@YannicKilcher · 4 years ago
Yea I don't think conferences will have me :D
@herp_derpingson · 4 years ago
@@YannicKilcher It's a numbers game. Keep swiping right.
@mehribaniasadi6027 · 3 years ago
Thanks, great explanation. I have a question though. At minute 14:40, when the steps of Algorithm 1 (the Noisy Student method) are explained, it goes like this: step 1 is to train a noised teacher, but then in step 2, for labelling the unlabelled data, they use an un-noised teacher for inference. So I don't get why step 1 trains a noised teacher when they eventually use an un-noised teacher for inference. I get that in the end the final network is noised, but during the intermediate steps (iterations) they use un-noised teachers for inference, so how are the noised teachers trained in those intermediate steps actually used?
@YannicKilcher · 3 years ago
It's only used via the labels it outputs.
@BanditZA · 3 years ago
If it’s just due to augmentation and model size why not just augment the data the teacher trains on and increase the size of the teacher model? Is there a need to introduce the “student”?
@YannicKilcher · 3 years ago
It seems like the distillation itself is important, too
@blanamaxima · 4 years ago
I would not say I am surprised after the double descent paper... I would have thought someone had done this already.
@karanjeswani21 · 4 years ago
With a PGD attack, the model is not dead. It's still better than random: random classification accuracy for 1000 classes would be 0.1%.
@aa-xn5hc · 4 years ago
Great great channel....
@roohollakhorrambakht8104 · 4 years ago
Filtering the labels based on the confidence level of the model is a good idea, but the entropy of the predicted distribution is not necessarily a good indicator of that. This is because the probability outputs of the classifier are not calibrated; they only express confidence relative to the other labels. There are many papers on ANN uncertainty estimation, but I find this one from Kendall to be a good sample: arxiv.org/pdf/1703.04977.pdf
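One standard post-hoc remedy for the calibration problem raised here is temperature scaling; a minimal sketch (the temperature value is illustrative — in practice it would be fit on a held-out validation set):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def temperature_scale(logits, temperature):
    """Divide logits by T > 1 to soften over-confident softmax outputs;
    the ranking of classes (argmax) is unchanged."""
    return softmax(np.asarray(logits) / temperature)

logits = [4.0, 1.0, 0.0]
raw = softmax(np.asarray(logits))        # top prob ~0.94
cooled = temperature_scale(logits, 2.0)  # top prob ~0.74, same argmax
```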
@Fortnite_king954 · 4 years ago
Amazing review. Keep going
@hafezfarazi5513 · 4 years ago
@11:22 You explained DropConnect instead of Dropout!
@samanthaqiu3416 · 4 years ago
@Yannic please consider making a video on RealNVP/NICE and generative flows, and on this fetish of having tractable log-likelihoods
@veedrac · 4 years ago
This is one of those papers that makes so much sense they could tell you the method and the results might as well be implicit.
@JoaoVitor-mf8iq · 4 years ago
That deep-ensemble paper could be used here (38:40), for the multiple local minima that are almost as good as the global minimum
@MrjbushM · 4 years ago
Cool video!!!!!!
@cameron4814 · 4 years ago
@11:40 "depth dropout"? I think this paper describes it: users.cecs.anu.edu.au/~sgould/papers/dicta16-depthdropout.pdf
@muzammilaziz9979 · 4 years ago
I personally think this paper has more hacking than actual novel contribution. It's researcher bias that made them push the idea more and more. It seems the hacks had more to do with reaching SOTA than the main idea of the paper.
@pranshurastogi1130 · 4 years ago
Thanks, now I have some new tricks up my sleeve
@dmitrysamoylenko6775 · 4 years ago
Basically they achieve more precise learning from smaller data, and without labels, only from the teacher. Interesting
@tripzero0 · 4 years ago
Trying this now: resnet50 -> efficientnetB2 -> efficientnetB7. The only problem is that it's difficult to increase the batch size as the model size increases :(.
@mkamp · 4 years ago
Because of your GPU memory limitations? Have you considered gradient accumulation?
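A toy numpy sketch of the gradient-accumulation idea (names are illustrative): summing micro-batch gradients, each weighted by its share of the full batch, reproduces the full-batch gradient exactly for a simple model, which is why a memory-limited GPU can emulate a large batch by stepping only after several backward passes:

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean squared error for the linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
w = np.zeros(3)

full_batch_grad = grad_mse(w, X, y)  # one batch of 8

accumulated = np.zeros(3)
for i in range(0, 8, 2):             # four micro-batches of 2
    accumulated += grad_mse(w, X[i:i+2], y[i:i+2]) * (2 / 8)
# accumulated now equals full_batch_grad; only then take an optimizer step
```

(With batch-dependent layers such as batch norm the equivalence is only approximate, so accumulation is not always a drop-in replacement for a truly larger batch.)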
@tripzero0 · 4 years ago
@@mkamp didn't know about them until now. Thanks!
@tripzero0 · 4 years ago
I think this method somewhat depends on having a large-ish "good" initial dataset for the first teacher. I got my resnet50 network to 0.64 recall and 0.84 precision on a multilabel dataset, and the results were still very poor. Relabeling at a 0.8 threshold produces only one or two labels per image to train students from, so a lot of labels get missed from there on. The certainty of getting those few labels right increases, but I'm not sure that trade-off is worth it.
@thuyennguyenhoang9473 · 4 years ago
It's top 2 in classification; top 1 is FixEfficientNet-L2
@Alex-ms1yd · 4 years ago
At first it sounds quite counter-intuitive that this might work; I would expect the student to become more confident in the teacher's mistakes. But thinking it over, maybe the idea is that by using soft pseudo-labels with big batch sizes we are kind of bumping the student's top-1 closer to the teacher's top-5, and the teacher's mistakes are balanced out by other valid datapoints. The paper itself gives me mixed feelings: on one side, all those tricks distract from the main idea and its evaluation; on the other side, it's what they need to do to beat SOTA, because all the SOTA papers do this. But they tried their best to minimize this effect with many baseline comparisons.
@shivamjalotra7919 · 4 years ago
Great
@michamartyniak9255 · 4 years ago
Isn't this already known as active learning?
@arkasaha4412 · 4 years ago
Active learning involves a human in the loop, doesn't it?
@48956l · 3 years ago
This seems insanely resource-intensive lol
@impolitevegan3179 · 4 years ago
Correct me if I'm wrong, but if you trained a bigger model with the same augmentation techniques on ImageNet and performed the same trick described in the paper, then you probably wouldn't get a much better model than the original, right? I feel like it's unfair to have an un-noised teacher and then say the student outperformed the teacher.
@YannicKilcher · 4 years ago
Maybe. It's worth a try
@impolitevegan3179 · 4 years ago
@@YannicKilcher sure, just need to get a few dozen GPUs to train on 130M images