Thanks again :-) Loved the critique at the end. Also, it's nice of them to report these results; lots of papers would silence that to make it seem like the method brought all the gains!
@herp_derpingson · 4 years ago
78% accuracy from 1 image per class. This blew my mind. What a time to be alive.
@TeoZarkopafilis · 4 years ago
HOLD ON TO YOUR PAPERS
@meudta293 · 4 years ago
my brain matter is all over the floor right now hhh
@matthewtang1489 · 4 years ago
@@TeoZarkopafilis Woah! A fellow scholar here!
@shrinathdeshpande5004 · 4 years ago
definitely one of the best ways to explain a paper!! Kudos to you
@sora4222 · 2 years ago
I loved the critique at the end. Thanks.
@hihiendru · 4 years ago
Just like UDA, the emphasis is on the way you augment. And poor UDA got rejected. P.S. LOVE your breakdowns, please keep them coming.
@vishalahuja2502 · 3 years ago
Yannic, nice coverage of the paper. I have one question: at 15:05, you explain that the pseudo-label is used only if the confidence is above a certain threshold (which is also a hyperparameter). Where is the confidence coming from? It is well known that the confidence score coming out of softmax is not reliable. Can you please explain?
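To the question above: in FixMatch the confidence is simply the maximum softmax probability of the model's prediction on the weakly augmented image, and the pseudo-label is kept only when that value clears the threshold (0.95 in the paper). A minimal numpy sketch of this filtering step (function names are mine, not from the paper):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_label(logits, threshold=0.95):
    """Return hard pseudo-labels plus a mask of which ones clear the threshold."""
    probs = softmax(logits)
    labels = probs.argmax(axis=-1)
    confident = probs.max(axis=-1) >= threshold
    return labels, confident

# one confident and one uncertain prediction
logits = np.array([[5.0, 0.1, 0.1],
                   [1.0, 0.9, 0.8]])
labels, confident = pseudo_label(logits)
```

The commenter's concern still stands: the max-softmax score is known to be miscalibrated, so the threshold acts as a heuristic filter rather than a true probability of correctness.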
@jurischaber6935 · 2 years ago
Thanks again...Great teacher for us students. 🙂
@hungdungnguyen8258 · 8 months ago
Well explained. Thank you!
@AmitKumar-ts8br · 3 years ago
Really nice and concise explanation...
@abhishekmaiti8332 · 4 years ago
In what order do they train the model? Do they feed the labelled images first and then the unlabelled ones? Also, can two unlabelled images of the same class have different pseudo-labels?
@YannicKilcher · 4 years ago
I think they do everything at the same time. I guess the labelled images can also go the unlabelled way, yes. But not the other way around, obviously :)
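A minimal sketch of that joint scheme (the helper names and the toy `predict` stand-in are mine, not from the paper): each training step consumes a labelled and an unlabelled batch together, and pseudo-labels are recomputed from the current model at every step, which is also why the same unlabelled image can receive different pseudo-labels over the course of training.

```python
import numpy as np

def joint_step(labeled_x, labeled_y, unlabeled_x, predict, threshold=0.95):
    """One training step that uses a labelled and an unlabelled batch together.
    `predict` stands in for the model's current softmax output."""
    sup_probs = predict(labeled_x)        # supervised path, uses the true labels
    unsup_probs = predict(unlabeled_x)    # pseudo-labels recomputed every step
    pseudo_y = unsup_probs.argmax(axis=-1)
    keep = unsup_probs.max(axis=-1) >= threshold
    return sup_probs, pseudo_y, keep

# the same unlabelled images, pseudo-labelled at two different training stages
x_u = np.zeros((3, 4))  # three toy unlabelled 'images'
early = lambda x: np.tile([0.96, 0.04], (len(x), 1))  # early model: class 0
late = lambda x: np.tile([0.10, 0.90], (len(x), 1))   # later model: class 1
_, y1, k1 = joint_step(np.zeros((2, 4)), np.array([0, 1]), x_u, early)
_, y2, k2 = joint_step(np.zeros((2, 4)), np.array([0, 1]), x_u, late)
```

Here `y1` and `y2` differ for the very same images, and the second set is dropped entirely because 0.90 is below the confidence threshold.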
@tengotooborn · 4 years ago
Something I find weird: isn't a constant pseudo-label always correct? It seems there are only positive examples in the scheme that uses the unlabelled data, so nothing in the loss forces the model not to output the same pseudo-label for everything. Yes, one can argue that this would fail the supervised loss, but then the question becomes: how is the supervised loss weighted w.r.t. the unsupervised loss? In any case, it seems one would also want negative examples in the unsupervised case.
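On the weighting question: in FixMatch the total loss is l_s + lambda_u * l_u with lambda_u = 1, where the unsupervised term is averaged over the whole unlabelled batch but zeroed out below the confidence threshold. A toy numpy sketch (helper names are mine) showing why a collapsed model that outputs one constant pseudo-label is not a free lunch:

```python
import numpy as np

def nll(probs, targets):
    # per-example negative log-likelihood of the target class
    return -np.log(probs[np.arange(len(targets)), targets] + 1e-12)

def total_loss(sup_probs, sup_y, unsup_probs, pseudo_y, keep, lambda_u=1.0):
    """Supervised CE plus a weighted, confidence-masked pseudo-label CE."""
    l_sup = nll(sup_probs, sup_y).mean()
    l_unsup = (nll(unsup_probs, pseudo_y) * keep).mean()  # masked batch mean
    return l_sup + lambda_u * l_unsup

# a collapsed model that always predicts class 0:
# the unsupervised term is near zero (the model agrees with its own
# pseudo-labels), but the supervised term is huge on true class-1 examples
sup_probs = np.array([[0.99, 0.01], [0.99, 0.01]])
sup_y = np.array([0, 1])
unsup_probs = np.array([[0.99, 0.01]] * 4)
pseudo_y = np.array([0, 0, 0, 0])
keep = np.ones(4)
loss = total_loss(sup_probs, sup_y, unsup_probs, pseudo_y, keep)
```

So the unsupervised term alone would indeed be happy with a constant output; it is the supervised term (plus the fact that strong augmentation must match pseudo-labels from weak augmentation) that pushes back against collapse.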
@christianleininger2954 · 4 years ago
Really good job, please keep going!
@ramonbullock6630 · 4 years ago
I love this content :D
@NooBiNAcTioN1334 · 3 years ago
Fantastic!
@reginaldanderson7218 · 4 years ago
Nice edit
@Manu-lc4ob · 4 years ago
What software are you using to annotate papers, Yannic? I am using MarginNote but it does not seem as smooth.