RL-1E: Value Functions (8:44, 3 years ago)
RL-1D: Rewards and Returns (6:43, 3 years ago)
BERT for pretraining Transformers (15:53)
17-1: Monte Carlo Algorithms (36:01, 3 years ago)
3-1: Insertion Sort (17:39, 4 years ago)
6-1: Binary Tree Basics (6:31, 4 years ago)
2-3: Skip List (20:40, 4 years ago)
2-2: Binary Search (10:36, 4 years ago)
Comments
@Lonely_trader (2 days ago)
Great explanation
@MVaralakshmi-f9d (10 days ago)
Excellent and outstanding.... Thank you so much. Please add time complexities for all operations
@santanubanerjee5479 (18 days ago)
What does it mean when the gradient propagates back to the CNN as well? What is changed in the CNN?
@santanubanerjee5479 (18 days ago)
I think I need to take another look at the CNN parameters!
@x-Factor461 (1 month ago)
Thank you for this video. It's awesome
@yarasultan3433 (1 month ago)
nice
@AbhishekSinghSambyal (1 month ago)
Which app do you use to make presentations? How do you hide some images/arrows in the slides like an animation? Thanks.
@stracci_5698 (2 months ago)
Aren't the Siamese networks performing fine-tuning, since the model weights are learned to perform the task?
@stracci_5698 (2 months ago)
This was very clear, thank you!
@impulse1712 (2 months ago)
What a detailed explanation, loved the way you explain things, thank you very much sir.
@parmanandchauhan6182 (2 months ago)
Great explanation. Thank you
@ronalkobi4356 (3 months ago)
Wonderful explanation!👏
@haroon180 (4 months ago)
This is hands down the best explanation of Siamese networks on KZbin
@DrAIScience (4 months ago)
How is data A trained? I mean, what is the loss function? Does it use only the encoder, or both the encoder and decoder?
@Hshjshshjsj72727 (4 months ago)
It's 2024, please stop using a potato as a microphone
@MicheleMaestrini (5 months ago)
Thank you so much for this series of lectures and slides. I am doing a thesis on few-shot learning and this has really helped me understand the fundamentals of this algorithm.
@eugenetsiukhlov7127 (5 months ago)
Absolutely gorgeous! Thank you so much!
@chawkinasrallah7269 (5 months ago)
The class token 0 is in the embedding dimension; does that mean we should add a linear layer from the embedding size to the number of classes before the softmax for classification?
@SpenceMan01 (5 months ago)
12:51 Just had to say that the two animals in your support set image aren't hamsters. Those are guinea pigs.
@kutilkol (5 months ago)
This is supposed to be English?
@gemini_537 (6 months ago)
I feel like an autoencoder could be used for the classification task and might work better, because an autoencoder maps the input into a latent space which captures the patterns.
@supersonics9196 (6 months ago)
Please explain deletion when you have time, especially how to keep track of the pointers along the search path
@supersonics9196 (6 months ago)
Very clear explanation, professor
@topgunjinhyung (6 months ago)
Nice explanation. Thanks
@loading_700 (6 months ago)
Best lecture about few-shot learning! Thank you
@AiDrug (6 months ago)
Thank you so much! Great explanation
@ai_lite (6 months ago)
Great explanation! Good for you! Don't stop making ML guides!
@geoskyr966 (6 months ago)
So the training set is much bigger than the support set? And I only use the support set to help with the classification of query images?
@vipinsou3170 (7 months ago)
Is there any implementation of this architecture, bro?? I can't find one.
@vinitsunita (7 months ago)
Best explanation of skip lists
@AjinkyaGorad (7 months ago)
Softmax associates while learning, and identifies during inference
@adityapillai3091 (7 months ago)
Clear, concise, and overall easy to understand for a newbie like me. Thanks!
@sapttt853 (8 months ago)
Very clear, nice
@diamond2869 (8 months ago)
Thank you!
@mohammedal-qudah9518 (8 months ago)
Thank you. I like the explanation
@stewartmuchuchuti20 (8 months ago)
Awesome. Well explained. Well simplified.
@arindamjain7536 (8 months ago)
Best video on this topic so far!
@alex-m4x4h (8 months ago)
At 19:26, shouldn't the number of weights be m*t+1? Or am I getting it wrong? Because we have c0 as well
@user-wr4yl7tx3w (9 months ago)
This is an excellent presentation
@이정민영어 (9 months ago)
Easy to understand. Thank you.
@drelvenkee1885 (9 months ago)
The best video so far. The animation is easy to follow and the explanation is very straightforward.
@mahdiyehbasereh (10 months ago)
The best lecture about Transformers that I've seen 🙏🏻🙏🏻🙏🏻🙏🏻🙏🏻
@joshithmurthy6209 (10 months ago)
Very good explanations, thank you very much
@jaylenzhang4198 (10 months ago)
Thank you, very explicit explanation. (You explain it so well, professor! Thanks!)
@mahdiyehbasereh (10 months ago)
That was great and helpful 🤌🏻
@konstantinrebrov675 (10 months ago)
Nihao for the algorithms lecture, Mr. Wang.
@fridericusrex9812 (7 months ago)
Wtf? "Nihao" doesn't mean what you think it does.
@fridericusrex9812 (7 months ago)
Racist pig
@thecheekychinaman6713 (11 months ago)
The best ViT explanation available. Understanding this is also key to understanding DINO and DINOv2
@sevovo (11 months ago)
CNN on images + positional info = Transformers for images
@SandaruwanFonseka (11 months ago)
Excellent!
@BeytullahAhmetKINDAN (11 months ago)
That was educational!
@Peiying-h4m (11 months ago)
Best ViT explanation ever!!!!!!