This makes me place him in the same category of artists as Prof. Gilbert Strang. I wonder how one develops such a skill for explaining the most intricate concepts so lucidly.
@clarkupdike65182 жыл бұрын
Obviously one would have to know the material inside and out but also at both a theoretical and applied level. I'm guessing he has honed his technique through years of experience with students and is able to dial into that sweet spot of just enough, but not too much, detail--while providing lots of context and interlinking of related concepts. He only goes theoretical, and judiciously at that, when it adds insight... instead of grandstanding to show off. He is truly a master at bringing students along for the ride on complex subjects.
@amoughnieh3 ай бұрын
This man answered a lot of lingering questions I had, even after reading multiple articles and papers and watching experts on YouTube.
@sakules8 жыл бұрын
"if i went to enter the Z space, you would have never heard from me again" haha so great
@chilling000007 жыл бұрын
Lectures 14, 15, and 16 are the best SVM videos on YouTube.
@saiprasad831110 жыл бұрын
A very valuable addition to all ML textbooks, in which one can easily get drowned in the mathematics involved. He is superb at eliciting the meaning of the mathematics without going into its complexities. Thanks for this course.
@livnatje11 жыл бұрын
An amazing lecturer. His talks are perfectly clear, insightful and interesting. Thanks for putting this online!
@alvincay1007 жыл бұрын
Just to reiterate what other commenters are saying: simply excellent. I had tried multiple sources and could not wrap my head around the kernel trick until I found these lectures. Abu-Mostafa separates the important concepts from the mathematical details so that you can understand the ideas at hand. It is easy to fill in the details later once you understand the important concepts.
@Kristofcoddens5 жыл бұрын
By far the best explanation of kernels in SVMs I have found online.
@vasjaforutube13 жыл бұрын
Professor Abu-Mostafa is such a cheerful person. His explanation is very clear, but I still have to pause the video every once in a while just to have a laugh.
@TheCpHaddock7 жыл бұрын
Sir you are one of the best professors ever! And not just in machine learning!
@ahlammallak88539 жыл бұрын
You are just amazing ^^ Many thanks, professor. I hope you can add more videos on further techniques such as PCA, ICA, and deep learning.
@gip850710 жыл бұрын
These lectures are really great. Exceptionally clear and fun to watch. Thank you so much for this
@zuodongzhou33349 жыл бұрын
Excellent lecture! The best explanation I have ever seen.
@etiennedoumazane75563 жыл бұрын
I think you just gave me a bit of intuition of what that mysterious kernel trick is... thanks!
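For anyone who wants to make that intuition concrete, here is a minimal numerical sanity check (my own illustration in Python, not code from the lecture): the second-order polynomial kernel (1 + xᵀx')² gives exactly the same number as an ordinary inner product taken after an explicit feature transform Φ into the higher-dimensional Z space, except that the kernel never constructs Φ at all.

```python
import numpy as np

def phi(x):
    """Explicit 2nd-order transform of x in R^2 whose inner product matches (1 + x.x')^2."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,   # linear terms
                     x1 * x1, x2 * x2,                   # squared terms
                     np.sqrt(2) * x1 * x2])              # cross term

def poly_kernel(x, xp):
    """Polynomial kernel computed entirely in the input space."""
    return (1.0 + x @ xp) ** 2

x  = np.array([0.3, -1.2])
xp = np.array([2.0,  0.5])

print(poly_kernel(x, xp))   # cheap: computed in the input space
print(phi(x) @ phi(xp))     # same value: an inner product taken in the Z space
```

The two printed values agree, which is the whole point of the trick: the inner product in the Z space is available without ever visiting the Z space.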
@jakebruce161711 жыл бұрын
This has been extremely helpful. Thanks for posting!
@brainstormingsharing13093 жыл бұрын
Absolutely well done and definitely keep it up!!! 👍👍👍👍👍
@ShaymaaKhalifa7 жыл бұрын
This professor is brilliant!
@acgalt3 жыл бұрын
Excellent lecture. Congratulations!
@JulienAmelot12 жыл бұрын
Conclusion : What happens in the Z space stays in the Z space :P
@markh14625 жыл бұрын
Lol
@kevinlin41577 жыл бұрын
Thank you, Professor Yaser. This clearly explains how the kernel reduces computation.
@fengzhengkite9 жыл бұрын
You are excellent
@5up5up6 жыл бұрын
I'm the happiest guy in the world, I finally understood what the freaking kernel trick is. Thank you! Thank you very much, sir.
@denoleyn8 жыл бұрын
Thank you for this great lecture. Everything is explained very, very clearly.
@dergarten7765 жыл бұрын
excellent demonstration of kernel methods!
@DAsiaView_4 жыл бұрын
Awesome lecture, had my interest the entire time!
@nicktgr15210 жыл бұрын
Fantastic presentation. Thank you very much.
@fierydino94024 жыл бұрын
Thank you for the precious lectures!!
@AndyLee-xq8wq Жыл бұрын
Great explanation!
@pt777809 жыл бұрын
"... terms will be dropping like flies" lol
@go2chayan7 жыл бұрын
I burst out laughing when he described a positive-semidefinite matrix in terms of a "sleeping vector" and a "standing vector" at 45:00.
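For anyone puzzled by that remark, my reading of it (notation may differ slightly from the slide): the condition being described is Mercer's condition, that the kernel matrix built from any set of points must be positive semidefinite, i.e. a row vector ("sleeping"), times the matrix, times the same column vector ("standing") can never come out negative:

$$z^\top K z \;=\; \sum_{n,m} z_n \, K(x_n, x_m) \, z_m \;\ge\; 0 \qquad \text{for every vector } z .$$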
@vman04911 жыл бұрын
18:17 blew my mind. LOL'ed at 26:30. All the elements of a great lecture! Excellent!
@jandal4876 жыл бұрын
Excellent course on Introduction to ML. Thank you professor :)
@Darshanhegde12 жыл бұрын
And again at 41:42, Prof. Yaser says that the third way to check that a kernel is valid is "who cares", referring to Mercer's theorem :)
@dipanjans8 жыл бұрын
Thanks a lot Prof. Yaser.
@BreatheThePureSerene8 жыл бұрын
Brilliant teacher
@sarnathk19464 ай бұрын
You are Pedhanna (Big brother) from now on! Thank you!
@emademad45 жыл бұрын
A question: he said that an objective function counting the number of misclassifications is NP-hard to optimize. Why? And if that is so, then in the soft-margin SVM the amount of violation needs to be minimized, and to do that you need to check every sample for whether it violates the margin or not, so it looks like the same operation he called NP-hard. If anyone knows where I'm wrong, I'd be glad to hear it.
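Not from the lecture itself, but one way to see the distinction (notation is mine): evaluating the violations for a fixed (w, b) is easy in both cases; the difficulty is the optimization over (w, b). Counting misclassifications is a discrete, non-convex objective, while the soft margin penalizes the total amount of violation through continuous slack variables, which keeps the whole problem a convex quadratic program:

$$\text{0-1 count (hard to optimize):}\quad \min_{w,b}\ \sum_{n=1}^{N} \big[\!\big[\, y_n(w^\top x_n + b) \le 0 \,\big]\!\big]$$

$$\text{soft margin (convex QP):}\quad \min_{w,b,\xi}\ \tfrac{1}{2} w^\top w + C \sum_{n=1}^{N} \xi_n \quad \text{s.t.}\ \ y_n(w^\top x_n + b) \ge 1 - \xi_n,\ \ \xi_n \ge 0 .$$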
@aztmln8 жыл бұрын
Very useful lecture. Thanks, Prof! Would love to hear more.
@chaitanyatanwar815110 жыл бұрын
Thanks, superb lecture.
@fatimatayeb6775 жыл бұрын
Great, dude. Keep it up.
@michaelmellinger23242 жыл бұрын
@45:30 Establish that the Z-space exists even if we don’t know what it is
@319813611 жыл бұрын
Thank you so much, it's much better than the class I attended for Pattern Recognition!
@mohamedsalem98062 жыл бұрын
This is brilliant!
@Darshanhegde12 жыл бұрын
It's hilarious :) Prof. Yaser at 31:30 says: " If I had gone to Z space (Which is infinite here), you would have never heard from me again :D "
@DiegoAToala2 жыл бұрын
Great lecture! thank you
@achronicstudent16 күн бұрын
I am a rookie MSc student and this is my first time learning these... uh... whatever these are... Everyone in the comments is saying "Whoa, now I understand, great explanation," etc., and I am just looking at the screen feeling dumb.
@alisiena70098 жыл бұрын
I have a problem: my dataset values are very small, in [-1, 0], and my approximation target is in [0, 1], but the training performance never reaches the best solution. How can I solve this problem?
@hson1987 жыл бұрын
Can you explain at 1:03:18 (slide 19) why 0
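The comment above appears to be cut off (probably by the "<" signs). Assuming it is asking about the condition 0 < α_n < C on that slide, my understanding (not a verbatim restatement of the lecture) is that this range identifies the margin support vectors: α_n strictly between 0 and C forces the slack to vanish, so the point sits exactly on the margin, while α_n = C allows a nonzero slack, i.e. a margin violation:

$$0 < \alpha_n < C \;\Rightarrow\; \xi_n = 0 \ \text{ and }\ y_n(w^\top x_n + b) = 1, \qquad \alpha_n = C \;\Rightarrow\; \xi_n \ge 0 \ \text{(violation possible)} .$$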
@NicolaPiovesan11 жыл бұрын
24:00 "so by doing this operation you have done an inner product in a infinite dimensional space. Congratulations!" - LOL :D
@111rave6 жыл бұрын
you are a really good lecturer!!! "Okay" :D
@kennethnavarro34962 жыл бұрын
I am not sure, but I am pretty sure that the equation for b at minute 36:59 is wrong. When I solved it I got almost the same thing, except instead of y_m I got 1/y_m in the same spot.
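If I am reading the slide correctly, both forms are actually the same, because the labels are binary. For a margin support vector the constraint is tight, and since y_m is either +1 or -1, its reciprocal equals itself:

$$y_m (w^\top x_m + b) = 1 \;\Rightarrow\; b = \frac{1}{y_m} - w^\top x_m = y_m - w^\top x_m, \qquad \text{since } y_m \in \{-1, +1\} \Rightarrow \tfrac{1}{y_m} = y_m .$$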
@ajayram1986 жыл бұрын
In slide 1 at 5:04 he talks about using the SVM with a nonlinear transform. Could someone here explain the difference between h and H? (A complex h but a simple H.)
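Not an authoritative answer, but as I understand the course: lowercase h is the single final hypothesis, which can look like a complicated nonlinear surface in the input space after the transform, while H is the hypothesis set, whose effective complexity is what matters for generalization. With SVMs the margin keeps that effective complexity small; the lectures quote a bound in terms of the number of support vectors (notation mine):

$$\mathbb{E}\!\left[E_{\text{out}}\right] \;\le\; \frac{\mathbb{E}\!\left[\#\,\text{support vectors}\right]}{N - 1}$$

so even a complex-looking h generalizes well as long as few support vectors are needed.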
@sddyl12 жыл бұрын
Fabulous! Great Intuition!
@Nestorghh12 жыл бұрын
Great class! Thanks a lot!
@nishanthkanala8 жыл бұрын
Just Brilliant!!!
@明焕李6 жыл бұрын
Really well explained!
@michaelmellinger23242 жыл бұрын
@39:50 The whole idea of the kernel is that you don’t visit the Z-Space
@apeman529111 жыл бұрын
56:51 - I don't know, that still looks pretty complicated. 59:01 - Okay, that was pretty neat. 59:29 - Jaw hit the floor.
@siddharthsvnit6 жыл бұрын
1:02:30 Can't the slack still be zero? Since 0 * 0 = 0, the condition is still satisfied.
@markh14625 жыл бұрын
No, because you're also maximizing over beta. So the only reason we would ever let beta be zero is when xi (the slack) is nonzero.
@mementomori67345 жыл бұрын
Mark H, I don't understand.
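In case it helps, here is my reading of the KKT conditions around 1:02:30 (not a verbatim quote of the slide). The slack can indeed be zero when beta is zero as well, since the complementary-slackness product is satisfied whenever either factor vanishes; what the condition rules out is both being nonzero at once. Combined with the relation between the multipliers, this sorts the points into the cases discussed in the lecture:

$$\beta_n \xi_n = 0, \qquad \alpha_n + \beta_n = C$$
$$\xi_n > 0 \;\Rightarrow\; \beta_n = 0 \;\Rightarrow\; \alpha_n = C, \qquad 0 < \alpha_n < C \;\Rightarrow\; \beta_n > 0 \;\Rightarrow\; \xi_n = 0 .$$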
@gcgrabodan8 жыл бұрын
If you take the derivative of the Lagrangian of the soft-margin SVM with respect to w, why does xi (the error) drop out? It should depend on w, shouldn't it? I.e., different margins will have different errors. So it seems to me like a super complicated problem... Thanks for the help ;)
@Bing.W7 жыл бұрын
Different margins do have different errors, but in the Lagrangian xi is treated as a separate optimization variable, not as a function of w. That's why xi does not depend on w when you differentiate. In other words, for the same hyperplane (fixed w) you can consider different allowed errors xi; the link between them is enforced by the constraints, not by substitution.
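A sketch of the computation as I understand the formulation (notation may differ slightly from the slides): w, b, and the slacks are independent variables of the Lagrangian, so the xi terms simply vanish when differentiating with respect to w, and xi gets its own stationarity condition instead:

$$\mathcal{L}(w, b, \xi, \alpha, \beta) \;=\; \tfrac{1}{2} w^\top w + C \sum_n \xi_n \;-\; \sum_n \alpha_n \big( y_n (w^\top x_n + b) - 1 + \xi_n \big) \;-\; \sum_n \beta_n \xi_n$$
$$\frac{\partial \mathcal{L}}{\partial w} = w - \sum_n \alpha_n y_n x_n = 0, \qquad \frac{\partial \mathcal{L}}{\partial \xi_n} = C - \alpha_n - \beta_n = 0 .$$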
@0xaugustus7 жыл бұрын
Absolute genius !
@reinerwilhelms-tricarico344 Жыл бұрын
I like this a lot for its great clarity. Except this: when you get to "Then call your quadratic programming code to hand over the alphas", you may end up with a big can of worms, because nobody seems to know how to call any of the quadratic programming software that is available. There seem to be hundreds of packages around, usually with miserable documentation. You may be left with rolling your own. 😁
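For what it's worth, here is a minimal sketch of handing the soft-margin dual to an off-the-shelf QP solver, assuming the cvxopt package is installed (my own illustration, not code from the course, and the variable names are mine):

```python
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_alphas(X, y, C=1.0, kernel=lambda a, b: float(a @ b)):
    """Solve the soft-margin dual: min 0.5*a'Qa - 1'a  s.t.  y'a = 0,  0 <= a <= C."""
    N = len(y)
    K = np.array([[kernel(X[n], X[m]) for m in range(N)] for n in range(N)])
    Q = np.outer(y, y) * K                              # Q[n,m] = y_n y_m K(x_n, x_m)
    P = matrix(Q)
    q = matrix(-np.ones(N))                             # the -sum(alpha) term
    G = matrix(np.vstack([-np.eye(N), np.eye(N)]))      # encodes -a <= 0 and a <= C
    h = matrix(np.hstack([np.zeros(N), C * np.ones(N)]))
    A = matrix(y.reshape(1, -1).astype(float))          # equality constraint y'a = 0
    b = matrix(0.0)
    solvers.options['show_progress'] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol['x']).flatten()

# Toy usage: two separable clusters with a linear kernel.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alphas = svm_dual_alphas(X, y, C=10.0)
print(alphas)   # support vectors are the points with alpha noticeably above zero
```

Dedicated SVM packages (libsvm and its wrappers) hide all of this, but the generic-QP route above is roughly what the lecture's "hand it to quadratic programming" step means in practice.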
@PradiptoDas-SUNYBuffalo11 жыл бұрын
56:53 - Did not see that coming - why was he so proud of the equation? 59:46 - Memorial service for beta! Classic!
@jiewang771310 жыл бұрын
execellent "OKs"
@jackeown5 жыл бұрын
The previous lecture is very helpful for understanding this: kzbin.info/www/bejne/m3nWdqWiha-Ki7c
@behrozkhan200012 жыл бұрын
Ok!
@roknyakhavein58334 жыл бұрын
We R in Z space.
@diegoiruretagoyenaoliveri60506 жыл бұрын
OKAY
@MohamedAtia11 жыл бұрын
ok?
@brainstormingsharing13093 жыл бұрын
👍👍👍👍👍
@nooneknown5 жыл бұрын
31:20
@petar29able3 жыл бұрын
I'm too stupid for this; why am I here anyway?
@tianchi198910 жыл бұрын
This is almost the best explanation of the kernel that I have found. But the tone he uses makes me really sleepy. :(