Going to make a video on SVM and how it uses this kernel trick. So if you want to understand the math behind one of the most common machine learning algorithms, *subscribe* to keep an eye out for it ;)
@pavan4651 3 years ago
A few of my friends wanted to get into ML at some point, but when they realized ML is just math, they went back to web dev. I love math, and your videos make me love ML even more. Keep it up!
@achillesarmstrong9639 5 years ago
Very good explanation, and in depth. Most other videos just explain without the formulas, which is too simplistic.
@uforskammet 1 year ago
Amazing! Elegant explanation.
@krishnasumanthmannala984 5 years ago
I think at 2:50 the subscript on y and x should be i. Thank you for the great explanation.
@abhinav3037 4 months ago
Amazing concept. It helped me a lot to learn the algorithm from the ground up.
@harry5094 5 years ago
Damn dude!! You really deserve a lot more views and subscriptions. Keep doing the great work.
@CodeEmporium 5 years ago
Thanks for the kind words homie!
@leif1075 4 years ago
@@CodeEmporium But polynomial regression is an example of a nonlinear function generally, right? Unless it just has several linear variables...
@chaosido19 1 year ago
OMG, I watched a plethora of videos and read so many articles trying to explain to me what the kernel method actually gains, and I finally understand it, not only conceptually but down to the technical level.
@CodeEmporium 1 year ago
Haha I made this video so long ago I thought I explained it in a real complex way. That said, super glad this was helpful
@johnfinn9495 4 years ago
At about 2:56 you need to explain to viewers the logic here: w* is not the solution, because alpha depends on w. Also, at about 4:16 you cancel K, although K is singular. At the least, you need some discussion of the range and null space of the kernel.
@User-cv4ee 3 years ago
I was wondering about how w* was the solution yet contained w in it. What is it supposed to be?
@ayushtankha413 1 year ago
Why do we get the 1/lambda term after taking the derivative in w*? (2:52)
@mikel5264 5 years ago
How do we get the vector 'k' in the last slide?
@spiritmoon3457 10 months ago
2:53 Why, after solving for w, do you still have w on both the left and right sides of the equation?
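For everyone asking about this step (the 1/lambda factor, and why w still appears on both sides): setting the gradient of the ridge objective to zero gives an implicit equation for w*, not a closed form. A short sketch in the video's notation:

J(w) = \sum_{n=1}^{N} (y_n - w^T \phi(x_n))^2 + \lambda ||w||^2

\nabla_w J = -2 \sum_{n=1}^{N} (y_n - w^T \phi(x_n)) \phi(x_n) + 2 \lambda w = 0

=> w^* = \frac{1}{\lambda} \sum_{n=1}^{N} (y_n - w^T \phi(x_n)) \phi(x_n) = \sum_{n=1}^{N} \alpha_n \phi(x_n), with \alpha_n = \frac{1}{\lambda} (y_n - w^T \phi(x_n))

The point is not that w* has been solved for; it is that the optimum lies in the span of the feature vectors, which is what lets the rest of the derivation be rewritten entirely in terms of kernels.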
@msjber5870 5 years ago
At 3:37, isn't the kernel matrix K of size (m, m) rather than (m, n)? Since we take the dot product of every observation (from X1 to Xm) with every other one, we get a square matrix (as you mentioned yourself just before), so K has size m * m and not m * n, unless I missed something. The last element of the first row, for instance, should be Phi(X1)t * Phi(Xm), not Phi(X1)t * Phi(Xn). Correct?
@josephchong783 4 years ago
I was wondering about this too. It should be m x m, or at least he should've stated m = n. Notation is really annoying when it is not explained.
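For what it's worth, a tiny numpy sketch (variable names are mine, not the video's) showing that the Gram matrix is square and symmetric, with one row and one column per sample:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # m = 5 samples, 3 raw features

def phi(x):
    # an explicit quadratic feature map, purely as an example
    return np.concatenate([x, x**2])

Phi = np.stack([phi(x) for x in X])    # (m, p) design matrix in feature space
K = Phi @ Phi.T                        # Gram matrix: K[i, j] = phi(x_i) . phi(x_j)
print(K.shape)                         # (5, 5) -- m x m, square and symmetric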
@Rafid_Ahmed101 4 months ago
(Translated from Bengali) You explained a whole lot of nothing, bro, just great 🔥🔥
@reubenridwan256 4 months ago
(Translated from Bengali) Really, bro explained nothing. Totally useless. He shows one equation after another and doesn't even explain them properly.
@rajkundaliya7796 2 years ago
Damn! Damn! Damn! Couldn't have been better. Thanks a lot! A lot! As simple as it can get!
@CodeEmporium 2 years ago
Thanks a lot!
@jamesfulton6981 6 years ago
I think there might be a mistake in your equation for alpha_n at ~3:08. The summation shouldn't be there.
@CodeEmporium 6 years ago
You're right. My bad. I'll pin your comment for now (until you/someone else points out some more mistakes). Thanks for the heads up!
@ditke71 5 years ago
@@CodeEmporium Under the summation everything should be indexed by i, not by n, and then the summation is OK.
@JI77469 1 year ago
I understand data scientists might want to shy away from Hilbert spaces, but this stuff is so much clearer if you just use the Finite Representer Theorem to reformulate Ridge regression as a simple regression problem involving the kernel matrix K. :) Just my opinion.
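For readers who haven't met it: in this finite setting the representer theorem says the ridge minimizer is a weighted sum of kernel functions centered on the training points,

f^*(x) = \sum_{n=1}^{N} \alpha_n \kappa(x_n, x), with \alpha = (K + \lambda I)^{-1} y,

which is exactly the prediction formula the video arrives at; the feature space never has to be touched directly.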
@frankbreeze9895 4 years ago
Dear author, how can we obtain the w* at 2:58? Do we obtain it by setting the derivative of J(w) to zero? Could you please explain it? Thanks.
@CodeEmporium 4 years ago
Yes. The idea is to find the weight vector that minimizes the cost.
@frankbreeze9895 4 years ago
@@CodeEmporium Thank you very much for your reply.
@birdman8375 2 years ago
@@CodeEmporium Can you do the same but now without regularization?
@ilyaskhan.1994 2 years ago
What kind of math is this? Vector calculus? Thanks.
@hrizony7847 1 year ago
Sorry, I don't understand the last part. In prediction, how do we calculate k(x)? Say we have all the training points, so we have K; but what does k(x) mean for a test point x? Thanks for the help, bro.
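For anyone else stuck here: k(x) simply stacks the kernel evaluations between the test point x and each training point. A minimal numpy sketch of the prediction step, assuming an RBF kernel (the kernel choice and names are illustrative, not from the video):

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # pairwise kernels: kappa(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def krr_predict(X_train, y_train, X_test, lam=0.1):
    N = len(X_train)
    K = rbf_kernel(X_train, X_train)    # (N, N) training Gram matrix
    k = rbf_kernel(X_train, X_test)     # (N, T): column j is k(x) for test point j
    alpha = np.linalg.solve(K + lam * np.eye(N), y_train)   # (K + lambda*I)^{-1} y
    return k.T @ alpha                  # y_hat(x) = y^T (K + lambda*I)^{-1} k(x)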
@lewiswesley66 2 years ago
Can someone explain where the 1/lambda comes from in the derivative?
@darasingh8937 3 years ago
Thank you for your videos! I love the fact that you show equations.
@CodeEmporium 3 years ago
Glad you like my style
@CodeEmporium 3 years ago
Thank you :)
@T_rex-te3us 1 year ago
Incredible explanation, thank you very much.
@CodeEmporium 1 year ago
Thanks so much for watching and commenting! Glad it is useful
@987654321ABC1000 5 years ago
This video is awesome, thanks for the lecture!
@univuniveral9713 4 years ago
I can't really get the difference between polynomial regression and nonlinear regression. Can you please help me with that?
@covariance5446 3 years ago
You could probably answer that question without any knowledge of linear algebra or regression. After all, what is the relationship between a polynomial function and a non-linear function? A non-linear function is simply any function that isn't of the form y = mx + b (or y-hat = b1x1 + b2x2 + ... + bnxn for multiple linear regression). A polynomial function is of the form y = [polynomial expression here]. Recall that a polynomial expression is one that only involves terms of the form cx^n + cx^(n-1) + ... + cx^0. Examples include linear functions, but also quadratics, cubics, and so forth. In short, a polynomial function can be linear (as in the case of y = mx + b) or non-linear (as in the case of, say, y = 3x^2 + x + 2). In linear regression, you are fitting a line to the data. In non-linear regression, you are fitting a curve (and it has to be a curve, not a line, since it's *non-linear*) to a set of data. BUT that curve doesn't have to be polynomial in nature (though it certainly can be). Whether that curve is defined by a polynomial function (not of order 1) OR something else depends on the circumstance! It might, for example, be an exponential function or a sinusoidal one. Recall that neither of the latter is a polynomial function, because they are not of the form y = c1x^n + c2x^(n-1) + ... + cnx^0. Hope that was a satisfying answer!
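To make that concrete, a tiny numpy sketch (my own example, not from the thread): polynomial regression is still linear regression, just on expanded features, which is exactly the phi(x) idea the video builds on.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 50)
y = 3 * x**2 + x + 2 + rng.normal(0, 0.5, 50)   # nonlinear in x

# feature map phi(x) = [1, x, x^2]: the model stays linear in the weights
Phi = np.stack([np.ones_like(x), x, x**2], axis=1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(w)   # roughly [2, 1, 3], recovering the true coefficients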
@ekbastu 4 years ago
Man, you are a champion. Thank you very much.
@CodeEmporium 4 years ago
I'd love to be one someday. Thanks a ton :)
@aspergale9836 4 years ago
Where does the $1/\lambda$ come from in the derivative at 2:47?
@rembautimes8808 3 years ago
It is indeed an awesome video, but viewers should have some background knowledge so that it is easy to follow. What is nice is that it ties in so many concepts in a single 7-minute video. A good warm-up video for those who have to go out and develop some code. I had resisted watching Code Emporium for a long time; now I'm a subscriber.
@CodeEmporium 3 years ago
You're right. I made this video while in grad school, so it was meant to serve as a refresher for me before exams :) That's why it's a little hard to follow. Maybe if I'd had my audience more in mind at the time, this video might have been more accessible.
@goldfishjy95 3 years ago
What does w* represent? Thank you.
@rishabtomar9837 1 year ago
It would be a great help in understanding this better if you could please make a video that takes a dataset of m samples and n features and shows how we would calculate this K matrix and use it to transform the features.
@Ashrafzaman37 4 years ago
Very nice presentation... like it.
@meloyang9326 4 years ago
Thank you a lot! It really solved my problems with kernel tricks, as I felt extremely puzzled in our professor's lecture.
@slithermilo 2 months ago
Can you re-record this but say may-trix instead of mat-rix?
@niteshkans 10 months ago
There are major errors in the equations that you solved. But yes, the explanation is on point.
@prithviprakash1110 3 years ago
Great explanation.
@purvanyatyagi2494 4 years ago
Can we use the same technique with SVMs? In an SVM we have to use the Lagrangian to get to the dual form.
@njmanikandan9408 3 years ago
Can someone tell me what a Gram matrix is?
@birdman8375 2 years ago
That's fine that you make predictions without phi, but in order to make predictions you need to compute w*. In your kernelized version you still need phi transpose, in addition to K, in order to estimate w*. Can you explain that better? How do you get rid of phi transpose in the kernelized version of w*?
@JI77469 1 year ago
He does this in the last section ("prediction"). You really want the actual prediction function y, and he shows the formula for it just in terms of K and not with phi floating around anywhere.
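Spelling that algebra out (with Phi the matrix whose rows are the phi(x_n)^T):

w^* = \Phi^T (K + \lambda I)^{-1} y

\hat{y}(x) = (w^*)^T \phi(x) = y^T (K + \lambda I)^{-1} \Phi \phi(x) = y^T (K + \lambda I)^{-1} k(x)

where k(x) has entries k_n(x) = \phi(x_n)^T \phi(x) = \kappa(x_n, x). Phi only ever appears multiplied against a feature vector, and every such product is a kernel evaluation, so neither Phi nor phi needs to be computed explicitly.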
@vintonchen6210 4 years ago
How did you solve for the optimal w* from J(w)? I'm new to matrix calculus; it would be great if you could give an explanation. Thank you.
@Праведныймиротворец 4 years ago
Use a Lagrange multiplier.
@sourasekharbanerjee9018 5 years ago
At 6:33, how is it possible to move 'y' in front of the kernel matrix without transposing it, compared to 4:45?
@pritamkhan4143 4 years ago
@CodeEmporium, it's a genuine question. Please do reply.
@RichardBrautigan2 3 years ago
Great video, thank you. However, there is a mistake at 6:30: y_pred = w'*phi(x) (w was phi'*(K+lambda)^-1*y). Hence w' = y'*(K+lambda)^-1*phi, and y_pred = y'*(K+lambda)^-1*phi*phi(x). But you wrote phi'*phi(x), and that is an inner product, not the kernel. phi is an n x m matrix, and m can be infinite (phi'*phi is the m x m covariance matrix and phi*phi' is the n x n kernel matrix). We know the kernel matrix; it cannot have infinite-by-infinite dimensions, so it should be an n x n matrix. There is also a problem with the notation you wrote at 3:32. If phi = [phi(x1)'; phi(x2)'; ...; phi(xn)'] (n x m), then phi*phi' is the kernel matrix, since phi' = [phi(x1) phi(x2) ... phi(xn)] and phi*phi' = [phi(x1)'phi(x1) phi(x1)'phi(x2) ... phi(x1)'phi(xn); ...; phi(xn)'phi(x1) phi(xn)'phi(x2) ... phi(xn)'phi(xn)] (n x n). But you wrote an m x n matrix at 3:32. That cannot be right: if m is not equal to n, it cannot be symmetric.
@RichardBrautigan2 3 years ago
It is hard to explain this in plain text, sorry. In short, m is the number of dimensions and n is the number of samples. At 3:32, K is shown as an m x n matrix, but K cannot be symmetric if m is not equal to n, and K must be symmetric, as you said. Hence phi is an n x m matrix, phi^T is an m x n matrix, and phi*phi^T is an n x n matrix.
@CharlieChen-h2q 1 month ago
@@RichardBrautigan2 Your result coincides with my derivation, and I believe the author messed up the dimensionality. Thank you so much for pointing these errors out.
@MSalem7777 4 years ago
Thank you! Great explanation.
@CodeEmporium 4 years ago
Thanks for the compliments
@mikel5264 5 years ago
Man, you are the best
@pratikdeoolwadikar5124 4 years ago
Thanks a lot, that cleared up many doubts!!
@sakshamsoni1869 4 years ago
How is 𝜑^T 𝜑 a covariance matrix?
@ignasa007 2 years ago
2:57 you mean \alpha_n = \frac{1}{\lambda} (y_n - w^T\phi(x_n)), without the \sum operator. Had me confused for a while.
@zoro8117 1 year ago
Dude thanks a lot ❤
@st0a 1 year ago
But why did you write \sum_{n=1}^{N} ||w||^2 when there's no n term in that part of the ridge regression equation? Very confusing, to say the least....
@johnfinn9495 4 years ago
I have a few related questions. First, the regression equations are overdetermined, i.e. the number N of data points is greater than the number M of basis functions, right? And this is why we need regularization (ridge regression, lambda > 0), right? If N > M and K is N x N, K has rank (at most) M, so K^{-1} does not exist, but (K + lambda I)^{-1} does. That is OK, and I suppose you can let lambda go to zero to minimize the amount of regularization. But then, if you use a Gaussian radial basis function, this is infinite dimensional (M goes to infinity), and the regularization is no longer needed. Does all this seem correct?
@JI77469 1 year ago
If you go to the section on prediction, you'll see that the size of M (even if M = infinity) is irrelevant, and what's required is the inversion of K + lambda I, which is "just" inverting an N x N matrix. I don't think M has anything to do with the degree of overfitting. So yes even for a Gaussian kernel (when M = infinity) you still want to regularize.
@Manuel-tf7qc 5 years ago
Just to make sure I understand your derivation: at 4:13, is it the symmetry of the kernel matrix (K = t(K)) that allows you to have [t(y) * K * alpha] instead of [t(alpha) * K * y], where 't' is transpose?
@MrMaipeople 5 years ago
Thank you so much for this excellent video.
@rishabhnandy38 4 years ago
Can you please help me solve one problem on this topic?
@JRAbduallah1986 3 years ago
Thanks for uploading this video. Using the kernel trick to get around phi phi transpose is a good solution; however, at the end we have the inverse of K + lambda*I, which is a big matrix. Do you have any solution for that?
@JI77469 1 year ago
To my knowledge the two practical methods that exist to avoid dealing with the often huge matrix K are "Random Features" and "Nystrom Method". But in general the huge matrix K and related issues (like inversion) are really why deep learning is often used when lots of data is available.
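A minimal sketch of the random-features idea for the RBF kernel (Rahimi and Recht's random Fourier features; the names and hyperparameters here are illustrative):

import numpy as np

def random_fourier_features(X, D=200, gamma=1.0, seed=0):
    # z(x) = sqrt(2/D) * cos(x @ W + b) gives z(a) . z(b) ~= exp(-gamma * ||a - b||^2)
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Ordinary ridge regression on these D-dimensional features then needs only a D x D solve instead of inverting the N x N matrix K + lambda*I.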
@birdman8375 2 years ago
Can you make a video like this for simple linear regression without regularization?
@TheAkashkajal 6 years ago
Great Video
@sathyakumarn7619 3 years ago
Good pacing in the video, but I should say it is perhaps a bit under-detailed. Maybe one could get through it after going through all your videos. I would request more detailed derivations in future videos.
@keyangke 5 years ago
I think there might be a mistake in your equation for J(alpha) at 3:21; it should be J(w*) instead.
@tekingunasar4189 3 years ago
What is the point of the summations in minute three if you don't even use the index variable? Why y_n and x_n and not y_i and x_i? Also, it would have been better if you had credited the Analytics Vidhya article you took this information from (same with your support vector machine video).
@shivamsisodiya9719 6 years ago
Please make more videos on GANs.
@saeedmakki9923 4 years ago
Thanks a lot!
@sammykmfmaths7468 6 months ago
Please fix it, the video frame is truncated at the margin 😢😢
@faisalwho 4 months ago
I dislike the overemphasis on the transpose, because all it is is a row dotted with a column, or a simple for-loop operation.
@unknown-otter 5 years ago
Finally, I understood!
@brettgattinger3338 4 years ago
maaaaaatrix
@johndagdelen815 4 years ago
Did you hire someone else to do the voice over for your video?
@danawen555 3 years ago
thanks!
@rrrprogram8667 5 years ago
Subscribed
@Leon-pn6rb 5 years ago
Still didn't get it! Ughhhhhhhh
@Trubripes 8 months ago
Dense but informative.
@devendraalawa4173 4 years ago
(Translated from Hindi) Cos theta won't come into it, bro.
@vtrandal 3 years ago
Maahtrix? It may not all be one world, but it does overlap. Matrix!
@CodeEmporium 3 years ago
Math Trix
@vtrandal 2 years ago
@@CodeEmporium I am glad you have a sense of humor.
@ahmad3823 1 year ago
Several typos for sure, but a good video!
@CodeEmporium 1 year ago
Yea. I have tried to get better about this over the years. Thanks for watching!
@a741987 6 years ago
Damn, this is so beautiful.
@CodeEmporium 6 years ago
Thanks! ;)
@amitupadhyay6511 4 years ago
Hey, make it easy; we came here to understand it easily, not the hard way, dammit.
@joaquingiorgi5809 1 year ago
What a fuck boy way to say maatrix 😂, great video though
@redberries8039 6 years ago
...but do I need to know that math to apply kernelisation? Really, do I?
@keyangke 5 years ago
from sklearn.kernel_ridge import KernelRidge
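That's the practical route. A quick sketch of usage (hyperparameters are illustrative; KernelRidge's alpha plays the role of the video's lambda):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 100)

model = KernelRidge(alpha=0.1, kernel="rbf", gamma=0.5)  # alpha = regularization strength
model.fit(X, y)
y_hat = model.predict(X)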
@lglgunlock 3 years ago
Wrong equations confuse people, please correct them.
@resitk7272 6 years ago
This is amazing, but there are errors I'm getting implementing this in Python :( Could anyone help?
@devendraalawa4173 4 years ago
(Translated from Hindi) It will.
@spherinder5793 9 months ago
mattrix
@techsavy5669 3 years ago
I am even more confused now.
@stephensanders9319 3 years ago
user name does not check out.
@piyushjaininventor 1 year ago
Perfect definition of How Not To Teach Machine Learning Concepts.
@choubro2 2 years ago
bro is the mahtrix contrarian
@CodeEmporium 2 years ago
Mathy trix
@watsufizzi 3 years ago
Sooooo, nobody is going to mention the ridiculous pronunciation of the word "matrix"?
@CodeEmporium 3 years ago
Mathtrix
@zacharybaca6276 5 months ago
mAAtrix
@sameure6486 5 years ago
You're pronouncing "matrix" incorrectly.
@MrCmon113 5 years ago
No, you are, because of the vowel shift in the English language. A German person, for example, would pronounce it as he does.
@PS-eu6qk 4 years ago
Who gives a f**k. You pronounce Chicago as "shikago" and chimes as chimes. There is plenty of other stupidity in this language.
@kaustubhkeny1140 3 years ago
Bouncer.
@devendraalawa4173 4 years ago
(Translated from Hindi) It's wrong.
@Seff2 4 years ago
Two minutes in and I understood absolutely nothing. Waste of time.
@sally1917 3 years ago
Maybe you need a prerequisite course first.
@harishr5620 5 years ago
You are just news-reading the topic, not teaching... -_-
@dariosilva85 5 years ago
Matrix is pronounced May-Trix.
@CodeEmporium 5 years ago
I'm partly from India and the States, so my pronunciations and metrics are all over the place. I'll be more consistent.
@PS-eu6qk 4 years ago
What does it matter, you idiot? Why is Chicago pronounced "shikago" and chimes as chimes? English is a faulty language. No wonder NASA scientists claimed that English as a language is not suitable for artificial intelligence/NLP but Sanskrit is.
@PS-eu6qk 4 years ago
@@CodeEmporium It really annoys me when native English speakers pick on non-native English speakers. You don't need to justify how you say "matrix." Can native English speakers, or native speakers of any other language, pronounce Indian languages like Sanskrit and/or Tamil properly? Of course not. Don't be apologetic.
@PS-eu6qk 4 years ago
How do you justify "magic" being pronounced "ma-gic" but "matrix" as "may-trix"?
@eskedarayele4430 2 years ago
Oh my God, thank you. I always found it hard to pronounce.
@ejomaumambala5984 4 years ago
Too many gross math mistakes (some of them pointed out in other comments). You need to read more/better material. Please don't post any more misleading and incorrect videos.
@victorzurkowski2388 2 years ago
Why not pronounce it /ˈmātriks/?
@CodeEmporium 2 years ago
Cuz phonics and I never got along. I have wartime flashbacks from the 3rd grade