5.1.1 Cost Function by Andrew Ng (6:44)
3.2.1 Cost Function by Andrew Ng (11:26)
3.1.3 Decision Boundary by Andrew Ng (14:50)
3.1.1 Classification by Andrew Ng (8:09)
2.2.1 Normal Equation by Andrew Ng (16:18)
2.1.1 Multiple Features by Andrew Ng (8:23)
1.2.5 Non-Photorealism by John Hart (6:10)
Comments
@Phi_AI · a month ago
This is an implementation of linear regression from scratch, in NumPy only. For an in-depth explanation of key concepts like the cost function and gradient descent: kzbin.info/www/bejne/rammgquQgNRnnrc
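For readers who want the gist without leaving the page, here is a minimal sketch of what such a from-scratch implementation typically looks like. The function name, learning rate, and iteration count are illustrative assumptions, not code taken from the linked video:

```python
import numpy as np

def fit_linear_regression(X, y, alpha=0.01, n_iters=1000):
    """Batch gradient descent for linear regression.
    X: (m, n) feature matrix; y: (m,) targets.
    Returns theta, including an intercept term."""
    m = X.shape[0]
    Xb = np.c_[np.ones(m), X]        # prepend a column of ones for the intercept
    theta = np.zeros(Xb.shape[1])
    for _ in range(n_iters):
        errors = Xb @ theta - y      # h_theta(x) - y for every training example
        theta -= alpha * (Xb.T @ errors) / m   # gradient of the 1/(2m) squared-error cost
    return theta
```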
@ShahramDerakhshandeh-sf7ld · a month ago
That's great. ❤
@DeltaJes-co8yu · a month ago
Unfortunately, I cannot follow the accent, and even the CC is not working.
@patriots7400 · a month ago
Why did you shorten your last name? I want to cite you!
@aihsdiaushfiuhidnva · 4 months ago
This is very good! But where did you get Andrew's presentation?
@ontarioinctransport8912 · 5 months ago
First comment, enjoy!
@adityavardhanjain · 5 months ago
I wonder how the complexity of the model might affect overfitting (or underfitting?).
@khoaphamquocanh4906 · 6 months ago
Where can I watch this old course? Thanks!
@betafishie · 10 months ago
First!
@ryanwang9699 · 10 months ago
Great video!
@abdelrahmane657 · a year ago
Thank you so much. It’s been very useful. 🙏👏
@helenareveillere338 · 3 years ago
Hello, do you know if I could listen to the sound of the MANIAC somewhere on the internet? I'm a sound editor working on an audio documentary about mathematics and literature, and I need to recreate the sound of the MANIAC. Thanks for your answer. Helena
@AnTran-ot3qk · 3 years ago
Great video, thank you so much, professor.
@shivani404sheth4 · 3 years ago
So nicely explained. Thank you!
@reachDeepNeuron · 3 years ago
Instead of opening with superscript and subscript terms, it would help to start with the gist of what this algorithm does and only then bring in the math and notation; that would hold the audience and motivate them to keep watching.
@shahadp3868 · 3 years ago
Nicely done, sir... what about one-vs-one?
@akashprabhakar6353 · 4 years ago
I did not get one thing. Suppose for a classification we take the class with the max probability; then we would be classifying only one class separately and the other two together as the rest. But how are we classifying all three separately?
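In case it helps later readers: in one-vs-all, a separate binary classifier is trained for each class (class k versus everything else), and at prediction time the class whose classifier reports the highest probability wins, so all three classes do get distinguished. A minimal sketch, assuming each trained classifier is a callable returning an estimate of P(class k | x); the names are illustrative:

```python
import numpy as np

def predict_one_vs_all(classifiers, x):
    """classifiers: list of K binary scorers, one per class.
    Each returns an estimate of P(class k | x); argmax picks the label."""
    scores = np.array([clf(x) for clf in classifiers])
    return int(np.argmax(scores))
```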
@samueldushimimana3831 · 4 years ago
Well done, Andrew!
@nawabengineering4388 · 4 years ago
Well explained, but why is it called a cost function? And taking 1/2 is not clear: why, and why not take the square root?
@ditdit-dahdah-ditdit-dah · 3 months ago
The cost function is also called the loss function; the terms are synonyms. Division by m or 2m is interchangeable: what we really care about is a model that produces the least error, not the value of the loss function itself. For regression, cost functions come in three common forms: mean error, mean squared error, and mean absolute error. Why so many? Because a data set may have both negative and positive errors, taking the mean directly may let the +/- errors cancel out, while squaring can be troublesome if you have outliers. In these videos, Andrew can be seen using all three in regression settings. Note: the cost value is not the only parameter needed to conclude whether a model is good.
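To make the three error measures in that reply concrete, here is a minimal NumPy sketch (the function names are illustrative):

```python
import numpy as np

def mean_error(y_pred, y_true):
    # +/- errors can cancel out, which is why this is rarely used alone
    return np.mean(y_pred - y_true)

def mean_squared_error(y_pred, y_true):
    # squaring removes the sign but amplifies outliers
    return np.mean((y_pred - y_true) ** 2)

def mean_absolute_error(y_pred, y_true):
    # sign-free and more robust to outliers than MSE
    return np.mean(np.abs(y_pred - y_true))
```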
@nawabengineering4388 · 4 years ago
Everybody in this ML field points people to Python; you are the first one who referred to Octave. Why is that?
@jackmcgaughey4388 · 3 years ago
I know, right?
@elbrenantonio5256 · 4 years ago
Any video on multiclass entropy and entropy? Please show a sample calculation. Thanks.
@bismeetsingh352 · 4 years ago
Don't you have legal issues for copying content from Coursera?
@thesteve0345 · 4 years ago
I am pretty sure Coursera copied from his content.
@GelsYT · 4 years ago
He is Coursera.
@jaideepsingh7955 · 3 years ago
@GelsYT Hahaha, true.
@randomcowgoesmoo3546 · 4 years ago
Thanks Andrew Yang, I'll definitely vote for you.
@LouisDuran · 2 months ago
Wrong dude: the other guy wants to give you UBI. This guy wants to give you OVA.
@ZombieLincoln666 · 4 years ago
audio quality is shit
@swathys7818 · 4 years ago
Thank you for the great explanation, sir!
@sanketneema286 · 4 years ago
Thank you, sir.
@truettbloxsom8484 · 4 years ago
Just wanted to say these videos are amazing! Thank you!
@dream191919 · 4 years ago
There is an error in the example Andrew used here to demonstrate the normal equation. X is a 4-by-5 matrix, which makes the system underdetermined and results in X-transpose times X having no inverse, so the normal equation cannot be computed.
@bonipomei · 3 years ago
X is 4x5 and X(transpose) is 5x4. Therefore, X(transpose)*X = 5x4 * 4x5 which results in a 5x5 matrix, which has an inverse.
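For anyone checking the dimensions themselves: a 5x5 product XᵀX built from a 4x5 X has rank at most 4, so it is in fact singular, which is why implementations typically use the pseudoinverse; it still returns a (minimum-norm) solution where a plain inverse would fail. A minimal sketch in NumPy:

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form linear regression: theta = pinv(X^T X) X^T y.
    np.linalg.pinv handles a singular X^T X (e.g. more parameters
    than examples) where np.linalg.inv would raise an error."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```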
@IamPdub · 5 years ago
Great video! Can you make a video on stemming with multiclass classification?
@heller4196 · 5 years ago
Get this man a good camera and mic.
@namangupta8609 · 3 years ago
Sending you the bill...
@abdelrahmane657 · a year ago
@namangupta8609 Did you receive the bill? Or are you the only YouTuber watching this video?
@ashwiniabhishek1504 · 5 years ago
Great video
@punkntded · 5 years ago
What does theta represent?
@ofathy1981 · 5 years ago
The learning rate.
@ByteSizedBusiness · 5 years ago
@ofathy1981 Alpha is the learning rate in gradient descent... theta is a parameter, like the weights in a NN.
@MelvinKoopmans · 5 years ago
@ofathy1981 Theta does not represent the learning rate; it represents the parameters of the model (e.g. the weights). So P(y | x; θ) translates to English as "the probability of y given x, parameterized by θ".
@amirdaneshmand9743 · 2 years ago
Those are the parameters of the logistic classifier, which is trained separately for each case.
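Summing up the thread: θ is the parameter (weight) vector and α is the learning rate, and both appear together in a gradient-descent update. A minimal sketch for logistic regression, with illustrative names rather than code from the lecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha):
    """One gradient-descent update for logistic regression.
    theta: model parameters (weights); alpha: learning rate."""
    m = X.shape[0]
    h = sigmoid(X @ theta)        # h_theta(x) = P(y = 1 | x; theta)
    grad = (X.T @ (h - y)) / m    # gradient of the logistic log-loss
    return theta - alpha * grad
```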