Dude is a multi-millionaire and took valuable time meticulously teaching students and us. Legend.
@The_Quaalude Жыл бұрын
Bro needs to train his future employees
@vikram-aditya11 ай бұрын
yes bro. i think the more people with the knowledge, the faster the breakthroughs in the field
@clerpington_the_fifth11 ай бұрын
...and FOR FREE.
@SaidurRahman-c8w6 ай бұрын
To people like him, money is really irrelevant. These people are really the top 0.00001% of the world; all that matters to them is how they can contribute to their respective field and help make this world a better place. Money is just a by-product of that passion.
@Eric-zo8wo Жыл бұрын
0:41: 📚 This class will cover linear regression, batch and stochastic gradient descent, and the normal equations as algorithms for fitting linear regression models.
5:35: 🏠 The speaker discusses using multiple input features, such as size and number of bedrooms, to estimate the price of a house.
12:03: 📝 The hypothesis is defined as the sum of features multiplied by parameters.
18:40: 📉 Gradient descent is a method to minimize a function J of Theta by iteratively updating the values of Theta.
24:21: 📝 Gradient descent updates the values in each step by calculating the partial derivative of the cost function.
30:13: 📝 The partial derivative of the term with respect to Theta J is equal to XJ, and one step of gradient descent updates Theta J accordingly.
36:08: 🔑 The choice of learning rate affects the algorithm's convergence to the global minimum.
41:45: 📊 Batch gradient descent processes the entire training set as one batch, which is a disadvantage when dealing with large datasets.
47:13: 📈 Stochastic gradient descent allows faster progress on large datasets but never fully converges.
52:23: 📝 Gradient descent is an iterative algorithm used to find the global optimum, but for linear regression the normal equation can be used to jump directly to the global optimum.
58:59: 📝 The derivative of a matrix function with respect to the matrix itself is a matrix with the same dimensions, where each element is the derivative with respect to the corresponding element of the original matrix.
1:05:51: 📝 The speaker discusses properties of matrix traces and their derivatives.
1:13:17: 📝 The derivative of the function is one-half times the derivative of (X Theta minus y) transpose times (X Theta minus y).
Recap by Tammy AI
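If anyone wants to poke at this, the batch gradient descent recapped above fits in a few lines of NumPy; the toy dataset and names (`theta`, `alpha`) are mine, not from the lecture:

```python
import numpy as np

# Toy housing data: m = 4 examples, features [x0 = 1, size (1000s sq ft), #bedrooms]
X = np.array([[1.0, 2.104, 3.0],
              [1.0, 1.600, 3.0],
              [1.0, 2.400, 3.0],
              [1.0, 1.416, 2.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])   # prices in $1000s

theta = np.zeros(3)
alpha = 0.1                                  # learning rate
m = len(y)
for _ in range(5000):
    # Batch gradient descent: every update touches the whole training set.
    grad = X.T @ (X @ theta - y) / m
    theta -= alpha * grad

cost = 0.5 * np.sum((X @ theta - y) ** 2)    # the lecture's J(theta), up to scaling
```

With a stable learning rate the cost decreases monotonically from its value at theta = 0.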
@Lucky-vm9dv Жыл бұрын
How much do we have to pay for your valuable overview of the entire class? Kudos to your efforts 👍
@sarkersaadahmed7 ай бұрын
Legend
@surajr47576 ай бұрын
@@Lucky-vm9dv Bro didn't read the last line, Recap by Tammy AI🙂
@arditsaliasi65152 ай бұрын
I cannot believe this is for free! What a time to be alive, so grateful! Thanks Stanford. Thanks dr. Andrew!
@Zesty_Soul26 күн бұрын
"What a time to be alive!" Ha ha ha. It's as hilarious as it is true 😅
@arditsaliasi651525 күн бұрын
@@Zesty_Soul 👍
@manudasmd2 жыл бұрын
Feels like sitting in a Stanford classroom from India... Thanks Stanford, you guys are the best.
@gurjotsingh3726 Жыл бұрын
for real bro. me sitting in Punjab would never have come across how the top uni profs are; this is surreal.
@hamirmahal10 ай бұрын
@@gurjotsingh3726 Sat sri akaal, good fortune!
@cadetmanishtiwari8694Ай бұрын
@@gurjotsingh3726 Brother can you tell me, what are the requirements for this course? Like is this for UG students or Masters level?
@abhishekagrawal8967 ай бұрын
🎯 Key points for quick navigation:

00:03 *🏠 Introduction to Linear Regression*
- Linear regression is a learning algorithm used to fit linear models.
- Motivation for linear regression is explained through a supervised learning problem.
- Collecting a dataset, defining notation, and building a regression model are important steps.

04:04 *📊 Designing a Learning Algorithm*
- The process of supervised learning involves inputting a training set and outputting a hypothesis.
- Key decisions in designing a machine learning algorithm include defining the hypothesis representation.
- Understanding the workflow, dataset, and hypothesis structure is crucial in creating a successful learning algorithm.

07:19 *🏡 Multiple Features in Linear Regression*
- Introducing multiple input features in linear regression models.
- The importance of adding features like the number of bedrooms to enhance prediction accuracy.
- Notation, such as defining a dummy feature for simplifying hypotheses, is explained.

13:03 *🎯 Cost Function and Parameter Optimization*
- Choosing parameter values Theta to minimize the cost function J of Theta.
- The squared error is used in linear regression as a measure of prediction accuracy.
- Parameters are iteratively adjusted using gradient descent to find the optimal values for the model.

24:18 *🧮 Linear Regression: Gradient Descent Overview*
Explanation of each step of gradient descent:
- Update Theta values for each feature based on the learning rate and the partial derivative of the cost function.
- Determining the learning rate in practical applications.
- Detailed explanation of the derivative calculation for one training example.

27:11 *📈 Gradient Descent Algorithm*
Derivation of the partial derivative with respect to Theta:
- Calculating the partial derivative for a simple training example.
- Update equation for each step of gradient descent using the calculated derivative.

33:11 *📉 Optimization: Convergence and Learning Rate*
Concepts of convergence and learning rate optimization in gradient descent:
- Explanation of "repeat until convergence" in gradient descent.
- Impact of the learning rate on convergence speed and efficiency.
- Practical approach to determining the optimal learning rate during implementation.

41:22 *📊 Batch Gradient Descent vs. Stochastic Gradient Descent*
Comparison between batch gradient descent and stochastic gradient descent:
- Batch gradient descent processes the entire training set as one batch.
- Stochastic gradient descent processes one example at a time for parameter updates.
- Stochastic gradient descent takes a slightly noisy path towards convergence.

47:22 *🏃 Stochastic Gradient Descent vs. Batch Gradient Descent*
- Stochastic gradient descent is used more in practice with very large datasets.
- Mini-batch gradient descent is another algorithm that can be used with datasets that are too large for batch gradient descent.
- Stochastic gradient descent is often preferred due to its faster progress on large datasets.

53:01 *📉 Derivation of the Normal Equation for Linear Regression*
- The normal equation allows direct calculation of the optimal parameter values in linear regression without an iterative algorithm.
- Deriving the normal equation involves taking derivatives, setting them to zero, and solving for the optimal parameters Theta.
- Matrix derivatives and linear algebra notation play a crucial role in deriving the normal equation.

57:52 *🧮 Matrix Derivatives and Trace Operator*
- The trace operator gives the sum of the diagonal entries of a matrix.
- Properties of the trace operator include the trace of a matrix being equal to the trace of its transpose.
- Derivatives with respect to matrices can be computed using the trace operator for functions mapping to real numbers.

01:12:49 *📈 Linear Regression Derivation Summary*
- Deriving the gradient of the cost function J(Theta) involves taking the derivative of a quadratic function.

01:15:19 *🧮 Deriving the Normal Equations*
- Setting the derivative of J(Theta) to 0 leads to the normal equations X^T X Theta = X^T y.
- Using matrix derivatives helps simplify the final equation for Theta.

01:17:09 *🔍 Dealing with a Non-Invertible X Matrix*
- When X^T X is non-invertible, it indicates redundant features or linear dependence.
- The pseudoinverse can provide a solution in the case of linearly dependent features.
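The last two sections (normal equations and the pseudoinverse for redundant features) are easy to verify numerically; a sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # design matrix: 20 examples, 3 features
theta_true = np.array([2.0, -1.0, 0.5])
y = X @ theta_true                       # noiseless targets, so recovery is exact

# Normal equations: X^T X theta = X^T y
theta = np.linalg.solve(X.T @ X, X.T @ y)

# Duplicate a column -> linearly dependent features -> X^T X is singular,
# but the pseudoinverse still yields a valid least-squares solution.
X_red = np.hstack([X, X[:, :1]])
theta_red = np.linalg.pinv(X_red) @ y
```

With the duplicated column there are infinitely many parameter vectors giving the same fit; `pinv` picks the minimum-norm one.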
@hmm77806 ай бұрын
Thanx Bro for this!!
@rohansharma8731Ай бұрын
thanks bhai
@calvin_713 Жыл бұрын
This course saves my life! The lecturer of the ML course I'm attending rn just goes through those crazy math derivations, presuming that all the students have mastered it all before 😂
@mahihoque45986 ай бұрын
My man was teaching as if these top-percentile brains had forgotten simple partial differentiation, and ours just doesn't even care 😢
@mees87114 ай бұрын
This sounds like you're taking the same course I am taking lmao
@magadize2 ай бұрын
@@mees8711 where?
@Zesty_Soul26 күн бұрын
Lol. This thread is funny. 😅
@Lachipiedubinks4 ай бұрын
This professor is such kind, clear and patient... The kind of professor I wanna be
@DagmawiAbate2 жыл бұрын
I am not good at math anymore, but I think math is simple if you get the right teachers, like you. Thanks.
@theLowestPointInMyLife3 ай бұрын
math seems to be the obfuscation of simple concepts with cryptic symbology
@salihalbayrak-es8ky20 сағат бұрын
@@theLowestPointInMyLife if you don't push yourself to study math, then it may seem like that. Life is way easier with math; you just have to sit down and learn it. What you're saying is like saying "academic language is elitist", which is an absolutely stupid thing to say, and I question whether these kinds of people have ever used a dictionary in their life.
@shalinisingh776Ай бұрын
Now that I have taken some math classes, I'm able to understand much more than earlier. Also, I'm really grateful to Stanford for uploading these lectures on YouTube for free. These will surely contribute to my self-learning ML journey. Thanks Stanford!!
@cadetmanishtiwari8694Ай бұрын
can you tell me which math lectures?
@LuisFuentes98 Жыл бұрын
Hey, can I point out what an amazing teacher Professor Andrew is?! Also, I love how he's all excited about the lesson he's giving! It just makes me feel even more interested in the subject. Thanks for this awesome course!
@tanishsharma136 Жыл бұрын
Look at Coursera, he founded that and has many free courses.
@k-bobmakabaka4420 Жыл бұрын
when u paying 12k to your own university a year just so you can look up a course from a better school for free
@paulushimawan5196 Жыл бұрын
University costs need to be as low as possible.
@_night_spring_ Жыл бұрын
while YouTube has unlimited free information, and courses better than the tech universities and colleges 🙂
@Call-me-Avi Жыл бұрын
Hahahahaahaha fucking hell thats what i am doing right fucking now.
@preyumkumar7404 Жыл бұрын
which uni is that...
@k-bobmakabaka4420 Жыл бұрын
@@preyumkumar7404 University of Toronto
@imad1996 Жыл бұрын
We learn, and teachers give us the information in a way that can help stimulate our learning abilities. So, we always appreciate our teachers and the facilities contributing to our development. Thank you.
@jeroenoomen8145 Жыл бұрын
Thank you to Stanford and Andrew for a wonderful series of lectures!
@토스트-d3r2 жыл бұрын
8:50 notations and symbols
13:08 how to choose theta
17:50 gradient descent
@dens3254 Жыл бұрын
52:50 Normal equations
@jaeen766510 ай бұрын
One of the greats, a legend in AI & Machine Learning. Up there with Prof. Strang and Prof LeCun.
@i183x4 Жыл бұрын
8:50 notations and symbols
13:08 how to choose theta
17:50 gradient descent
8:42 - 14:42 - terminologies completion
51:00 - batch
55:00 - problem set 1
57:00 - for p 0
@AshishRaj04 Жыл бұрын
notes are not available on the website ???
@anushka.narsima2 жыл бұрын
Thank you so much Dr. Andrew! It took me some time but your stepwise explanation and notes have given me a proper understanding. I'm learning this to make a presentation for my university club. We all are very grateful!
@Amit_Kumar_Trivedi2 жыл бұрын
Hi I was not able to download the notes, 404 error, from the course page in description. Other PDFs are available on the course page. Are you enrolled or where did you download the notes from?
We define a cost function based on the sum of squared errors. The job is to minimize this cost function with respect to the parameters. First, we look at (batch) gradient descent. Second, we look at stochastic gradient descent, which does not give us the exact value at which the minimum is achieved; however, it is much more effective at dealing with big data. Third, we look at the normal equation. This equation directly gives us the value at which the minimum is achieved! Linear regression is one of the few models for which such an equation exists.
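The stochastic variant mentioned above can be sketched like this (toy data and names are mine; one example per parameter update):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 2))])   # x0 = 1 intercept column
theta_true = np.array([1.0, 3.0, -2.0])
y = X @ theta_true + 0.1 * rng.normal(size=m)               # noisy linear targets

theta = np.zeros(3)
alpha = 0.01
for epoch in range(20):
    for i in rng.permutation(m):
        # SGD: update using a single training example i
        err = X[i] @ theta - y[i]
        theta -= alpha * err * X[i]
```

With a fixed learning rate theta ends up near, but not exactly at, the true parameters; that's the "never fully converges" behavior from the lecture.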
@xxdxma67002 жыл бұрын
I wish you sat next to me in class 😂
@rajvaghasia99422 жыл бұрын
Bro who named that equation as normal equation?
@alessandroderossi89302 жыл бұрын
@@rajvaghasia9942 the name "normal equation" comes from the idea of a perpendicular ("normal" to something means perpendicular to it). The normal equations describe the projection of the sampled data onto the line (in the case of LINEAR regression) that you draw as a starting point. This projection carries information about the distances between the real data (the sampled data) and the fitted line; to find the optimal curve that fits the data, you have to find weights and biases (in this video Theta0, Theta1 and so on) that minimize those distances. At the minimum, the residual vector is perpendicular (normal) to the fitted subspace, hence the name. You can do the minimization with gradient descent (computationally costly), stochastic gradient descent (computing the gradient from part of the data instead of the full loss), or directly with the normal equations. Understand? Here is an image from Wikipedia to see it better (the green lines are the famous distances): en.wikipedia.org/wiki/File:Linear_least_squares_example2.svg
@JDMathematicsAndDataScience2 жыл бұрын
@@rajvaghasia9942 because we're in the matrix now bro! ha. For real though. It's about the projection matrix and the matrix representation/method of acquiring the beta coefficients.
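The projection/orthogonality story in the replies above checks out numerically; a sketch with my own toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))                 # design matrix
y = rng.normal(size=30)                      # arbitrary targets

theta = np.linalg.solve(X.T @ X, X.T @ y)    # least-squares fit
y_hat = X @ theta                            # projection of y onto col(X)
residual = y - y_hat                         # the "green line" distances

# "Normal": the residual is perpendicular to every column of X.
orthogonality = X.T @ residual
```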
@JDMathematicsAndDataScience2 жыл бұрын
I have been wondering why we need such an algorithm when we could just derive the least squares estimators. Have you seen any research comparing the gradient descent method of selection of parameters with the typical method of deriving the least squares estimators of the coefficient parameters?
@adeelfarooq63194 ай бұрын
Linear regression and gradient descent are introduced as the first in-depth learning algorithm. The video covers the hypothesis representation, the cost function, and optimization using batch and stochastic gradient descent. The normal equation is also derived as an efficient way to fit linear models.

Highlights:

00:11 Linear regression is a fundamental learning algorithm in supervised learning, used to fit models like predicting house prices. The algorithm involves defining hypotheses, parameters, and training sets to make accurate predictions.
- Supervised learning involves mapping inputs to outputs, like predicting house prices based on features. Linear regression is a simple yet powerful algorithm for this task.
- In linear regression, hypotheses are defined as linear functions of the input features. Parameters like theta are chosen by the learning algorithm to make accurate predictions.
- Introducing multiple input features in linear regression expands the model's capabilities. Parameters like theta are adjusted to fit the data accurately.

13:01 Linear regression involves choosing parameters Theta to minimize the squared difference between the hypothesis output and the actual values for the training examples, achieved through a cost function J of Theta. Gradient descent is used to find the Theta values minimizing J of Theta.
- Explanation of input features X and output Y in linear regression, highlighting the importance of terminology and notation in defining hypotheses.
- Defining the cost function J of Theta in linear regression as the squared difference between predicted and actual values, leading to the minimization of this function to find optimal parameters.
- Introduction to gradient descent as an algorithm used to minimize the cost function J of Theta and find the optimal parameters for linear regression.

18:47 Gradient descent is a method used to minimize a function by iteratively adjusting parameters. It involves taking steps in the direction of steepest descent to reach a local optimum.
- Visualization of gradient descent involves finding values of Theta to minimize J of Theta, representing a 3D surface in 2D space.
- The gradient descent algorithm updates the parameters Theta using the learning rate and the partial derivative of the cost function with respect to Theta.
- Determining the learning rate in practice involves starting with a common value like 0.01 and adjusting based on feature scaling for optimal minimization.

27:26 Understanding the partial derivative in gradient descent is crucial for updating parameters efficiently. The algorithm iterates through the training examples to find the global minimum of the cost function, adjusting the Theta values accordingly.
- Explanation of the partial derivative calculation in gradient descent and its importance in updating parameters effectively.
- Expanding the concept of gradient descent to multiple training examples and the iterative process of updating Theta values until convergence.
- Illustration of how the cost function J of Theta behaves in linear regression models, showing a quadratic function without local optima, aiding efficient parameter optimization.

36:30 Gradient descent is a key algorithm in machine learning, adjusting parameters to minimize errors. It's crucial to choose the right learning rate to converge efficiently.
- Visualizing gradient descent with data points and parameter adjustments helps in understanding the algorithm's progression.
- Batch gradient descent processes the entire dataset at once, suitable for small datasets but inefficient for large ones due to extensive computation.
- The limitations of batch gradient descent in handling big datasets: the need for repeated scans leads to slow convergence and high computational cost.

44:58 Stochastic gradient descent updates parameters using one training example at a time, making faster progress on large datasets than batch gradient descent, which is slower but more stable.
- Comparison of stochastic and batch gradient descent: stochastic is faster on large datasets but doesn't converge, while batch is slower but more stable.
- Mini-batch gradient descent: using a subset of examples for faster convergence compared to one example at a time in stochastic gradient descent.
- Importance of decreasing the learning rate: reducing the step size in stochastic gradient descent for smoother convergence towards the global minimum.

53:39 The normal equation provides a way to find the optimal parameters of linear regression in one step, reaching the global optimum without an iterative algorithm. Linear algebra notation simplifies deriving the normal equation and matrix derivatives for efficient computation.
- The normal equation streamlines finding the optimal parameters of linear regression, bypassing iterative methods and jumping straight to the global optimum.
- Matrix derivatives and linear algebra notation simplify the derivation, reducing complex computations to a few lines.
- Understanding matrix functions mapping to real numbers and computing derivatives with respect to matrices enhances algorithm derivation and optimization in machine learning.

1:03:52 The video explains the trace of a matrix, its properties, and how it relates to derivatives in matrix calculus, providing examples and proofs. It also demonstrates how to express the cost function in matrix-vector notation for machine learning optimization.
- Properties of the trace of a matrix are discussed, including the fact that the trace of a matrix equals the trace of its transpose, and the cyclic permutation property of the trace of matrix products.
- The video delves into the derivative properties of the trace operator in matrix calculus, showing how the derivative of a function involving the trace of a matrix can be computed and proven.
- Expressing the cost function in matrix-vector notation is explained, demonstrating how to set up the design matrix and compute the cost function using matrix operations.

1:15:15 The video explains the normal equations in linear regression, where the derivative is set to 0 to find the optimal Theta value using matrix derivatives, leading to X transpose X Theta equals X transpose y.
- Explanation of the normal equations in linear regression and setting the derivative to 0 to find the optimal Theta value using matrix derivatives.
- Addressing the scenario of X being non-invertible due to redundant features, and the solution using the pseudoinverse for linearly dependent features.
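The trace facts from the 1:03:52 section can be sanity-checked in a few lines (my own matrices), including a finite-difference check of the lecture's identity d/dA tr(AB) = B^T:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 5))
B = rng.normal(size=(5, 4))
C = rng.normal(size=(4, 4))

tr_AB = np.trace(A @ B)
tr_BA = np.trace(B @ A)          # cyclic property: tr(AB) = tr(BA)
tr_C = np.trace(C)
tr_Ct = np.trace(C.T)            # tr(C) = tr(C^T)

# d/dA tr(AB) = B^T: check entry (1, 2) by central finite differences.
eps = 1e-6
E = np.zeros_like(A)
E[1, 2] = eps
num_grad = (np.trace((A + E) @ B) - np.trace((A - E) @ B)) / (2 * eps)
```

Since tr(AB) is linear in A, the finite-difference estimate matches B^T's (1, 2) entry essentially exactly.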
@claudiosaponaro4565 Жыл бұрын
the best professor in the world.
@Honey-sv3ek2 жыл бұрын
I really don't have a clue about this stuff, but it's interesting and I can concentrate a lot better when I listen to this lecture so I like it
@AHUMAN52 жыл бұрын
You can see his lecture on coursera about Machine learning. You will surely get what he is saying in this video.
@paulushimawan5196 Жыл бұрын
@@AHUMAN5 yes, that course is beginner-friendly. Everyone with basic high school math can take that course even without knowledge of calculus.
@ikramadjissa3702 жыл бұрын
Andrew Ng you are the best
@deepakbastola63025 ай бұрын
Dr. Ng is always the best... keep motivating us with classes like these.
@vseelix957 Жыл бұрын
my machine learning lecturer is so dogshit I thought this unit was impossible to understand. Now, following these on study break before midsem, I realize this guy is the best. I'd prefer my uni just refer to these lectures rather than making their own.
39:38 we're subtracting because, to minimize the cost function, the step must point opposite the gradient (the two vectors are at 180°). So we get a negative sign from there.
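A tiny numeric check of that minus sign (toy numbers mine): stepping against the gradient lowers J, stepping along it raises J.

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

def J(theta):
    # the lecture's cost: half the sum of squared errors
    return 0.5 * np.sum((X @ theta - y) ** 2)

theta0 = np.zeros(2)
grad = X.T @ (X @ theta0 - y)      # gradient of J at theta0
alpha = 0.01

J_down = J(theta0 - alpha * grad)  # step opposite the gradient (what GD does)
J_up = J(theta0 + alpha * grad)    # step along the gradient
```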
@Suliyaa_Agri8 ай бұрын
Andrew's voice is everything, and that blue shirt of his.
@ZDixon-io5ww2 жыл бұрын
47:00
51:00 - batch
55:00 - problem set 1
57:00 - for p 0
@chandarayi5673 Жыл бұрын
I love you Sir Andrew, you inspire me a lot haha
@diegoalias29352 жыл бұрын
Really easy to understand. Thanks a lot for sharing!
@massimovarano407 Жыл бұрын
sure it is, it is high school topic, at least in Italy
@gustavoramalho9454 Жыл бұрын
@@massimovarano407 I'm pretty sure multivariate calculus is not a high-school topic in Europe
@hyperfrogw109 күн бұрын
1:05:35 for f(A) = tr(AB) this works only if A and B^T have equal dimensionality
@ambushtunes Жыл бұрын
Attending Stanford University from Nairobi, Kenya.
@PhilosophyOfWinners Жыл бұрын
Loving the lectures!!
@zzh31511 ай бұрын
"Wait, AI is just math?" "Always has been"
@gameplayspark182116 күн бұрын
😪
@uekiarawari3054 Жыл бұрын
Difficult words:
cost function, gradient descent, convex optimization, hypothesis, f(x), target, J of theta (= cost/loss function), partial derivatives, chain rule, global optimum, batch gradient descent, stochastic gradient descent, mini-batch gradient descent, decreasing learning rate, parameters oscillating, iterative algorithm, normal equation, trace of a matrix
@tanmaychaudhary280118 күн бұрын
27:22 where can we get the lecture notes...if anyone knows the way or simply having it then pls share...🙏🙏🙏
@markwenzy2 күн бұрын
You can just google "cs229 lecture notes" and it will be at the top. Since CS229 has been taught for multiple years, there are multiple versions of the notes, but they're all 99% the same I believe, so I suggest you just use the one at the top, which is a PDF file.
@RHCIPHER2 жыл бұрын
this man is a great teacher
@李丰-w9h9 ай бұрын
at 40:10, what if we set the initial value at a point where the gradient is negative? Then we should increase theta rather than decrease theta?
@anikdas5677 ай бұрын
Even then the same update rule works. Why? The aim is to find a minimum, so u need to adjust the parameters (theta) so that the slope (aka gradient) approaches zero (the slope is zero at the minimum). The update is theta := theta - alpha * gradient: when the slope is positive, theta decreases; when the slope is negative, subtracting a negative number automatically increases theta. If u look at the graph of a quadratic, u will immediately see the logic: it does not matter whether u start on the positive- or negative-slope side, the minus sign in the update moves theta towards the minimum in both cases.
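To illustrate the reply above with a hypothetical 1-D example (not from the lecture): the same update rule moves theta toward the minimum from either side, because subtracting a negative gradient increases theta.

```python
# J(theta) = (theta - 5)^2 has its minimum at theta = 5.
def grad(theta):
    return 2.0 * (theta - 5.0)

alpha = 0.1

theta_left = 0.0                                            # gradient here is negative
theta_left_next = theta_left - alpha * grad(theta_left)     # theta increases

theta_right = 8.0                                           # gradient here is positive
theta_right_next = theta_right - alpha * grad(theta_right)  # theta decreases
```

Both updated values are closer to the minimum at 5.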
@raymundovazquezmusic216 Жыл бұрын
Can you update the lecture notes and assignments in the website for the course? Most of the links to the documents are broken
@stanfordonline Жыл бұрын
Hi there, thanks for your comment and feedback. The course website may be helpful to you cs229.stanford.edu/ and the notes document docs.google.com/spreadsheets/d/12ua10iRYLtxTWi05jBSAxEMM_104nTr8S4nC2cmN9BQ/edit?usp=sharing
@adi29raj Жыл бұрын
@@stanfordonline Where can I access the problem sets?
@salonisingla1665 Жыл бұрын
@@stanfordonline Please post this in the description to every video. Having this in an obscure reply to a comment will only lead to people missing it while scrolling.
@jerzytas Жыл бұрын
In the very last equation (normal equation, 1:18:06) Transpose(X) appears on both sides of the equation; can't this be simplified by dropping Transpose(X)?
@manasvi-fl6xq Жыл бұрын
no, because X is not necessarily a square matrix, so it doesn't have an inverse you could use to cancel it.
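Concretely (toy numbers mine): with more examples than features X is tall, y is generally not in its column space, so no theta solves X theta = y exactly and there is no X^(-1) to cancel with.

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])       # 3 x 2: not square, no inverse exists
y = np.array([1.0, 2.0, 4.0])    # not in the column space of X

theta = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations still solvable
residual = X @ theta - y                     # nonzero: X theta != y exactly
```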
@mortyrickerson6322 Жыл бұрын
Fantastic. Thank you deeply for sharing
@clinkclink7814 Жыл бұрын
Very clear explanations. Extra points for sounding like Stewie Griffin
@atalantinopieva7 ай бұрын
Hi, can a gentle soul explain to me why in the linear example of the house prices j = 2, but in the visualization of the algorithm at 37:25 we have 4 iterations? Should the number of iterations always be equal to the number of features?
@lyricalrohit7 ай бұрын
The number of iterations has no relation to the number of parameters to be calculated. Iteration depends on this: is there any possibility of decreasing J(θ) by varying the parameters (signified by the gradient of J(θ) with respect to the parameters at the given point)? If yes, do the next iteration; if the gradient is zero, stop iterating.
@akshitabindal244417 күн бұрын
how do we deal with variables of X related to the variables of y. do we have a predefined vector or some solution for it ? like if the variables are correlated co-vary or co-depend?
@jpgunman0708 Жыл бұрын
thanks a lot Andrew Ng (吴恩达), I learned a lot
@adekoyasamuel878817 күн бұрын
Is there a way of using stochastic gradient descent and then switching to batch once the fraction of training examples that haven't been seen drops below 20% or some arbitrary number? The arbitrary number might become a hyperparameter too.
@danilvinyukov20606 ай бұрын
1:17:31 Can't we just get rid of the X transpose on both sides of the equation? As I remember from linear algebra, if the same matrix multiplies both sides of an equation from the same side, it is redundant and can be removed. The result would be X theta = y => theta = X^(-1) y
@26d8 Жыл бұрын
The partial derivative was incomplete to me. Shouldn't we take the derivative with respect to theta of that term as well? Or is that term a constant? Shouldn't we go with the product rule?
@souravsengupta1311 Жыл бұрын
can't download the course class notes, pls look into it
@truszko91 Жыл бұрын
28:51, what are x0 and x1? If we have a single feature, say # of bedrooms, how can we have x0 and x1? Wouldn't x0 just be nothing? I'm confused. Or, in other words, if my Theta0 update relies on x0 for the update, but x0 doesn't exist, Theta0 will always be the initial Theta0...
@MahakYadav12 Жыл бұрын
The value of x0 is always 1, so theta0 can rely on x0 for the update. If we have a single feature then h(x) = x0*theta0 + x1*theta1 (which is ultimately theta0 + x1*theta1, since x0 = 1). theta0 can be seen as the intercept and theta1 as the slope if you compare it with the equation of a straight line, the price of the house being a linear function of the # of bedrooms.
@truszko91 Жыл бұрын
@@MahakYadav12 thank you!!
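The x0 = 1 convention from the reply above, in code (toy numbers mine): prepend a column of ones so theta0 becomes the intercept.

```python
import numpy as np

bedrooms = np.array([1.0, 2.0, 3.0, 4.0])        # the single real feature x1
price = np.array([150.0, 200.0, 250.0, 300.0])   # exactly 100 + 50 * bedrooms

# x0 = 1 for every example, so theta0 plays the role of the intercept.
X = np.column_stack([np.ones_like(bedrooms), bedrooms])
theta = np.linalg.solve(X.T @ X, X.T @ price)    # [theta0, theta1]
```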
@polymaththesolver5721 Жыл бұрын
Thank you Stanford for this amazing resource. Please, can I get a link to the lecture notes? Thanks
@HeisenbergHK Жыл бұрын
Where can I find the notes and other videos and any material related to this class!?
@milindbebarta22262 ай бұрын
Great lecture. I have one small doubt: why is the last term X_transpose.y and not y_transpose.X at 1:15:52, where the equation is set to zero?
@srishmaulik20692 ай бұрын
is there a way of getting a hand on the assignment descriptions that were done in this course to practice?
@bhavyasharma9784 Жыл бұрын
The pdf link to the problem set says Error Not found. Can someone help Please ?
@skillato90002 жыл бұрын
1:01:06 Didn't know Darth Vader attended this lectures
@just25794 ай бұрын
can someone please help me with the Last derivative Part @1:15:42 thank you
@putinscat1208Ай бұрын
Are the equations correct? Theta_j := Theta_j + something. Wouldn't the new value be Theta_(j+1)?
@Gatsbi10 ай бұрын
Had to study basic calculus and linear algebra at the same time to understand a bit, but I don't get it fully yet.
@johndubchak Жыл бұрын
Andrew Ng, FTW!
@techpasya97410 ай бұрын
Are the lecture notes available publicly for this? I have been watching this playlist, and I think the lecture notes would be very helpful.
@KorexeroK8 ай бұрын
cs229.stanford.edu/main_notes.pdf
@HarshitSharma-YearBTechChemica Жыл бұрын
Does someone know how to get the lecture notes? They are not available on stanford's website.
@logeshwaran153711 ай бұрын
Same issue for me alsoo....
@learnfullstack2 жыл бұрын
if board is full, slide up the board, if it refuses to go up, pull it back down, erase and continue writing on it.
@atefehebrahimi49584 ай бұрын
guys what is the website the professor mentions around 27:18
@surendranmurugesan Жыл бұрын
is the explanation at 40:00 correct?
@chhaysith Жыл бұрын
Dear Dr. Andrew, I saw your other video where the cost function for linear regression uses 1/2m, but this video uses 1/2. What is the difference? (footnote 16:00)
@treqemad Жыл бұрын
I don't really understand what you mean by 1/2m. However, from my understanding, the 1/2 is just for simplicity: when taking the derivative of the cost function, the power 2 gets multiplied into the equation and cancels with the half.
@googgab Жыл бұрын
It should be 1/2m, where m is the size of the dataset. That's because we'd like to take the average of the squared differences and not have the cost function's scale depend on the size of the dataset m. He explains it at 6:30 here: kzbin.info/www/bejne/kKvIdaeJoteFpbc
@aman-qj5sx Жыл бұрын
@@googgab It should be ok if J depends on m since m isn't changing?
@labiditasnim623 Жыл бұрын
same question
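For what it's worth, the 1/2 vs. 1/(2m) choice doesn't change which theta minimizes the cost, only the scale of the cost (which can be absorbed into the learning rate). A quick check with my own toy data:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 50
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 1))])
y = X @ np.array([2.0, 3.0]) + 0.5 * rng.normal(size=m)

def J_half(theta):          # 1/2 * sum of squared errors
    return 0.5 * np.sum((X @ theta - y) ** 2)

def J_half_m(theta):        # 1/(2m) version: same function, scaled by 1/m
    return J_half(theta) / m

# Both are minimized by the same theta: the least-squares solution.
theta_star = np.linalg.solve(X.T @ X, X.T @ y)
```

Perturbing theta_star in any direction increases both versions of the cost, since scaling by a positive constant doesn't move the minimizer.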
@TheLastStand226 Жыл бұрын
The notes from the description seem to have vanished. Does anyone have them?
@JunLiLin0616 Жыл бұрын
same problem
@ILLUSTRON-l5v2 ай бұрын
The best ever ❤❤❤
@parthjoshi58922 жыл бұрын
Would anyone please share the lecture notes? On clicking on the link for the pdf notes on the course website, its showing an error that the requested URL was not found on the server. It would really be great if someone could help me with finding the class notes.
@amaia704510 ай бұрын
I think I found them here: cs229.stanford.edu/main_notes.pdf
@tanmayshukla8660 Жыл бұрын
Why do we take the transpose of each row, wouldn't it be stacking columns on top of each other?
@anikdas5677 ай бұрын
I think he made a mistake when he defined the cost function at 16:17 (for "m" training examples). He gave 1/2 as the constant, which works fine for one training example, but it feels weird to use it for m training examples: we're adding "m" quantities and dividing by 2? Shouldn't it be an average? I searched Google for the cost function formula, and it showed 1/2m as the factor, which makes sense. The 2 is just a trick so that while differentiating it cancels with the power (which is 2); the 2 in the denominator can be absorbed by the learning rate (alpha). But the missing "m" in the denominator doesn't feel right. Can anyone please confirm or refute this?
@NehaGupta-xw2xg 7 months ago
Oh, thank you so much for pointing this out. I had a doubt about this too.
@lyricalrohit 7 months ago
It doesn't matter whether we introduce m in the denominator or not. For a given dataset, m is a constant, and the minimization you describe is driven by the numerator alone. The only effect of the m and the 2 is to shrink the step size in each iteration, which makes the computation take longer.
@ObaroJohnson-q8v 5 months ago
The formula looks like the variance formula; I'd be interested to know why we take 1/2 of the losses in the cost function. Could we just use the variance formula instead, or is there a theory behind that? Thanks
@victor3btn598 2 years ago
Simple and understandable
@riajulchowdhury4218 6 months ago
Where can I get the lecture notes? I can't access the files on the website.
@chideraagbasiere7868 1 year ago
May I ask, around 7:50, what does θ (theta) represent?
@wishIKnewHowToLove 1 year ago
It's hard, but everything that's worth doing is.
@anonymous-3720 1 year ago
Which book is he using? And where do we find the homework?
@spoiltbrat55 3 months ago
Why minimize the square of the difference instead of the difference itself (between predicted and actual)? These are the points someone like Andrew should highlight.
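One common answer: the raw difference can be negative, so errors would cancel out, and the absolute value has a kink at zero, while the squared error is smooth everywhere with a gradient that scales with the error (the lecture's later probabilistic interpretation also ties it to Gaussian noise). A tiny sketch comparing the two gradients, with made-up residual values:

```python
import numpy as np

residuals = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])  # made-up h(x) - y values

# Squared loss: d/dr (1/2) r^2 = r, smooth everywhere,
# so large errors get large corrections and steps shrink near the optimum
sq_grad = residuals

# Absolute loss: d/dr |r| = sign(r) (undefined at r = 0),
# so the step magnitude never shrinks as you approach the minimum
abs_grad = np.sign(residuals)

print(sq_grad)
print(abs_grad)
```

The constant-magnitude gradient of the absolute loss makes plain gradient descent oscillate around the minimum, which is one practical reason least squares is the default here.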
@atulbhoy237 3 months ago
1:10:10 I think the matrix multiplication here doesn't follow the rules.
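For anyone double-checking the dimensions, here is a quick NumPy sanity check of the normal-equation shapes, using random made-up data:

```python
import numpy as np

m, n = 5, 2   # 5 training examples, 2 features
rng = np.random.default_rng(0)
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])  # design matrix, shape (m, n+1)
y = rng.normal(size=m)                                      # targets, shape (m,)

# Normal equation: theta = (X^T X)^(-1) X^T y
# Shapes: (n+1, m) @ (m, n+1) -> (n+1, n+1);  (n+1, m) @ (m,) -> (n+1,)
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta.shape)  # (3,)
```

Using `solve` instead of explicitly inverting X^T X is the standard numerically safer route to the same result.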
@sivavenkateshr 1 year ago
This is really cool. ❤
@labiditasnim623 1 year ago
Why did he use 1/2 in the cost function and not 1/(2m)?
@ayeshakhatun3114 2 months ago
Does anyone know how I can get the resources?
@GJRahul-rr3uk 23 days ago
It's in the description
@promariddhidas6895 8 months ago
I wish I had access to the problem sets for this course
@akshat_senpai 8 months ago
Maybe on GitHub...
@samsondawit 1 year ago
Why is it that the cost function has the constant 1/2 before the summation and not 1/(2m)?
@ihebbibani7122 1 year ago
I think it's because he is taking one training example and not m training examples
@samsondawit 1 year ago
@@ihebbibani7122 ah I see
@lyndonyang1269 6 months ago
Does anyone know where to access the homework assignments for practice?
How do we study the applications? Is this only theory?
@HunzaiKing-n1x 1 year ago
Wondering if the lecture notes are also available to download from somewhere?
@williambrace6885 1 year ago
hey bro I found them: cs229.stanford.edu/lectures-spring2022/main_notes.pdf
@kag46 1 year ago
@@williambrace6885 Thanks a lot!
@aryangawade3473 1 month ago
Thank you so much 😊 @@williambrace6885
@shashankshekharjha6913 7 months ago
Okay, so the superscript i (1 to m) represents the number of features, right? Because here m = 2, and I don't understand why m = # of training examples
@michaelgreenhut180 5 months ago
So I come from game development, and I'm simulating a super simplified version of the batch gradient descent in Unity3D just for fun, so I can visualize it. One thing I'm noticing is that, for each X input, the algorithm seems to gradually make h(x) match the exact Y values. So if all the Y plots look like a zig zag, the h(x) plots will just mold over that zigzag and copy it exactly instead of forming a line through it. What am I doing wrong? Am I misunderstanding theta j?
@michaelgreenhut180 5 months ago
Oh, wait. I think I got it -- I was making two separate thetas FOR EACH input, when I should have only been using two thetas for the entire process. For some reason I thought each X vector had to have its *own* weight for house size and for #bedrooms.
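For anyone else hitting the same confusion: the key point is that one theta vector is shared across every training example, so gradient descent fits a single line/plane rather than memorizing each y. A rough sketch of batch gradient descent with a single shared theta (made-up housing-style numbers; feature scaling added for stability):

```python
import numpy as np

# Made-up housing-style data: columns are [size in sq ft, bedrooms]
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 3.0],
              [1416.0, 2.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])

# Feature scaling keeps gradient descent stable (sizes dwarf bedroom counts)
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.hstack([np.ones((len(X), 1)), X])   # prepend an intercept column

theta = np.zeros(X.shape[1])   # ONE theta vector shared by every example
alpha = 0.1
for _ in range(500):
    predictions = X @ theta                 # h(x) for all examples at once
    gradient = X.T @ (predictions - y)      # batch gradient over the whole set
    theta -= alpha * gradient / len(y)

print(theta)  # a single plane fit through the data, not a copy of y
```

With only one theta per feature, the model can't "mold over" the zigzag; it settles on the least-squares fit.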
@gauravpadole1035 1 year ago
Can anyone please explain what we mean by the "parameters" denoted by theta here?
@SteveVon7 1 year ago
Parameters are TRAINABLE numbers in the model, such as weights and biases, since the model's prediction is based on some combination of weight and bias values. So when the "parameters" theta are changed or "trained", it means that the weights and biases are changed or trained.
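Concretely, with made-up numbers: theta holds the trainable parameters, and the hypothesis is just their inner product with the feature vector:

```python
import numpy as np

# Hypothetical numbers: theta holds the trainable parameters of h(x) = theta^T x
theta = np.array([50.0, 0.1, 20.0])   # [bias, weight per sq ft, weight per bedroom]
x = np.array([1.0, 2104.0, 3.0])      # [intercept term, house size, bedrooms]

prediction = theta @ x                # 50 + 0.1*2104 + 20*3
print(prediction)                     # about 320.4
```

Training just means adjusting the three numbers in theta until predictions like this one match the data.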
@Nevermind1000 8 months ago
Anyone know where to get the notes for this lecture?
@studykibidi 8 months ago
Just search "Stanford machine learning notes"; one of the first results is the PDF from the cs229 website.
@Jewishisgreat 1 year ago
Knowledge is power
@WowLol-yw5px 2 months ago
Any suggestions on where I can learn how to work with matrices and their derivatives?
@Srujith111 1 month ago
There is a channel called 3Blue1Brown; you'll find the best linear algebra on the internet there
@puspjoc9975 7 months ago
Where can I get the full detailed notes? Anyone who knows, please reply.
@samrendranath 1 year ago
How do I access the lecture notes :( They have been removed from Stanford's website.
@Nobody2310 10 months ago
Has someone (possibly a newbie like me) gone through all the videos and learned enough to pursue an ML career or build a project? Wondering whether a paid class should be taken or these free videos are enough.
@orignalbox 8 months ago
I also want to know: have you gone through all the videos?
@ahmednesartahsinchoudhury2628 1 year ago
Does anyone know which textbook goes well with these lectures?