#10 Machine Learning Specialization [Course 1, Week 1, Lesson 3]

  45,212 views

DeepLearningAI

1 day ago

The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.
This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.
This video is from Course 1 (Supervised Machine Learning: Regression and Classification), Week 1 (Introduction to Machine Learning), Lesson 3 (Regression Model), Video 2 (Linear regression model part 2).
To learn more and access the full course videos and assignments, enroll in the Machine Learning Specialization here: bit.ly/3ERmTAq
Download the course slides: bit.ly/3AVNHwS
Check out all our courses: bit.ly/3TTc2KA
Subscribe to The Batch, our weekly newsletter: bit.ly/3TZUzju
Follow us:
Facebook: / deeplearningaihq
LinkedIn: / deeplearningai
Twitter: / deeplearningai_

Comments: 18
@owurakuagyekum3871
@owurakuagyekum3871 1 month ago
Dr. Andrew Ng, as one of the top data scientists in the world, has one of the best teaching strategies in data science. Keep up the good work, Doc. This lesson is worth it 🙌🏽❤️❤️
@JohnSmith-hr7fl
@JohnSmith-hr7fl 3 months ago
To anyone who's watching and didn't fully understand the formula, let me explain it in more detail, since his explanation was very brief: with f(x) = wx + b, we want to find y, the output, which is f(x). The "b" is the y-intercept: the value of f(x) when the input x is zero. Now, you may ask, what is "w"? w simply refers to the weight, which is multiplied by our input x. The weight is a value that tells the model how much emphasis to put on that particular input when making predictions: the lower the value, the less emphasis; the higher the value, the more emphasis.
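To make this concrete, here's a minimal Python sketch of the model above; the values of w and b are made up purely for illustration:

# A tiny sketch of the linear model f(x) = w*x + b.
def f(x, w, b):
    return w * x + b

w = 0.5   # weight: how much emphasis the model puts on the input x
b = 2.0   # bias / y-intercept: the output when x = 0
print(f(10, w, b))  # 0.5 * 10 + 2.0 = 7.0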
@enes97005
@enes97005 2 months ago
Thanks
@ambrishchaurasia
@ambrishchaurasia 14 days ago
💯
@divyakarlapudi
@divyakarlapudi 8 months ago
Loved it!
@jamesrobisnon9165
@jamesrobisnon9165 1 year ago
Some resources, including this video, say "univariate" means a regression model with one input variable, while others say it means a regression model that returns only one predicted output. Which one is correct?
@johnm6495
@johnm6495 8 months ago
Both. Let's say they're not: that would mean we have a model that, given an input, produces multiple predicted outputs ---> in other words, multiple y-hats. However, a function by definition must have exactly one output for each input. If we put the line or curve produced by the function on a graph, there's a simple test called the Vertical Line Test that confirms whether that "line" is a function of x. If you can find a location anywhere on the graph where you could draw a vertical line that intersects the function's line more than once, then it "fails" the test and is not considered a traditional function. Put another way, we call the function a "function of x" because x is the variable that determines the result of the function; the output of the function, conversely, is determined by x. **So a univariate regression model is a model that has one input variable and one output variable.**
This might sound confusing, so let's use the house-price example from the video:
- each mark on the graph (the things shaped like a tilted "+" sign) represents a house of x square feet that we *know* was sold for y dollars
- we have more houses we want to sell --> BUT while we *do* know the square footage of each house to be sold...
- we *don't* know how much they would sell for, since they haven't been sold yet
- to predict how much a house *might* sell for, we use some clever math that predicts the sale price based on how much those aforementioned "known" houses *did* sell for
- that math results in a formula that predicts the house price from its square footage --> when plotted on a graph, the output of this formula draws a straight line
- that formula is called a function
- NOW, let's say we have a 1300-square-foot house we want to sell
- if we put 1300 into the formula, we get a certain output, a prediction of how much the house would sell for --> in this case, let's say the formula outputs $100,000
- but what if the formula *also* produced an output of $200,000?
- which price is right? Could we sell the house for $200,000? That's a lot more money, but if the prediction was supposed to be $100,000, then nobody will buy this overpriced house when other similar-sized houses are being sold for far less
- Thus, a function with multiple outputs wouldn't be very useful, because of this ambiguity
- So again, the function *has* to have exactly one output, and a univariate regression must also have only one input
However, KEEP THIS IN MIND:
- what happens if we have sample data with the same x but different y?
- in other words, what does it mean if we *know* that one 1200-square-foot house sold for $60,000, but ANOTHER 1200-square-foot house sold for $80,000? --> that's two different outputs, two different y's; what does this mean?
- nothing has changed: our function still works and will still make predictions based on all of the sample data
- this is because the sample data itself is *not* a function; the points are just individual examples we use to train our function
This will make more sense once the math is explained in the videos. Hope this helps.
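To make the explanation above concrete, here's a minimal Python sketch of a univariate regression fit by ordinary least squares on made-up house data; note that it returns exactly one predicted price per input:

# Univariate linear regression: one input (square feet), one output (price).
# The data below is invented for illustration; two houses share x = 1200.
xs = [1000, 1200, 1200, 1500, 2000]        # square feet
ys = [50000, 60000, 80000, 90000, 120000]  # sale prices in dollars

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares solution for a single input variable.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def f(x):
    return w * x + b  # exactly one y-hat per x

print(f(1300))  # a single, unambiguous predicted price for a 1300 sq ft house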
@sadiamanzoor6791
@sadiamanzoor6791 1 year ago
How can I access that?
@MuhammadAsif-nx7om
@MuhammadAsif-nx7om 10 months ago
How do we find the values of w and b? What is the criterion for choosing them?
@PramodShetty
@PramodShetty 10 months ago
w is the weight and b is a constant (the intercept). The values are found by the machine learning algorithm. You can also calculate w and b from a linear equation, but you need multiple data points. In simple words, y = wx + b represents a line on a graph, and the data points are the x and y values along that line; any point on the line has x and y values that satisfy the equation.
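If you're curious, here's a rough Python sketch of one common way a learning algorithm finds w and b: gradient descent on the mean squared error (the data and learning rate below are made up for illustration):

# Finding w and b by gradient descent on the mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # generated from y = 2x + 1, so we expect w ~ 2, b ~ 1

w, b = 0.0, 0.0
lr = 0.01                    # learning rate (step size)
n = len(xs)
for _ in range(10000):
    # Gradients of (1/n) * sum((w*x + b - y)^2) with respect to w and b.
    dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
    w -= lr * dw
    b -= lr * db

print(w, b)  # should end up close to 2.0 and 1.0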
@user-aru353
@user-aru353 5 months ago
Can anyone please tell me how to access the optional lab?
@tielessin
@tielessin 5 months ago
I would wager it's available via the course, and there's a good chance the course has a free enrollment option. You can find the link in the description. Edit: Seems like it's a paid course, but it has a 7-day free trial.
@sadiamanzoor6791
@sadiamanzoor6791 1 year ago
Where is the lab code?
@bhubanmondal05
@bhubanmondal05 1 year ago
You have to enroll on Coursera to get access 🥲
@user-kj5vw9ri5c
@user-kj5vw9ri5c 1 year ago
Same question here; I didn't find it in the Jupyter notebook.
@rubayetalam8759
@rubayetalam8759 1 year ago
@@user-kj5vw9ri5c It's only available to people enrolled on Coursera.
@rubayetalam8759
@rubayetalam8759 1 year ago
@@user-kj5vw9ri5c You have to enroll in the course with your Coursera account. Hope it helps.
@AadityaGupta-cm6mj
@AadityaGupta-cm6mj 19 hours ago
@@rubayetalam8759 Is it free or paid?