Logistic Regression Cost Function (C1W2L03)

138,185 views

DeepLearningAI


Take the Deep Learning Specialization: bit.ly/3cmtNgK
Check out all our courses: www.deeplearni...
Subscribe to The Batch, our weekly newsletter: www.deeplearni...
Follow us:
Twitter: / deeplearningai_
Facebook: / deeplearninghq
Linkedin: / deeplearningai

Comments: 47
@hunter330 4 years ago
At 4:15, Andrew should say that we want -ylog(y_hat) "as small as possible" instead of "as big as possible". Source: His notes on Coursera.
@khaledadrani3184 3 years ago
+1
@mahsa5527 10 months ago
No, there is a minus sign before log(y_hat), so log(y_hat) should be as large as possible if we want the value of the loss function to be as small as possible.
@mukunthag8760 2 years ago
He is an extremely good teacher!
@rameshmaddali6208 5 years ago
Best video on loss and cost function explanation
@Morpho32 4 years ago
At 4:20, he says "we want -log(y_hat) to be as big as possible". That's not right: it's the loss function, so we want it to be as small as possible. That's why we want y_hat to be as big as possible, so that -log(y_hat) actually ends up as small as possible.
@morancium 3 years ago
I think you messed up the negative sign.
@sharmakartikeya 3 years ago
At 4:15, sir mistakenly says that we need -log(y_hat) to be as big as possible. It is the opposite. After all, -log(y_hat) is itself the loss for a given y_hat when y = 1. So for -log(y_hat) to be as small as possible (for y = 1), we want log(y_hat) to be as large as possible, and for that we want y_hat to be as large as possible (remember the graph of log x: it is a monotonically increasing curve). Remember that y_hat is nothing but the probability (produced by our model) of y = 1 for some weights and bias, so y_hat can never be greater than 1. So we want y_hat to be as close to 1 as possible.
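A minimal NumPy check of this point (my own illustration, assuming the loss from the lecture, L(y_hat, y) = -(y log y_hat + (1 - y) log(1 - y_hat))): for y = 1 the loss reduces to -log(y_hat), which falls toward 0 as y_hat approaches 1.
```python
import numpy as np

def logistic_loss(y_hat, y):
    # Per-example loss from the lecture: -(y*log(y_hat) + (1-y)*log(1-y_hat))
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# For y = 1 the loss is just -log(y_hat): it shrinks as y_hat approaches 1.
for y_hat in (0.1, 0.5, 0.9, 0.99):
    print(f"y=1, y_hat={y_hat:.2f} -> loss={logistic_loss(y_hat, 1):.4f}")
# prints roughly 2.3026, 0.6931, 0.1054, 0.0101
```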
@HabibuMukhandi 6 months ago
I don't think he mistakenly said it. Remember, loss is a bad thing, so we want as big a negative loss as possible!
@zs9510 4 years ago
Explained very well.
@1052boon 3 years ago
Thanks for the explanation :)
@skypickle29 4 years ago
At 0:22 we use w transpose. Why the transpose? The first sample is a column vector. How is w defined? Isn't it a column vector as well, in which case you only need to multiply x*w, not w transpose x?
@papayaspice1155 4 years ago
Explained in a comment on the previous video.
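For anyone else stuck here, a small NumPy sketch (my own illustration, assuming w and x are both (n_x, 1) column vectors as in the lecture): the transpose turns w^T x into a dot product, so z = w^T x + b is a single number rather than an elementwise product.
```python
import numpy as np

n_x = 3                       # number of features (chosen only for illustration)
w = np.random.randn(n_x, 1)   # weights as a column vector
x = np.random.randn(n_x, 1)   # one training example as a column vector
b = 0.5

# x * w would be elementwise (shape (3, 1)); w.T @ x is the (1, 1) dot product.
z = w.T @ x + b
y_hat = 1.0 / (1.0 + np.exp(-z))   # sigmoid of z gives the predicted probability
print(z.shape, y_hat.item())       # (1, 1) and a single probability in (0, 1)
```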
@ramazancesur6143 6 years ago
This course is great: a beginner-level introduction to neural networks for everyone.
@chitralalawat8106 5 years ago
Hey! Looks like you're seriously understanding what the speaker is saying.. Could you please help me with this... I want you to join my group so that we all can study further ..!!
@prafulbs7216 4 years ago
What is the difference between y_hat (ŷ) and y?
@nikilkumarr 4 years ago
y hat is the predicted value (as calculated from the logistic regression model) while y is the actual value (or value of the label). Pretty much, the error is the difference between what the model predicts and the actual value :)
@uqyge 7 years ago
great intro course
@chitralalawat8106 5 years ago
Hey! Looks like you're seriously understanding what the speaker is saying.. Could you please help me with this... I want you to join my group so that we all can study further ..!!
@louerleseigneur4532 3 years ago
Thank God, thanks sir.
@iammakimadog 3 years ago
2:27 technically you can still use MSE because MSE is convex
@andretelfer3678 2 years ago
I am struggling to understand this too... I hope this comes up in a later video
@e.galois4940 1 year ago
Nah, squared error is not convex with the sigmoid function, bro.
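A quick numerical check of this (my own toy setup, not from the video: one example with x = 1, y = 1, squared error on top of a sigmoid, viewed as a function of the weight w). A convex function never has a negative second difference, but this one does:
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Squared-error loss of a sigmoid unit on one example (x = 1, y = 1),
# viewed as a function of the weight w.
w = np.linspace(-6.0, 6.0, 2001)
mse = (sigmoid(w) - 1.0) ** 2

second_diff = np.diff(mse, 2)                        # discrete second derivative
print("min second difference:", second_diff.min())  # negative -> not convex
print("max second difference:", second_diff.max())  # positive
```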
@sandipansarkar9211 3 years ago
need to make notes
@prafulbs7216 4 years ago
Could someone explain the dimension of the weight matrix?
@papayaspice1155 4 years ago
Explained in a comment on the video before this one.
@StarContract 6 years ago
A neural network is trying to figure out how neural networks work at the moment.
@doubtunites168 5 years ago
how is your loss function so far?
@chitralalawat8106 5 years ago
Hey! Looks like you're seriously understanding what the speaker is saying.. Could you please help me with this... I want you to join my group so that we all can study further ..!!
@abhinavkommula4588 5 years ago
@@chitralalawat8106 stop trolling please
@chitralalawat8106 5 years ago
@@abhinavkommula4588 you mind ur own business
@abhinavkommula4588 5 years ago
​@@chitralalawat8106 oKay bUdDY mInd tHe tOxIciTy yEaH?
@meirgoldenberg5638 3 years ago
At 7:54, Andrew says that the next video would show that logistic regression can be viewed as a small neural net. However, the next video does not seem to fulfill this promise.
@ashwinilalawat604 5 years ago
Yes
@ashwinilalawat604 5 years ago
No
@rahulkumar-xl9pt 6 years ago
Why does the log function need to be large in both cases, y = 0 and y = 1?
@kangzheng1390 6 years ago
rahul kumar they are two different cases.
@sebastianrodriguezcolina634 6 years ago
What you want is for your cost function to be large when your prediction is far from the true value.
@chitralalawat8106 5 years ago
Hey! Looks like you're seriously understanding what the speaker is saying.. Could you please help me with this... I want you to join my group so that we all can study further ..!!
@justin119933 5 years ago
So as to keep the loss function as small as possible in both scenarios (check the writing in green in the video at 5:42).
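To make the two scenarios concrete, a small sketch (my own, using the loss written in the video, L(y_hat, y) = -(y log y_hat + (1 - y) log(1 - y_hat))): when y = 0 the loss is small only if y_hat is near 0, and when y = 1 only if y_hat is near 1.
```python
import numpy as np

def loss(y_hat, y):
    # L(y_hat, y) = -(y*log(y_hat) + (1-y)*log(1-y_hat)) from the video
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

for y in (0, 1):
    for y_hat in (0.01, 0.5, 0.99):
        print(f"y={y}, y_hat={y_hat:4.2f} -> loss={loss(y_hat, y):7.4f}")
# y=0: tiny loss at y_hat=0.01, huge at y_hat=0.99; y=1 is the reverse.
```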
@zhilinglin4648 5 years ago
Hi, I also had the same question and finally sorted it out after reading this post: towardsdatascience.com/optimization-loss-function-under-the-hood-part-ii-d20a239cde11
@syahirdev3193 4 years ago
my brain hurts...
@susnatodhar7197 5 years ago
Can someone tell me what 'e' is?
@caducoelho2221 5 years ago
e is Euler's number.
@caducoelho2221 5 years ago
www.mathsisfun.com/numbers/e-eulers-number.html
@ahasanhabibsajeeb1979 4 years ago
Why don't I understand you?