Logistic Regression Indepth Maths Intuition In Hindi

101,652 views

Krish Naik Hindi

A day ago

Comments: 71
@sairabano7968 2 years ago
Thanks, Krish, for making videos in Hindi. You always make things easy to understand.
@parultiwari8734 10 months ago
I wish I could add thousands more likes from my side. Such a great explanation! Thank you, sir!
@kmishy 1 year ago
After a year, today I finally understood why we have the log term in the cost function of logistic regression. 22:00
@SachinKumar-zl6ku 2 years ago
You are doing amazing work, man.
@hades840 11 months ago
23:22 is something to keep in mind, because I am very bad with logs.
@SachinSharma-hv3wm 2 years ago
Thank you so much, Krish sir, for making videos in Hindi. Your way of explaining is very easy; you make even complex things simple. 😊😊
@UmerFarooq-zv1ky 2 months ago
The explanation is good, but the explanation by Nitish sir (CampusX) is on another level.
@rajeevnayantripathi5370 2 months ago
True
@himanshugamer1888 11 months ago
Superb explanation, sir ❤❤
@kiddo7094 2 years ago
Really enjoyed it; quick and understandable.
@osamaosama-vh6vu 2 years ago
Great explanation, thank you, dear sir. Be happy 😍
@poizn5851 2 years ago
Thank you so much, this one cleared all my droughts.
@manandeepsinghmatta 7 months ago
Bro, it's "doubts", not "droughts".
@chaotic_singer13 7 months ago
The intuition is good, but it would help if you could also give us a proper derivation and the thought process, i.e. how we come to think this way. That would make it deep!
@sankhadipbera9271 4 months ago
quality content ❤‍🔥❤‍🔥
@aradhyakanth8409 2 years ago
Thank you, sir.
@arshad1781 2 years ago
Nice 👍
@programmingcodewithmukesh2138 1 year ago
Thank you, sir. So helpful for me.
@singhramniwassinghsinghram7676 2 years ago
Very helpful video.
@aradhyakanth8409 2 years ago
Sir, for classification we have classifier models, so why is it called logistic regression?
@atulkadam6345 2 years ago
You can use any model, whichever gives you the best performance w.r.t. the training and testing data.
@way_to_jannah56 2 years ago
Logistic regression is for classification problems; its name says regression, but it is actually a classifier.
@Ram_jagat 1 year ago
@way_to_jannah56 Exactly.
@prabhatupadhyay7526 10 months ago
Because in logistic regression we use the sigmoid function, and the sigmoid returns values between 0 and 1.
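To make the point concrete, here is a minimal NumPy sketch (a toy illustration, not code from the video) showing that the sigmoid squashes any real-valued score into the open interval (0, 1):

import numpy as np

def sigmoid(z):
    # Maps any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(scores))
# [~4.5e-05  0.269  0.5  0.731  ~0.99995] -- always strictly between 0 and 1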
@prakharagarwal9448 2 years ago
Krish, when will the next community session start?
@sohildoshi2655 2 years ago
You are a legend!!
@HAVINOSH 1 year ago
Do we not need to square the last equation?
@parul15137 1 year ago
No.
@uroojmalik8454 1 year ago
@parul15137 Why?
@beerajsaikia 20 days ago
@uroojmalik8454 Because squared error combined with the sigmoid makes the cost function non-convex.
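A quick numeric sketch of this claim (a toy one-feature example, assuming NumPy; not from the video): scan candidate weights and estimate the curvature of each cost with second differences. Negative curvature anywhere rules out convexity, and only the squared-error cost shows it:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data with 0/1 labels.
x = np.array([-4.0, -2.0, -1.0, 1.0, 2.0, 4.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

ws = np.linspace(-10, 10, 401)  # candidate weights (no bias term, for simplicity)
eps = 1e-12

mse, logloss = [], []
for w in ws:
    p = np.clip(sigmoid(w * x), eps, 1 - eps)  # clip to avoid log(0)
    mse.append(np.mean((p - y) ** 2))
    logloss.append(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Second differences approximate the second derivative; a clearly negative
# value means the curve bends downward there, so the cost cannot be convex.
print("squared error has negative curvature:", (np.diff(mse, 2) < -1e-9).any())    # True
print("log loss has negative curvature:", (np.diff(logloss, 2) < -1e-9).any())     # False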
@krishj8011 6 months ago
Great tutorial.
@beingaiiitian4559 8 months ago
9:48
@Deeprfc12346 6 months ago
Brother, please make a video on Text Mining and Sentiment Analysis.
@AmmarAhmedSiddiqui 11 months ago
While explaining how to get from a local minimum to the global one, you dodged the question!
@sabbiruddinakash7181 7 months ago
Thank you, sir.
@jannatunferdous103 3 months ago
Pass = 1 and Fail = 0 is fine, but what would a value higher than 1 mean? And how can study hours be less than 0? Time cannot be negative.
@sureshsingam7291 2 years ago
The maths for logistic regression you uploaded in the ML playlist is completely different from the Hindi playlist. Which one is correct? 🙆‍♂️😰😰
@tammy4994 1 year ago
Even I had the same confusion. @krishnaik, could you please clarify?
@praneetnayak6757 2 years ago
In that case, what does "maximum likelihood" mean?
@rehmanahmadchaudhry2548 1 year ago
Maximum likelihood is simply used to estimate the parameters, i.e. the coefficients; these coefficients are then used in the odds and log-odds.
@anamitrasingha6362 1 year ago
The likelihood of the parameters means: what is the probability of having observed the particular distribution of the dataset you have right now, given that I choose a particular set of parameters? Maximum likelihood estimation says you want to find the set of parameters that maximises the probability of having observed that distribution of the dataset. You do that by taking the gradient of the likelihood (or log-likelihood) function with respect to the parameters, equating it to 0, and solving for the parameters.
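As a tiny illustration of that recipe (a made-up coin-flip example, not from the video): for i.i.d. Bernoulli data, setting the gradient of the log-likelihood to zero gives theta_hat = mean(y), and a brute-force grid search finds the same maximiser:

import numpy as np

y = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # 7 successes out of 10

def log_likelihood(theta, y):
    # log L(theta) = sum over i of [ y_i*log(theta) + (1 - y_i)*log(1 - theta) ]
    return np.sum(y * np.log(theta) + (1 - y) * np.log(1 - theta))

thetas = np.linspace(0.01, 0.99, 99)
lls = np.array([log_likelihood(t, y) for t in thetas])
print("grid-search maximiser:", thetas[lls.argmax()])  # ~0.7
print("gradient = 0 solution:", y.mean())              # 0.7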
@ankurgbpuat 1 year ago
Please tell us why a log function is used as the cost function (if you know at all).
@shaileshkumar-rg9tg 1 year ago
If you know, we are all ears.
@ankurgbpuat 1 year ago
@shaileshkumar-rg9tg Sure thing! It's done to ensure the cost function is convex.
@anamitrasingha6362 1 year ago
There are various flavors of ML algorithms. In logistic regression, the approach is to learn a discriminative function that classifies a point into a particular label, i.e. a function f: X -> Y such that f(datapoint) = class_label (belonging to the set Y). Since these class labels are discrete, if you try to use a mean-squared-error loss function you get a loss expression that is not a convex function. I have attempted a proof of this, though it involves a bit of intricate mathematics: you can show that the Hessian of the loss function is neither positive semi-definite nor negative semi-definite, hence the loss is neither convex nor concave. When you use the logistic loss function instead, you get a concave log-likelihood, and you basically do gradient ascent to reach the maximum of that concave function. These ideas come from convex optimization, which you can read about in Boyd if interested.
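Here is a minimal gradient-ascent sketch of that last point (toy data and an arbitrary learning rate, purely illustrative): the update climbs the concave log-likelihood, which is equivalent to gradient descent on the negative log-likelihood (log loss):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: a bias column plus one feature.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, -0.5],
              [1.0,  0.5], [1.0,  1.0], [1.0,  2.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w)
    grad = X.T @ (y - p)  # gradient of the log-likelihood w.r.t. w
    w += lr * grad        # ascent: step *up* the concave surface

print("weights:", w)
print("P(y=1|x):", sigmoid(X @ w).round(3))  # near 0 for the first three points, near 1 for the last three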
@rohitjagtap5228 2 years ago
Thanks a lot.
@GhulamMustafaSherazi 1 year ago
Sir, I want to ask how z = theta0 + theta1*X1 is converted to z = theta transpose x. Waiting for your reply.
@iamravimeena 1 year ago
Here theta = [theta0, theta1] and X = [1, X1]; we transpose the theta vector so that the multiplication gives a single value, which is our hypothesis. z = theta0 + theta1 * X1 is just another way of writing it, but z = theta transpose * X is the general form (in case we have multiple features, i.e. X.columns > 2).
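A small sketch of that equivalence with made-up numbers (note that for a 1-D NumPy array, .T is a no-op, so theta @ x already computes the dot product):

import numpy as np

theta = np.array([0.5, 2.0])  # [theta0, theta1]
x = np.array([1.0, 3.0])      # [1, X1] -- the leading 1 multiplies the intercept theta0

z_expanded = theta[0] + theta[1] * x[1]  # theta0 + theta1 * X1
z_vectorized = theta.T @ x               # theta transpose times x
print(z_expanded, z_vectorized)          # 6.5 6.5 -- same value either way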
@kulbhushansingh1101 2 years ago
Sir, you didn't teach the loss function for logistic regression here.
@kartikeysingh5781 11 months ago
The cost function takes the same gradient-descent form as in regression; you just replace the hypothesis with the logistic (sigmoid) hypothesis, and swap squared error for log loss so the cost stays convex.
@kulbhushansingh1101 11 months ago
@kartikeysingh5781 Thanks, Kartikey. I got it; that was a year ago 😂
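For reference, a minimal sketch of that cost with made-up numbers (the standard log-loss form; the clipping constant is just a common numerical guard): the hypothesis h is the sigmoid of theta^T x, and the cost is -[y*log(h) + (1-y)*log(1-h)] averaged over the samples:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(theta, X, y, eps=1e-12):
    h = np.clip(sigmoid(X @ theta), eps, 1 - eps)  # hypothesis, kept away from exact 0 and 1
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

X = np.array([[1.0, 0.5], [1.0, -1.5], [1.0, 2.0]])  # bias column + one feature
y = np.array([1.0, 0.0, 1.0])
print(log_loss(np.array([0.0, 1.0]), X, y))  # ~0.27 for these made-up numbers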
@shrutisingh9801 8 months ago
y = 0, y = 1; y is the predicted value, right?
@ooofrg4492 8 months ago
Timestamp?
@rahuldogra7171 1 year ago
Can you explain the probabilistic approach for logistic regression?
@arshiyakhan7757 1 year ago
Maximum likelihood.
@anamitrasingha6362 1 year ago
Let's say you have a 2-class classification problem. You assume that your random variable Y comes from a Bernoulli distribution, with each label being either 0 or 1. Y takes the value 1 with some probability theta (say); since the probabilities of a pmf add up to one, Y takes the value 0 with probability (1 - theta). Now you have a dataset consisting of features X (n features, say) and a target Y, with m observations (samples). You want to learn a function mapping f: X -> Y, and this f can be a probabilistic function as well. You define the probability of a particular datapoint taking the value y = 1 given its features x as Pr(y_i = 1 | x_i). What you want is the probability of having observed the values of Y across the dataset in that particular order (y_1 takes value 1, y_2 takes value 0, and so on, as given in the dataset) given the features X across the whole dataset (in the same order): Pr(Y | X; theta), read as the probability of having observed Y given X, parameterised by theta. You then define the likelihood function L(theta), the likelihood of theta, as the probability of having observed this Y given X. Since the observations are independent and come from the same Bernoulli distribution (i.i.d.), Pr(Y | X; theta) = product over all i of Pr(y_i | x_i; theta); this follows from the independence property of probability, which says Pr(A and B) = Pr(A) * Pr(B) when events A and B are independent. You now take a log on both sides to make the calculation easier, and it becomes the sum over all i of log(Pr(y_i | x_i; theta)). This is your log-likelihood. You then find the value of theta that maximises this expression, which is known as maximum likelihood estimation. I should also add that theta is assumed to be a function of w^T x, i.e. g(w^T x), where g is typically the sigmoid function; so when you take the gradient, you substitute this function into the log-likelihood expression and differentiate with respect to w.
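A tiny numerical sketch of why the log is taken (the per-sample probabilities here are just random placeholders for Pr(y_i | x_i; theta)): the raw likelihood is a product of thousands of numbers below 1 and underflows double precision, while the log-likelihood is a stable sum, and both peak at the same parameters because log is monotonic:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in per-sample probabilities Pr(y_i | x_i; theta) for 5,000 i.i.d. samples.
p = rng.uniform(0.4, 0.9, size=5000)

likelihood = np.prod(p)             # product of 5,000 numbers < 1
log_likelihood = np.sum(np.log(p))  # the same information, on the log scale

print(likelihood)      # 0.0 -- underflows float64
print(log_likelihood)  # about -2283: large and negative, but perfectly representable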
@pkumar0212 5 months ago
👌
@shivamgondkar6183 11 months ago
Hello sir, can you please provide the notes in PDF form? Thanks.
@abhishektiwari9673 2 years ago
Sir, this ML playlist is enough to learn machine learning completely.
@anonymousperson7054 2 years ago
Nope.
@NehaGupta-si5yo 1 year ago
@anonymousperson7054 What do you mean by this?
@barwalgayatri4655 2 months ago
Bessssssssssssssssttttttttttttttt
@cs_soldier5292 1 year ago
I didn't understand anything, sir.
@as8401 2 years ago
Sir, sorry, but everything went over my head.
@MovieOk-p8q 9 months ago
Rightly said.
@supriyasaxena5053 3 months ago
Thanks, I saw this comment before watching the whole video.
@as8401 3 months ago
@supriyasaxena5053 I'm glad I saved your time.
@Toppolitics642 2 days ago
Well, at least someone is out of the competition.
@as8401 2 days ago
@Toppolitics642 That's your misunderstanding...
@arshad1781 2 years ago
Thanks 👍