Lecture 5 - GDA & Naive Bayes | Stanford CS229: Machine Learning Andrew Ng (Autumn 2018)

294,947 views

Stanford Online

1 day ago

Comments: 66
@TekTakes 10 months ago
Learning from examples drawn from practical experience, from someone with probably more than a decade of it, is always wonderful. Thank you for the lecture!
@tampopo_yukki 1 year ago
This lecture helped me connect the dots and lay the groundwork for GDA. Thanks a lot!
@tomcat1184 7 months ago
nobody asked ya
@Harshtherocking 11 days ago
@@tomcat1184 chad
@debdeepsanyal9030 1 year ago
57:25 is where Andrew explains his preference and how he chooses between logistic regression and GDA
@dsazz801 1 year ago
Thank you for the amazing, high-quality lecture. Easy to understand, well explained, and beautifully organized.
@welcomethanks5192 1 year ago
GDA starts around 39:00; Naive Bayes at 1:04:00
@mango-strawberry 9 months ago
thanks dawg
@A_Random_Ghost 1 year ago
For anyone who finds this useful, here is a way to interpret the parameter formulas. Since phi is the probability of y being 1, its formula is the number of ones divided by the total number of examples, which is exactly what we get because the indicator function only counts the ones. The mean of the zero class is the sum of the examples in the zero class divided by the number of examples in that class; since the indicator function only counts those in the class, that's what the formula calculates. The same goes for the mean of the one class. The covariance formula is just the standard variance-covariance matrix of a random vector from probability theory.
@zilaleizaldin1834 1 month ago
Thank you for making these lectures for us!
@kendroctopus 1 year ago
Thank you so much for this lecture!
@TheBanananutbutton 11 months ago
plot twist: there's only one student: Batman
@abhinavchauhan4621 7 months ago
😂😂 Batman has so many doubts
@MosesMakuei-b5z 4 months ago
and Optimus Prime
@MaryamSadeghi-AI 2 months ago
😂😂😂
@mikegher879 1 year ago
Brilliant, Andrew. Thanks!
@WaseemKhan-dp2hi 2 years ago
At 27:15 he writes the optimum value of phi that maximizes the likelihood function. It shows that when a new person comes in, all we need is the ratio of positive cases to the total number of training samples. If this ratio is 30%, we would say (without investigating symptoms and without calculating x) that the new person is 30% likely to be a positive case. It looks very odd that, to declare a new case positive or negative, we don't consider the x (features) of the new test sample. Can someone correct me if my understanding is wrong?
@abhijitpai6085 2 years ago
This situation relates to the frequentist vs Bayesian statistics discussion; read about "priors" to understand how the two camps view computing probabilities.
@LZ-re9bm 1 year ago
Phi is the probability that he also denotes as p(y). By definition, this is the probability that a new person will be positive given no further information, so given no data. Basically, it is the best guess you have for whether a random person is positive before you know anything about them. It makes sense that, with no further information on the individual, your best guess is just the ratio in the general population. He is trying to clarify this with his sentence: "What is the chance that the NEXT patient that walks into your office has a malignant tumor?" The key word is "next": this is happening in the future, so you can't have any data on that person yet, though I agree it is easy to misunderstand. The probability that the person is positive given the features is denoted p(y|x), and he shows how to calculate it earlier in the lecture. Unsurprisingly, it does depend on the data x. So if you have features, you do use them to make your prediction, which is p(y|x).
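To make the distinction concrete, here is a hedged sketch of how the posterior p(y=1|x) combines the prior phi with the class-conditional Gaussian densities (assuming parameters fit as in the sketch above; SciPy is used only for the Gaussian pdf):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gda_posterior(x, phi, mu0, mu1, Sigma):
    """p(y=1 | x) by Bayes' rule: p(x|y=1) p(y=1) / p(x)."""
    p1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma) * phi
    p0 = multivariate_normal.pdf(x, mean=mu0, cov=Sigma) * (1 - phi)
    return p1 / (p1 + p0)
```

With no features observed, the best available guess is the prior phi; once x is observed, the class-conditional densities reweight that prior.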
@architsharma292 3 months ago
For some of these algorithms, when we are trying to find the local max/min, we use gradient descent / iterative approaches, but in some cases we set the derivative = 0. What explains this difference?
@jasperbutcher2596 1 year ago
Anyone know how to access the problem sets? Are they public material?
@Dimi231 10 months ago
I am also curious about that!!
@TheDebbuger 5 months ago
Google "cs229-2018-autumn"
@dunkelkron 5 days ago
It's behind a login, I believe; not accessible
@kevinshao9148 1 year ago
Great lecture! But does anyone know why, at 1:09:43, we need 2^10000 parameters here? Thanks a lot!
@OK-lj5zc 1 year ago
I think it's because in the multinomial model there would be 2^10000 possible values for x, and each value has an associated probability that x equals it. Actually, since all the probabilities sum to one, we only need (2^10000) - 1
@kevinshao9148 1 year ago
@@OK-lj5zc Thank you so much for your explanation, sir! It's actually a simple concept when explained this way; really appreciate it!
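The arithmetic behind this thread, as a quick sketch (the variable names are illustrative):

```python
# Modeling x in {0,1}^10000 with one big multinomial needs a probability
# for every possible vector: 2^10000 outcomes, hence 2^10000 - 1 free
# parameters, since they must sum to 1. Naive Bayes instead assumes the
# features are conditionally independent given y, so it needs only one
# Bernoulli parameter per word per class, plus the class prior.
n = 10_000                 # vocabulary size used in the lecture
full_joint = 2**n - 1      # without any independence assumption
naive_bayes = 2 * n + 1    # p(x_j=1|y=0), p(x_j=1|y=1), and p(y=1)
print(naive_bayes)         # 20001, versus a number with about 3011 digits
```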
@praveengupta2271 1 year ago
The lecture is amazing, but I want the notes or a PDF of whatever is discussed in this lecture
@OEDzn 8 months ago
cs229.stanford.edu/lectures-spring2022/main_notes.pdf
@alpaslankurt9394 11 months ago
Does anyone have the lecture notes, or can I get the lecture notes to study this course in depth?
@littleKingSolomon 10 months ago
Taking your own notes while watching this will greatly aid your understanding, imo.
@OEDzn 8 months ago
cs229.stanford.edu/lectures-spring2022/main_notes.pdf
@TheDebbuger 5 months ago
Google "cs229-2018-autumn"
@chirag6517 3 months ago
They are available online on the course website
@hibamajdy9769 1 year ago
I have a question: how do you come up with the decision boundary in GDA?
@littleKingSolomon 10 months ago
Perhaps: the boundary is the set of points where p(y=1|x) = 0.5, i.e., where the two classes are equally probable as p(y=1|x) varies with x.
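Under that reading, the boundary can be written in closed form. A minimal sketch, assuming a GDA fit with a shared covariance matrix (the names are my own):

```python
import numpy as np

def gda_boundary(phi, mu0, mu1, Sigma):
    """Coefficients of the linear boundary theta @ x + theta0 = 0.

    With a shared Sigma, p(y=1|x) = sigmoid(theta @ x + theta0),
    so p(y=1|x) = 0.5 exactly on this hyperplane.
    """
    Sinv = np.linalg.inv(Sigma)
    theta = Sinv @ (mu1 - mu0)
    theta0 = (0.5 * (mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1)
              + np.log(phi / (1 - phi)))
    return theta, theta0
```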
@humanity_first48 1 year ago
Can anyone help me out with this question: "why do we only use the standard Gaussian distribution"?
@rijrya 1 year ago
He explained it before; it's because of the central limit theorem
@SCHINMAYSHARMA 5 months ago
Optimus Prime on the last bench
@Tyokok 1 year ago
Thanks for the great lecture! One question, please, if I may (to anyone): is Naive Bayes the same method as Bayesian conditional probability modeling (a special case or so)? Or are they two completely separate modeling methods? Many thanks in advance!
@TheBanananutbutton 11 months ago
25:12 might answer your question, I'm not sure
@Tyokok 11 months ago
@@TheBanananutbutton Thank you for replying! And happy to discuss. No, it doesn't: 25:12 points out the difference between generative and discriminative models (though I am not sure what those names stand for in depth). I vaguely gathered that Naive Bayes and Bayesian modeling are the same approach; I just need something more solid to connect them fully.
@AdityaByju 10 months ago
Naive Bayes can be seen as a specific application of Bayesian modeling, where we assume conditional independence of the features given the class label. However, Bayesian modeling is a more general framework that can be applied to various types of models and data, not just classification tasks with independent features.
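A minimal sketch of that conditional-independence assumption in code, using a Bernoulli event model with Laplace smoothing (the function names are illustrative, not a reference implementation):

```python
import numpy as np

def nb_fit(X, y):
    """X: (m, n) binary word-presence matrix; y: (m,) 0/1 labels."""
    phi_y = np.mean(y == 1)
    # Laplace-smoothed estimates of p(x_j = 1 | y = k)
    phi_j1 = (X[y == 1].sum(axis=0) + 1) / ((y == 1).sum() + 2)
    phi_j0 = (X[y == 0].sum(axis=0) + 1) / ((y == 0).sum() + 2)
    return phi_y, phi_j0, phi_j1

def nb_predict(x, phi_y, phi_j0, phi_j1):
    # Conditional independence given y: log p(x|y) is a sum over features.
    ll1 = np.log(phi_y) + np.sum(x * np.log(phi_j1) + (1 - x) * np.log(1 - phi_j1))
    ll0 = np.log(1 - phi_y) + np.sum(x * np.log(phi_j0) + (1 - x) * np.log(1 - phi_j0))
    return int(ll1 > ll0)
```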
@danieljaszczyszczykoeczews2616 2 years ago
19:17 sub ZERO
@Danpage04 1 year ago
Watching it at 2x speed is a nice hack.
@Emanuel-oz1kw 4 months ago
41:09
@soumyadeepsarkar2119 1 year ago
25:14
@soumyadeepsarkar2119 1 year ago
1:13:41
@kyrokoracle5146 1 month ago
RIP mathematicians...
@AyushAgarwal-YearBTechElectron 2 years ago
I have 2 questions about the classroom, umm: why does he never use the last whiteboard (the bottom-most one), and how does the camera keep moving like that while recording?
@ragibshahriar187 2 years ago
glad you understood everything else
@bibinal1216 2 years ago
@@ragibshahriar187 🤣🤣🤣🤣🤣🤣
@ezepheros5028 2 years ago
Idk about the first question, but for the second they probably have a camera person
@shivanshmishra752 2 years ago
He never uses the last one because it is fixed: once you use the two moving boards and then go to use the last one, one of them will be hidden, so you would not be able to see everything written. Lol, I am answering the funny questions
@shivanshmishra752 2 years ago
@@ragibshahriar187 lol
@rahuramrt8051 1 year ago
hello
@rahuramrt8051 1 year ago
hi
@Nett6799 1 year ago
Why didn't he start from scratch? Like we already know these equations ☹️🙄
@anishkhatiwada2502 1 year ago
We should have already developed some prerequisites. He is teaching at Stanford, and the students already have a strong understanding of mathematics.
@CarbonSilicium 1 year ago
It's a graduate course in a field built entirely on math. I would say he explains much more of the basics than he has to. If you don't know some of the first-year math, you should probably start there.
@guywithcoolid 9 months ago
48:45
@yong_sung 1 year ago
57:25