Lecture 5 - GDA & Naive Bayes | Stanford CS229: Machine Learning Andrew Ng (Autumn 2018)

273,460 views

Stanford Online

1 day ago

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/ai
Andrew Ng
Adjunct Professor of Computer Science
www.andrewng.org/
To follow along with the course schedule and syllabus, visit:
cs229.stanford....

Comments: 60
@zZTrungZz · 7 months ago
Learning with examples drawn from practical experience, from someone with probably more than a decade of experience, is always wonderful. Thank you for the lecture!
@debdeepsanyal9030 · 1 year ago
57:25 is where Andrew explains his preference and his way of choosing between logistic regression and GDA.
@welcomethanks5192 · 1 year ago
GDA starts around 39:00; Naive Bayes at 1:04:00.
@mango-strawberry · 5 months ago
thanks dawg
@tampopo_yukki · 1 year ago
This lecture helped me connect the dots and lay the groundwork for GDA. Thanks a lot!
@tomcat1184 · 4 months ago
nobody asked ya
@dsazz801 · 1 year ago
Thank you for the amazing, high-quality lecture. Easy to understand, well explained, and beautifully organized.
@A_Random_Ghost · 9 months ago
For anyone who finds this useful, here is a way to interpret the parameter formulas. Since phi is the probability of y being 1, its formula is the number of ones divided by the total number of examples, which is exactly what we get because the indicator function only counts the ones. The mean of the zero class is the sum of the examples in the zero class divided by the number of examples in that class; since the indicator function only counts those in the class, that is what the formula computes. The same holds for the mean of the one class. The covariance formula is just the standard variance-covariance matrix of a random vector from probability theory.
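To make this concrete, here is a minimal NumPy sketch of the estimators described above (my own illustration; names like gda_mle are not from the lecture):

```python
import numpy as np

def gda_mle(X, y):
    """Maximum-likelihood estimates for GDA with a shared covariance.

    X: (n, d) feature matrix; y: (n,) array of 0/1 labels.
    """
    n = X.shape[0]
    phi = np.mean(y == 1)                        # fraction of examples with y = 1
    mu0 = X[y == 0].mean(axis=0)                 # average of the zero-class examples
    mu1 = X[y == 1].mean(axis=0)                 # average of the one-class examples
    mu = np.where((y == 1)[:, None], mu1, mu0)   # each row's own class mean
    Sigma = (X - mu).T @ (X - mu) / n            # shared variance-covariance matrix
    return phi, mu0, mu1, Sigma
```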
@TheBanananutbutton · 7 months ago
plot twist: there's only one student: Batman
@abhinavchauhan4621 · 3 months ago
😂😂 Batman has so many doubts
@MosesMakuei-b5z · 26 days ago
and Optimus Prime
@kendroctopus · 1 year ago
Thank you so much for this lecture!
@mikegher879 · 1 year ago
Brilliant, #Andrew. Thanks!
@jasperbutcher2596 · 1 year ago
Does anyone know how to access the problem sets? Are they public material?
@Dimi231 · 6 months ago
I am also curious about that!!
@TheDebbuger · 1 month ago
Google this: cs229-2018-autumn
@praveengupta2271 · 9 months ago
The lecture is amazing, but I want the notes or a PDF of whatever is discussed in this lecture.
@OEDzn · 5 months ago
cs229.stanford.edu/lectures-spring2022/main_notes.pdf
@kevinshao9148 · 9 months ago
Great lecture! But does anyone know why we need 2^10000 parameters at 1:09:43? Thanks a lot!
@OK-lj5zc · 9 months ago
I think it's because in the multinomial model there would be 2^10000 possible values for x, and each value has an associated probability that x equals it. Actually, since all the probabilities add up to one, we only need 2^10000 - 1 (see the sketch after this exchange).
@kevinshao9148 · 9 months ago
@OK-lj5zc Thank you so much for your explanation! It's actually a simple concept with your explanation; really appreciate it!
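Following up on this exchange, a tiny sketch of the parameter counts (the comparison with Naive Bayes is my own addition, anticipating the rest of the lecture):

```python
# Modeling a binary feature vector x in {0, 1}^d, with d = 10,000 vocabulary words.
d = 10_000

# Full multinomial over all 2^d outcomes: one probability per outcome,
# minus one because the probabilities must sum to one.
full_multinomial = 2**d - 1

# Naive Bayes assumes the features are conditionally independent given y,
# so it only needs p(x_j = 1 | y) for each feature j and each class y.
naive_bayes = 2 * d

print(len(str(full_multinomial)))  # 3011 digits: astronomically many parameters
print(naive_bayes)                 # 20000
```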
@Tyokok · 8 months ago
Thanks for the great lecture! One question, please, if I may (to anyone): is Naive Bayes the same method as Bayesian conditional probability modeling (a special case, or so), or are they two completely separate modeling methods? Many thanks in advance!
@TheBanananutbutton · 7 months ago
25:12 might answer your question, I'm not sure
@Tyokok · 7 months ago
@TheBanananutbutton Thank you for replying! Happy to discuss. No, that's not it; 25:12 points out the difference between generative and discriminative models (though I am not sure what those names stand for in depth). I think I vaguely got the answer that Naive Bayes and Bayesian modeling are the same approach; I just need to connect them more solidly.
@AdityaByju · 6 months ago
Naive Bayes can be seen as a specific application of Bayesian modeling, where we assume conditional independence of the features given the class label. However, Bayesian modeling is a more general framework that can be applied to various types of models and data, not just classification tasks with independent features.
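In symbols, the conditional-independence assumption described in the reply above is (a standard statement, not a quote from the lecture):

```latex
p(x \mid y) = \prod_{j=1}^{d} p(x_j \mid y),
\qquad\text{so by Bayes' rule}\qquad
p(y \mid x) \propto p(y) \prod_{j=1}^{d} p(x_j \mid y).
```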
@hibamajdy9769 · 9 months ago
I have a question: how do you come up with a decision boundary in GDA?
@littleKingSolomon · 6 months ago
Perhaps the boundary is described by the curve obtained by plotting p(y=1|x) for varying x.
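For what it's worth, a minimal sketch of why that curve is simple here (my own illustration, assuming the shared-covariance GDA from the lecture): setting p(y=1|x) = 1/2 and taking the log-odds of the two Gaussian class densities makes the quadratic terms cancel, leaving a linear boundary.

```python
import numpy as np

def gda_decision_boundary(phi, mu0, mu1, Sigma):
    """Boundary p(y=1|x) = 1/2 for GDA with a shared covariance matrix.

    Returns (theta, theta0) such that the boundary is {x : theta @ x + theta0 = 0}.
    """
    Sinv = np.linalg.inv(Sigma)
    theta = Sinv @ (mu1 - mu0)        # quadratic terms in x cancel, leaving this slope
    theta0 = 0.5 * (mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1) + np.log(phi / (1 - phi))
    return theta, theta0
```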
@WaseemKhan-dp2hi · 2 years ago
At 27:15 he writes the optimal value of phi that maximizes the likelihood function. It shows that when a new person comes in, all we need is the ratio of positive cases to the total number of training samples. If this ratio is 30%, we would say (without investigating symptoms and without looking at x) that the new person is 30% likely to be a positive case. It looks very odd that, to declare a new case positive or negative, we don't consider the x (features) of the new test sample. Can someone correct me if my understanding is wrong?
@abhijitpai6085 · 2 years ago
This situation relates to the frequentist vs. Bayesian statistics discussion; read about "priors" to understand how the two camps view computing probabilities.
@LZ-re9bm · 1 year ago
Phi is the probability that he also denotes as p(y). This is by definition the probability that a new person will be positive given no further information, i.e. given no data. Basically, it is the best guess you have for whether a random person is positive before you know anything about the person. With no further information on the individual, it makes sense that your best guess is just the ratio in the general population. He tries to clarify this in his sentence: "What is the chance that the NEXT patient that walks into your office has a malignant tumor?" The key word is "next": this happens in the future, so you can't yet have any data on that person, but I agree it is easy to misunderstand. The probability that the person is positive given the features is denoted p(y|x); he gives the way to calculate it earlier in the lecture. Unsurprisingly, it does depend on the data x. So if you have features, you do consider them to make your prediction, which is p(y|x).
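For reference, the feature-dependent prediction mentioned here is just Bayes' rule applied to the two class densities:

```latex
p(y = 1 \mid x)
  = \frac{p(x \mid y = 1)\, p(y = 1)}
         {p(x \mid y = 0)\, p(y = 0) + p(x \mid y = 1)\, p(y = 1)},
\qquad p(y = 1) = \phi.
```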
@AyushAgarwal-YearBTechElectron · 2 years ago
I have two questions about the classroom: why does he never use the last whiteboard (the bottom one), and how does the camera keep moving like that while recording?
@ragibshahriar187 · 2 years ago
glad you understood everything else
@bibinal1216 · 2 years ago
@ragibshahriar187 🤣🤣🤣🤣🤣🤣
@ezepheros5028 · 2 years ago
Idk about the first question, but for the second they probably have a camera person.
@shivanshmishra752 · 2 years ago
He never uses the last board because it is fixed; once you use the two moving boards and go to use the last one, one of them will be hidden, so you would not be able to see everything written. Lol, I am answering funny questions.
@shivanshmishra752 · 2 years ago
@ragibshahriar187 lol
@humanity_first48 · 1 year ago
Can anyone help me out with this question: why do we only use the standard Gaussian distribution?
@rijrya · 1 year ago
He explained it before; it's because of the central limit theorem.
@alpaslankurt9394 · 7 months ago
Does anyone have the lecture notes, or can I get the lecture notes to study this course in depth?
@littleKingSolomon · 6 months ago
Taking your own notes while watching will greatly aid your understanding, imo.
@OEDzn · 5 months ago
cs229.stanford.edu/lectures-spring2022/main_notes.pdf
@TheDebbuger · 1 month ago
Google it: cs229-2018-autumn
@chirag6517 · 1 day ago
They are available online on their website.
@guywithcoolid · 6 months ago
48:45
@Emanuel-oz1kw · 1 month ago
41:09
@yong_sung · 1 year ago
57:25
@SCHINMAYSHARMA · 1 month ago
Optimus Prime in the last bench
@danieljaszczyszczykoeczews2616 · 2 years ago
19:17 sub ZERO
@rahuramrt8051 · 11 months ago
hello
@creativeuser9086 · 1 year ago
Watching it at 2x speed is a nice hack.
@rahuramrt8051 · 11 months ago
hi
@Nett6799 · 1 year ago
Why didn't he start from scratch? Like we already know these equations ☹️🙄
@anishkhatiwada2502 · 1 year ago
We should have already developed some prerequisites. He is teaching at Stanford, and the students already have a strong understanding of mathematics.
@CarbonSilicium · 10 months ago
It's a graduate course in a field built entirely on math. I would say he explains many more basics than he has to. If you don't know some stuff from first-year math, you should probably start there.
@soumyadeepsarkar2119 · 1 year ago
25:14
@soumyadeepsarkar2119 · 1 year ago
1:13:41