Learning from examples drawn from practical experience, from someone with probably more than a decade of it, is always wonderful. Thank you for the lecture!
@tampopo_yukki 1 year ago
This lecture helped me connect the dots and lay the groundwork for GDA. Thanks a lot!
@tomcat1184 7 months ago
nobody asked ya
@Harshtherocking 11 days ago
@@tomcat1184 chad
@debdeepsanyal9030 1 year ago
57:25 is where Andrew talks about his preference and his way of choosing between logistic regression and GDA.
@dsazz801 1 year ago
Thank you for the amazing, super-high-quality lecture. Easy to understand, well explained, and beautifully organized.
@welcomethanks5192 1 year ago
GDA starts around 39:00; Naive Bayes at 1:04:00.
@mango-strawberry 9 months ago
thanks dawg
@A_Random_Ghost 1 year ago
For anyone who finds this useful, here is a way to interpret the parameter formulas. Since phi is the probability of y being 1, its formula is the number of ones divided by the total number of examples, which is exactly what we get because the indicator function only counts the ones. The mean of the zero class is the sum of the elements of the zero class divided by the number of elements in that class; since the indicator function only counts examples in the class, that is what the formula calculates. The same goes for the mean of the one class. The covariance formula is just the standard variance-covariance matrix of a random vector from probability theory.
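For anyone who wants to see the same estimates as code, here is a minimal NumPy sketch on made-up data (the dataset and every number below are hypothetical, just to make the formulas concrete):

```python
import numpy as np

# Hypothetical training set: X is (m, n), y holds 0/1 labels.
rng = np.random.default_rng(0)
m, n = 200, 2
y = rng.integers(0, 2, size=m)
X = rng.normal(size=(m, n)) + 2.0 * y[:, None]  # shift class 1 so the classes differ

phi = np.mean(y == 1)          # count of ones / total: the indicator-function ratio
mu0 = X[y == 0].mean(axis=0)   # sum of class-0 examples / class-0 count
mu1 = X[y == 1].mean(axis=0)   # same for class 1

# Shared covariance: average outer product of each example's deviation
# from its own class mean, i.e. the usual variance-covariance matrix.
dev = X - np.where((y == 1)[:, None], mu1, mu0)
Sigma = dev.T @ dev / m
```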
@zilaleizaldin1834 1 month ago
Thank you for making these lectures for us!
@kendroctopus 1 year ago
Thank you so much for this lecture!
@TheBanananutbutton 11 months ago
Plot twist: there's only one student: Batman
@abhinavchauhan4621 7 months ago
😂😂 Batman has so many doubts
@MosesMakuei-b5z 4 months ago
and Optimus Prime
@MaryamSadeghi-AI 2 months ago
😂😂😂
@mikegher879 1 year ago
Brilliant #Andrew Thanks
@WaseemKhan-dp2hi 2 years ago
At 27:15 he writes the optimal value of phi that maximizes the likelihood function. It suggests that when a new person comes in, all we need is the ratio of positive cases to total training samples. If this ratio is 30%, we will say (without investigating symptoms and without using x) that the new person is 30% likely to be a positive case. It looks very odd that, in declaring a new case positive or negative, we don't consider the features x of the new test sample. Can someone correct me if my understanding is wrong?
@abhijitpai6085 2 years ago
This situation relates to the frequentist vs. Bayesian statistics debate; read about "priors" to understand how the two camps view computing probabilities.
@LZ-re9bm 1 year ago
Phi is the probability that he also denotes as p(y). This is by definition the probability that a new person will be positive given no further information, i.e., given no data. Basically, it is the best guess you have for whether a random person is positive before you know anything about them. It makes sense that, with no further information on the individual, your best guess is just the ratio in the general population. He tries to clarify this in the sentence: "What is the chance that the NEXT patient that walks into your office has a malignant tumor?" The key word is "next": this is happening in the future, so you can't have any data on that person yet, though I agree it's easy to misunderstand. The probability that the person is positive given the features is denoted p(y|x), and he shows how to calculate it earlier in the lecture. Unsurprisingly, this does depend on the data x. So if you do have features, you do use them to make your prediction, which is p(y|x).
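A small sketch of that distinction, with parameters invented purely for illustration: before seeing features you can only report the prior phi, but once x arrives, Bayes' rule combines it with the class-conditional densities:

```python
import numpy as np
from scipy.stats import multivariate_normal

phi = 0.3                                     # made-up prior p(y=1), the 30% from above
mu0, mu1 = np.zeros(2), np.array([2.0, 2.0])  # made-up class means
Sigma = np.eye(2)                             # made-up shared covariance

x_new = np.array([1.8, 1.6])                  # features of the "next" patient

p1 = multivariate_normal.pdf(x_new, mean=mu1, cov=Sigma)  # p(x|y=1)
p0 = multivariate_normal.pdf(x_new, mean=mu0, cov=Sigma)  # p(x|y=0)
posterior = p1 * phi / (p1 * phi + p0 * (1 - phi))        # Bayes' rule: p(y=1|x)

print(f"prior p(y=1) = {phi:.2f}, posterior p(y=1|x) = {posterior:.2f}")
```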
@architsharma292 3 months ago
For some of these algorithms, when trying to find the local max/min, we use gradient descent / iterative approaches, but in some cases we just set the derivative equal to 0. What explains this difference?
@jasperbutcher2596 1 year ago
Anyone know how to access the problem sets? Are they public material?
@Dimi231 10 months ago
I am also curious about that!!
@TheDebbuger 5 months ago
Google "cs229-2018-autumn".
@dunkelkron 5 days ago
It's behind a login, I believe; not accessible.
@kevinshao9148 1 year ago
Great lecture! But does anyone know why, at 1:09:43, we need 2^10000 parameters here? Thanks a lot!
@OK-lj5zc 1 year ago
I think it's because in the multinomial model there would be 2^10000 possible values for x, and each value has an associated probability that x equals it. Actually, since all the probabilities add up to one, we only need (2^10000) - 1.
@kevinshao9148 1 year ago
@@OK-lj5zc Thank you so much for your explanation, Sir! It's actually a simple concept with your explanation; really appreciate it!
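To make the counting concrete, here is a toy version of the same argument with n = 3 binary features instead of 10,000 (purely illustrative):

```python
from itertools import product

# With n binary features there are 2**n possible vectors x; a full
# multinomial over them needs 2**n - 1 free parameters (they sum to 1).
n = 3
vectors = list(product([0, 1], repeat=n))
print(len(vectors))      # 8 = 2**3 possible values of x
print(len(vectors) - 1)  # 7 free parameters

# With n = 10000 this becomes 2**10000 - 1, which is why Naive Bayes
# assumes independence given y and gets away with roughly 2n parameters.
```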
@praveengupta2271 1 year ago
The lecture is amazing, but I want the notes or a PDF of whatever is discussed in this lecture.
I have a question: how do you come up with the decision boundary in GDA?
@littleKingSolomon 10 months ago
Perhaps the boundary is described by the curve obtained by plotting p(y=1|x) for varying x.
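For what it's worth, in the shared-covariance GDA from the lecture that curve has a closed form. Setting the log posterior odds to zero, the quadratic terms $x^\top\Sigma^{-1}x$ cancel (a sketch of the algebra, in the lecture's notation):

$$\log\frac{p(y=1\mid x)}{p(y=0\mid x)} = (\mu_1-\mu_0)^\top\Sigma^{-1}x - \tfrac{1}{2}\left(\mu_1^\top\Sigma^{-1}\mu_1 - \mu_0^\top\Sigma^{-1}\mu_0\right) + \log\frac{\phi}{1-\phi} = 0$$

This is linear in x, so the boundary where p(y=1|x) = 0.5 is a straight line (a hyperplane in higher dimensions); if each class gets its own Σ, the quadratic terms survive and the boundary becomes a quadric.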
@humanity_first48 1 year ago
Can anyone help me out with this question: why do we only use the standard Gaussian distribution?
@rijrya 1 year ago
He explained it before; it's because of the central limit theorem.
@SCHINMAYSHARMA 5 months ago
Optimus Prime on the last bench
@Tyokok 1 year ago
Thanks for the great lecture! One question, please, if I may (to anyone): is Naive Bayes the same method as the Bayesian conditional probability modeling method (a special case or so), or are they two completely separate modeling methods? Many thanks in advance!
@TheBanananutbutton 11 months ago
25:12 might answer your question, I'm not sure
@Tyokok 11 months ago
@@TheBanananutbutton Thank you for replying! Happy to discuss. No, it doesn't. 25:12 points out the difference between generative and discriminative models (though I am not sure what those names stand for in depth). I think I vaguely got the answer that Naive Bayes and Bayesian modeling are the same approach; I just need to get more solid on them to connect them fully.
@AdityaByju 10 months ago
Naive Bayes can be seen as a specific application of Bayesian modeling, where we assume conditional independence of the features given the class label. However, Bayesian modeling is a more general framework that can be applied to various types of models and data, not just classification tasks with independent features.
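As a concrete sketch of that special case (the Bernoulli parameters below are invented for illustration):

```python
import numpy as np

phi = 0.5                           # prior p(y=1)
p_given_1 = np.array([0.8, 0.1])    # p(x_j = 1 | y = 1), one entry per feature
p_given_0 = np.array([0.2, 0.3])    # p(x_j = 1 | y = 0)

def likelihood(x, p):
    # The "naive" step: p(x|y) factors into a product over the features.
    return np.prod(np.where(x == 1, p, 1 - p))

x = np.array([1, 0])
num = likelihood(x, p_given_1) * phi
den = num + likelihood(x, p_given_0) * (1 - phi)
print(f"p(y=1|x) = {num / den:.3f}")  # Bayes' rule on top of the factorization
```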
@danieljaszczyszczykoeczews2616 2 years ago
19:17 sub ZERO
@Danpage04 1 year ago
Watching it at 2x speed is a nice hack.
@Emanuel-oz1kw 4 months ago
41:09
@soumyadeepsarkar2119 1 year ago
25:14
@soumyadeepsarkar2119 1 year ago
1:13:41
@kyrokoracle5146 1 month ago
RIP mathematicians...
@AyushAgarwal-YearBTechElectron 2 years ago
I have 2 questions about the classroom, umm: why does he never use the last whiteboard (the bottom-most one), and how does the camera keep moving like that while recording?
@ragibshahriar187 2 years ago
glad you understood everything else
@bibinal1216 2 years ago
@@ragibshahriar187 🤣🤣🤣🤣🤣🤣
@ezepheros5028 2 years ago
Idk about the first question but for the second question they probably have a camera person
@shivanshmishra752 2 years ago
He never uses the last one because it is fixed; once you use the two moving boards and go to use the last one, one of them will be hidden, so you will not be able to see everything written. Lol, I am answering funny questions.
@shivanshmishra752 2 years ago
@@ragibshahriar187 lol
@rahuramrt8051 1 year ago
hello
@rahuramrt8051 1 year ago
hi
@Nett6799 1 year ago
Why didn't he start from scratch? Like we already know these equations ☹️🙄
@anishkhatiwada2502 1 year ago
We should have already developed some prerequisites. He is teaching at Stanford, and the students already have a strong understanding of mathematics.
@CarbonSilicium 1 year ago
It's a graduate course in a field built entirely on math. I would say he explains far more of the basics than he has to. If you don't know some first-year math, you should probably start there.