Probability for Machine Learning!

7,265 views

CodeEmporium

1 day ago

Comments: 20
@CodeEmporium 2 years ago
Please check out the accompanying blog post in the description below. For more information on each topic discussed in the video (random variables, probability mass / density functions), please refer to the "Probability Theory for Machine Learning" playlist. Video Correction #1: Prices are dependent random variables that depend on number of bedrooms, age, and sqft. So from 14:06 onwards, we should see the conditional distribution also depend on the X_ij terms. That said, the overall derivation should remain the same. Hope this helps!
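A rough numerical sketch of the corrected setup described in the pinned comment: under a linear-Gaussian model, each price y_i gets its own conditional density N(x_i·β, σ²), so the likelihood depends on the feature values X_ij. All feature values, coefficients, and helper names below are made up for illustration.

```python
import numpy as np

def gaussian_pdf(y, mean, sigma):
    # Density of N(mean, sigma^2) evaluated at y.
    return np.exp(-((y - mean) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def conditional_log_likelihood(beta, sigma, X, y):
    # Each house i gets its own mean X[i] @ beta: the conditional
    # distribution shifts with the features, as the correction notes.
    means = X @ beta
    return np.sum(np.log(gaussian_pdf(y, means, sigma)))

# Toy features: [bedrooms, sqft in 1000s] for three houses (hypothetical).
X = np.array([[3.0, 1.2], [2.0, 0.9], [4.0, 2.0]])
beta_true = np.array([50.0, 200.0])   # hypothetical coefficients
y = X @ beta_true                     # noise-free prices, just for the demo

ll_at_true = conditional_log_likelihood(beta_true, 25.0, X, y)
ll_at_wrong = conditional_log_likelihood(beta_true + 10.0, 25.0, X, y)
```

The log-likelihood is larger at the coefficients that generated the data than at perturbed ones, which is what maximum likelihood exploits.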
@NicholasRenotte 1 year ago
I just binge-watched the whole Probability Theory playlist this morning! Smashed it, Ajay!
@CodeEmporium 1 year ago
Bwahaha, thanks Nick!
@NicholasRenotte 1 year ago
@@CodeEmporium anytime man, you’re so fluid with your explanations. Inspiring as hell!!
@devharal6541 1 year ago
Your videos are so accurate, and the intuition you build by connecting concepts to machine learning is just awesome.
@chinmayeejoshi2119 2 years ago
This learning series has been excellent. Danke!
@CodeEmporium 2 years ago
Thanks so much for watching 🎉 :)
@badriveera8941 9 months ago
Great set of videos. One subtle point of clarification. If fy(yi) is a probability density function, then the value of fy(yi) for a particular house price would be zero since it is a continuous variable. How do you reconcile that? Appreciate your thoughts on this.
@virgenalosveinte5915 2 months ago
Great question, I would also love to hear an explanation.
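A hypothetical numerical sketch of the point raised above: for a continuous price Y, f_Y(y) is a *density*, not a probability, and P(Y = y) for any exact price is 0. Probabilities come from integrating the density over an interval. The N(750, 50²) distribution (prices in $1000s) below is made up purely for illustration.

```python
import numpy as np

def normal_pdf(y, mu=750.0, sigma=50.0):
    # Density of N(mu, sigma^2) evaluated at y.
    return np.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Density at one exact price: a finite, nonzero number (not a probability).
density_at_750 = float(normal_pdf(750.0))

# Probability of a price *range*, approximated by a trapezoid sum of the
# density over [700, 800]; this is what "probability of $700-800k" means.
grid = np.linspace(700.0, 800.0, 10_001)
vals = normal_pdf(grid)
dy = grid[1] - grid[0]
prob_700_800 = float(np.sum((vals[:-1] + vals[1:]) / 2.0) * dy)
```

In maximum likelihood this is reconciled by multiplying the density values themselves: formally each factor is a density times an infinitesimal interval width, and since that width is a constant it does not change where the maximum occurs.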
@rishidixit7939 18 days ago
How should one study Probability Theory for Deep Learning?
@carsten011640 1 year ago
Hi, your videos in this series have been so useful for my understanding, thank you! Could I clarify something, please? At 20:00, you say "in reality, all these PDFs can be assumed to be the same... practically meaning that the probability that house #1 is $700-800 is the same for house #2 too, and all other houses". I'm wondering whether this is correct. My understanding: the PDFs have the same (Gaussian) shape for every X value, but they are centred around a new mean for every X value too. House #1's X values mean it will have a certain probability of being $700-800 according to the linear equation's ŷ estimate at those X values, and house #2's X values mean it will have a different probability of being $700-800 according to the ŷ estimate at its X values. Is this a correct interpretation? Again, thank you so much for this series.
@jaxejaxejaxe 1 year ago
This assumption is a very high-level assumption, and we don't need to talk about any estimators to understand it. You should think of this assumption as coming before any math is done at all. Generally: we try to find the best hypothesis/ML-model/prediction-rule from our sample data that can predict well on new data. Therefore, we assume that our sample data has been "given to us" from some unknown distribution. We don't know this distribution, but we have to assume that all the data come from this _same_, unknown distribution. In this example, it means that every time a house price is "sampled", it comes from this unknown distribution putting out prices on the houses. This means that for any houses x and z, the probability of them costing any amount ($700-800k for instance) is the same.
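A hypothetical simulation of the "same unknown distribution" idea in the reply above: two different houses are each treated as draws from one shared price distribution, so an interval like $700-800k has the same probability for both. The N(750, 50²) stand-in (prices in $1000s) and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two houses, each modeled as repeated draws from the SAME distribution.
samples_house_x = rng.normal(750.0, 50.0, size=200_000)
samples_house_z = rng.normal(750.0, 50.0, size=200_000)

# Empirical probability of landing in the $700-800k band for each house.
p_x = float(np.mean((samples_house_x >= 700.0) & (samples_house_x <= 800.0)))
p_z = float(np.mean((samples_house_z >= 700.0) & (samples_house_z <= 800.0)))
```

Because both houses draw from the same distribution, the two empirical probabilities agree up to sampling noise, which is exactly the identically-distributed part of the i.i.d. assumption.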
@keren718 1 year ago
I love your series. I wonder why there is a (-1)× in the last derivation.
@CodeEmporium 1 year ago
Thanks so much. The last term (if derived out) becomes the squared loss, and this loss needs to be minimized. Minimizing it is the same as maximizing (-1) times the same value.
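A one-dimensional sketch of the reply: the last term of the log-likelihood is (-1) times the squared loss, so maximizing it picks out the same parameter as minimizing the squared loss itself. The toy data and grid of candidate slopes are made up.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                                  # true slope is 2 (by construction)
betas = np.linspace(-2.0, 4.0, 601)          # candidate slopes

# Sum of squared errors for each candidate slope.
sse = np.array([np.sum((y - b * x) ** 2) for b in betas])

slope_min_loss = betas[np.argmin(sse)]       # minimize the squared loss
slope_max_ll = betas[np.argmax(-sse)]        # maximize (-1) * squared loss
```

Both searches land on the same slope, because argmax of -f and argmin of f are always the same point.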
@mustafizurrahman5699 2 years ago
Excellent
@CodeEmporium 2 years ago
Thanks a ton for tuning in!! :)
@theforthdoctor7872 2 years ago
You forgot to mention the "Bambleweeny 57 Sub-Meson Brain" and the "atomic vector plotter".😉
@badermuteb4552 2 years ago
argmax same as max???
@CodeEmporium 2 years ago
They are not. Max will return “what is the maximum value of this function”. But arg max is “what is the value of the parameters such that the function is maximized”
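A toy illustration of the distinction in the reply above: max returns the best *value* of a function, while argmax returns the *parameter* at which that value occurs. The parameter grid and likelihood numbers are made up.

```python
import numpy as np

theta_grid = np.array([-1.0, 0.0, 1.0])   # candidate parameter values
likelihood = np.array([0.1, 0.7, 0.2])    # likelihood at each candidate

max_value = float(np.max(likelihood))                   # "what is the maximum?"
best_theta = float(theta_grid[np.argmax(likelihood)])   # "where is it attained?"
```

In maximum likelihood estimation it is `best_theta` (the argmax) we care about: the parameter setting that makes the observed data most likely, not the likelihood value itself.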
@vtrandal 1 year ago
@@CodeEmporium You are very very good with this subject matter. Thank you for making these great videos!