Bayesian Linear Regression : Data Science Concepts

88,174 views

ritvikmath


Comments: 179
@brycedavis5674 3 years ago
As soon as you explained the results from the Bayesian approach, my jaw was wide open for like 3 minutes. This is so interesting!
@tobias2688 3 years ago
This video is a true gem, informative and simple at once. Thank you so much!
@ritvikmath 3 years ago
Glad it was helpful!
@sudipanpaul805 1 year ago
Love you, bro. I got my joining letter from NASA as a Scientific Officer-1, and believe me, your videos always helped me in my research work.
@kunalchakraborty3037 3 years ago
Read it in a book. Didn't understand jack shit back then. Your videos are awesome: rich, small, concise. Please make a video on Linear Discriminant Analysis and how it's related to Bayes' theorem. This video will be saved in my data science playlist.
@jlpicard7 1 year ago
I've seen everything in this video many, many times, but no one had done as good a job as this in pulling these ideas together in such an intuitive and understandable way. Well done and thank you!
@icybrain8943 3 years ago
Regardless of how they were really initially devised, seeing the regularization formulas pop out of the bayesian linear regression model was eye-opening - thanks for sharing this insight
@dennisleet9394 2 years ago
Yes. This really blew my mind. Boom.
@fluidice1656 2 years ago
This is my favorite video out of a large set of fantastic videos that you have made. It just brings everything together in such a brilliant way. I keep getting back to it over and over again. Thank you so much!
@Structuralmechanic 11 months ago
Amazing, you kept it simple and showed how the regularization terms in linear regression originate from the Bayesian approach!! Thank you!
@mohammadkhalkhali9635 3 years ago
Man I'm going to copy-paste your video whenever I want to explain regularization to anyone! I knew the concept but I would never explain it the way you did. You nailed it!
@chenqu773 2 years ago
For me, the coolest thing about statistics is that every time I do a refresh on these topics, I get some new ideas or understanding. It's lucky that I came across this video after a year, since it could also explain why we need to "normalize" the X (0-centered, with stdev = 1) before we feed it into an MLP model, if we use regularization terms in the layers.
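That last point generalizes: a single penalty strength charges every coefficient at the same rate, so features on very different scales get penalized very unevenly unless X is standardized first. A minimal scikit-learn sketch (the feature scales and alpha below are made-up illustrations, not anything from the video):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
x_small = rng.normal(0, 1, n)      # feature on unit scale
x_big = rng.normal(0, 1000, n)     # feature on a ~1000x larger scale
X = np.column_stack([x_small, x_big])
y = 2.0 * x_small + 0.002 * x_big + rng.normal(0, 1, n)

# Same alpha, unscaled features: the penalty hits the two betas unevenly,
# since x_big's coefficient is numerically tiny and barely shrunk.
unscaled = Ridge(alpha=10.0).fit(X, y)

# Standardizing (0-centered, stdev = 1) puts both betas on equal footing.
scaled = make_pipeline(StandardScaler(), Ridge(alpha=10.0)).fit(X, y)

print("unscaled coefs:", unscaled.coef_)
print("scaled coefs:  ", scaled.named_steps["ridge"].coef_)
```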
@mohammadmousavi1 1 year ago
Unbelievable: you explained linear regression, explained Bayesian stats in simple terms, and showed the connection, all in under 20 min... Perfect.
@rajanalexander4949 1 year ago
This is incredible. Clear, well-paced, and well explained. Thank you!
@MoumitaHanra 2 years ago
Best of all the videos on Bayesian regression; other videos are so boring and long, but this one has quality as well as ease of understanding. Thank you so much!
@davidelicalsi5915 2 years ago
Brilliant and clear explanation, I was struggling to grasp the main idea for a Machine Learning exam but your video was a blessing. Thank you so much for the amazing work!
@dylanwatts4463 3 years ago
Amazing video! Really clearly explained! Keep em coming!
@ritvikmath 3 years ago
Glad you liked it!
@sebastianstrumbel4335 3 years ago
Awesome explanation! Especially the details on the prior were so helpful!
@ritvikmath 3 years ago
Glad it was helpful!
@yohahnribeiro6029 3 months ago
Man .. I absolutely love the way you explain the math and the breakdown of these concepts! Really really fantastic job ❤
@ritvikmath 3 months ago
Thanks a ton!
@karannchew2534 1 year ago
Notes for my future revision. *Prior β* 10:30 The value of the prior β is normally distributed. The by-product of using a Normal distribution is regularisation, because the prior values of β won't be too large (or too small) relative to the mean. Regularisation keeps the values of β small.
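For other revision-note keepers, the whole connection compresses to a few lines. A sketch of the MAP derivation, assuming the video's setup of a Gaussian likelihood with noise variance σ² and independent N(0, τ²) priors on the coefficients:

```latex
\hat{\beta}_{\text{MAP}}
  = \arg\max_{\beta}\; \ln P(y \mid \beta) + \ln P(\beta)
  = \arg\min_{\beta}\; \frac{\|y - X\beta\|_2^2}{2\sigma^2}
                     + \frac{\|\beta\|_2^2}{2\tau^2}
  = \arg\min_{\beta}\; \|y - X\beta\|_2^2
                     + \underbrace{\frac{\sigma^2}{\tau^2}}_{\lambda}\,\|\beta\|_2^2
```

The last step just multiplies the objective by 2σ², which doesn't move the argmin. Swapping the Gaussian prior for a Laplace prior turns the squared L2 norm into an L1 norm and gives lasso instead of ridge.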
@anantshukla3415 1 month ago
Thank you so much for this.
@rishabhbhatt7373 2 years ago
Really good explanation. I really like how you gave context and connected all the topics together, and it makes perfect sense, while maintaining the perfect balance between math and intuition. Great work. Thank you!
@Izzy0887 3 years ago
Man! What a great explanation of Bayesian Stats. It's all starting to make sense now. Thank you!!!
@nishantraj376 2 months ago
One of the best explanations out there, thanks :)
@TejasEkawade 1 year ago
This was an excellent introduction to Bayesian Regression. Thanks a lot!
@umutaltun9049 2 years ago
It just blew my mind too. I can feel you, brother. Thank you!
@ezragarcia6910 2 years ago
My mind exploded with this video. Thank you.
@JohnJones-rp2wz 3 years ago
Awesome explanation!
@shipan5940 2 years ago
Max ( P(this is the best vid explaining these regressions | YouTube) )
@marcogelsomini7655 2 years ago
Very cool, the link you explained between regularization and the prior.
@user-or7ji5hv8y 3 years ago
This is truly cool. I had the same thing with the lambda. It’s good to know that it was not some engineering trick.
@mateoruizalvarez1733 11 months ago
Crystal clear! Thank you so much; the explanation is very structured and detailed.
@joachimrosenberger2109 2 years ago
Thanks a lot! Great! I am reading Elements of Statistical Learning and did not understand what they were talking about. Now I got it.
@nirmalpatil5370 2 years ago
This is brilliant, man! Brilliant! It literally solved where the lambda comes from!
@chenjus 3 years ago
This is the best explanation of L1 and L2 I've ever heard
@feelmiranda 3 years ago
Your videos are a true gem, and an inspiration even. I hope to be as instructive as you are if I ever become a teacher!
@narinpratap8790 3 years ago
Awesome video. I didn't realize that the L1, L2 regularization had a connection with the Bayesian framework. Thanks for shedding some much needed light on the topic. Could you please also explain the role of MCMC Sampling within Bayesian Regression models? I recently implemented a Bayesian Linear Regression model using PyMC3, and there's definitely a lot of theory involved with regards to MCMC NUTS (No U-Turn) Samplers and the associated hyperparameters (Chains, Draws, Tune, etc.). I think it would be a valuable video for many of us. And of course, keep up the amazing work! :D
@ritvikmath 3 years ago
good suggestion!
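In the meantime, a minimal PyMC3 sketch of this video's model, for anyone who wants to experiment; the prior widths, variable names, and sampler settings below are illustrative assumptions, and newer PyMC releases import as `pymc` rather than `pymc3`:

```python
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(42)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, 1, n)

with pm.Model():
    # N(0, tau^2) priors on the betas: exactly the "ridge" prior from
    # the video (tau = 1 here); sigma gets a weakly informative prior.
    beta = pm.Normal("beta", mu=0, sigma=1, shape=p)
    sigma = pm.HalfNormal("sigma", sigma=1)
    pm.Normal("y_obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    # NUTS is the default sampler; draws, tune, and chains are the
    # hyperparameters mentioned in the comment above.
    trace = pm.sample(draws=1000, tune=1000, chains=4, target_accept=0.9)

print(pm.summary(trace))
```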
@qiguosun129 3 years ago
Excellent tutorial! I have applied ridge as the loss function in different models. However, this is the first time I've understood the mathematical meaning of lambda. It is really cool!
@dmc-au 1 year ago
Wow, killer video. This was a topic where it was especially nice to see everything written on the board in one go. Was cool to see how a larger lambda implies a more pronounced prior belief that the parameters lie close to 0.
@ritvikmath 1 year ago
I also think it’s pretty cool 😎
@caiocfp 3 years ago
Thank you for sharing this fantastic content.
@ritvikmath 3 years ago
Glad you enjoy it!
@juliocerono_stone5365 9 months ago
At last!!! Now I can see what lambda was doing in the lasso and ridge regression!! Great video!!
@ritvikmath 9 months ago
Glad you liked it!
@chuckleezy 1 year ago
You are so good at this; this video is amazing.
@ritvikmath 1 year ago
Thank you so much!!
@curiousobserver2006 1 year ago
This blew my mind. Thanks!
@juliocerono5193 9 months ago
At last!! I could finally find an explanation for the lasso and ridge regression lambdas!!! Thank you!!!
@ritvikmath 9 months ago
Happy to help!
@swapnajoysaha6982 10 months ago
I used to be afraid of Bayesian Linear Regression until I saw this vid. Thank you sooo much
@ritvikmath 10 months ago
Awesome! You're welcome!
@billsurrette6092 1 month ago
Great video, I learned exactly what I was looking for. I have years of experience with machine learning, but not so much with Bayesian approaches. In a world full of poorly explained concepts, this video stands out as an exemplar; very well done. A few thoughts I had as I watched this. I always viewed regularization as a common-sense approach, almost a heuristic. When you consider that you're trying to minimize the loss function while putting some constraint on the betas, it seems like a natural solution to simply add the magnitude, or some function of the magnitude, of the betas to that loss function: by doing that you're making the value of the loss function bigger, so in order for the algorithm to increase the value of a beta, it would really have to be worthwhile on the error term. Lasso and ridge use the absolute value and the square, but the key is that they must be a measure of magnitude, i.e. they must be positive, so we could use a 4th degree or 6th degree or any even degree. I'm curious whether each of these would have a Bayesian counterpart. Also, sigma/tau is given in the Bayesian approach, while lambda is tuned or solved for in the regularization approach, so while the functional form is the same, there's no guarantee that lambda will equal (sigma/tau)^2. I do wonder if E(lambda) = (sigma/tau)^2, i.e., if you solved for lambda over many samples from a population, would the average be (sigma/tau)^2, which would mean lambda is an estimator of (sigma/tau)^2?
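On the first question: any penalty g(β) added to the loss can be read as a prior P(β) ∝ exp(−g(β)), so a 4th- or 6th-power penalty corresponds to a generalized Gaussian prior with that exponent. The second question can be probed numerically; a rough sketch (an illustration, not a proof of unbiasedness), generating data with β actually drawn from N(0, τ²) and checking where cross-validation puts the ridge strength:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(7)
n, p = 200, 10
sigma, tau = 2.0, 0.5
theoretical_lambda = (sigma / tau) ** 2   # = 16.0

chosen = []
for _ in range(200):
    beta = rng.normal(0, tau, p)          # true betas drawn from the prior
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(0, sigma, n)
    # sklearn's Ridge minimizes ||y - Xb||^2 + alpha * ||b||^2, so its
    # alpha plays the role of lambda = (sigma/tau)^2 from the video.
    fit = RidgeCV(alphas=np.logspace(-2, 3, 50), fit_intercept=False).fit(X, y)
    chosen.append(fit.alpha_)

print("theoretical lambda:  ", theoretical_lambda)
print("mean CV-chosen alpha:", np.mean(chosen))
```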
@chiawen. 1 year ago
This is sooo clear. Thank you so much!
@FRequena 3 years ago
Super informative and clear lesson! Thank you very much!
@tj9796 3 years ago
Your videos are great. Love the connections you make so that stats is intuitive as opposed to plug and play formulas.
@SaiVivek15 2 years ago
This video is super informative! It gave me the actual perspective on regularization.
@mkayletsplay5508 4 days ago
Really good video. Thank you so much!
@javiergonzalezarmas8250 2 years ago
Incredible explanation!
@julissaybarra4031 1 year ago
This was incredible, thank you so much.
@FB0102 2 years ago
truly excellent explanation; well done
@brandonjones8928 9 months ago
This is an awesome explanation
@millch2k8 1 year ago
I'd never considered a Bayesian approach to linear regression, let alone its relation to lasso/ridge regression. Really enlightening to see!
@ritvikmath 1 year ago
Thanks!
@dodg3r123 3 years ago
Love this content! More examples like this are appreciated
@ritvikmath 3 years ago
More to come!
@convex9345 3 years ago
Mind-boggling!
@dirknowitzki9468 3 years ago
Your videos are a Godsend!
@fktx3507 3 years ago
Thanks, man. A really good and concise explanation of the approach (together with the video on Bayesian statistics).
@antaresd1 1 year ago
Thank you for this amazing video, it clarified many things for me!
@Aviationlads 1 year ago
Great video, do you have some sources I can use for my university presentation? You helped me a lot 🙏 thank you!
@matthewkumar7756 3 years ago
Mind blown on the connection between regularization and priors in linear regression
@AntonioMac3301 2 years ago
This video is amazing!!! Such a helpful and clear explanation.
@houyao2147 3 years ago
What a wonderful explanation!!
@ritvikmath 3 years ago
Glad you think so!
@Maciek17PL 2 years ago
You are a great teacher, thank you for your videos!!
@benjtheo414 1 year ago
This was awesome, thanks a lot for your time :)
@shantanuneema 3 years ago
You got a subscriber, awesome explanation. I spent hours learning this from other sources with no success. You are just great.
@alim5791 3 years ago
Thanks, that was a good one. Keep up the good work!
@mahdijavadi2747 3 years ago
Thanks a lottttt! I had so much difficulty understanding this.
@kaartiki1451 9 months ago
Legendary video
@j29Productions 11 months ago
You are THE LEGEND
@amirkhoutir2649 2 years ago
thank you so much for the great explanation
@manishbhanu2568 1 year ago
you are a great teacher!!!🏆🏆🏆
@ritvikmath 1 year ago
Thank you! 😃
@rmiliming 2 years ago
Thanks a lot for this clear explanation!
@Life_on_wheeel 3 years ago
Thanks for the video. It's really helpful. I was trying to understand where the regularization terms come from. Now I get it. Thanks!
@samirelamrany5323 1 year ago
Perfect explanation, thank you!
@SamuelMMuli-sy6wk 2 years ago
wonderful stuff! thank you
@axadify 3 years ago
Such a nice explanation. I mean, that's the first time I actually understood it.
@julianneuer8131 3 years ago
Excellent!
@ritvikmath 3 years ago
Thank you! Cheers!
@souravdey1227 2 years ago
Can you please, please do a series on the categorical distribution, multinomial distribution, Dirichlet distribution, Dirichlet process, and finally non-parametric Bayesian tensor factorisation, including clustering of streaming data? I will personally pay you for this. I mean it!! There are a few videos on these things on YouTube; some are good, some are way too high-level. But no one can explain the way you do. This simple video has such profound importance!!
@kennethnavarro3496 3 years ago
Thank you very much. Pretty helpful video!
@godse54 3 years ago
Nice, I never thought of that 👍🏼👍🏼
@petmackay 3 years ago
Most insightful! L1 as Laplacian toward the end was a bit skimpy, though. Maybe I should watch your LASSO clip. Could you do a video on elastic net? Insight on balancing the L1 and L2 norms would be appreciated.
@danielwiczew 3 years ago
Yeah, elastic net and a comparison to ridge/lasso would be very helpful.
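On the elastic-net question, the same prior-reading extends directly: penalties that add in the exponent multiply as densities, so the implied prior is the renormalized product of a Laplace and a Gaussian density, and balancing the L1 and L2 norms amounts to choosing the mix of those two shapes. A sketch of the correspondence:

```latex
\min_{\beta}\; \|y - X\beta\|_2^2
   + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2
\quad\Longleftrightarrow\quad
P(\beta) \;\propto\; \exp\!\big(-\lambda_1 \|\beta\|_1 - \lambda_2 \|\beta\|_2^2\big)
```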
@bibiha3149 3 years ago
Thanks from Korea. I love you!
@ritvikmath 3 years ago
You're welcome!!!
@vipinamar8323 2 years ago
Great video with a very clear explanation. Could you also do a video on Bayesian logistic regression?
@yulinliu850 3 years ago
Beautiful!
@ritvikmath 3 years ago
Thank you! Cheers!
@adityagaurav2816 3 years ago
My mind is blown.....woow...
@haeunroh8945 3 years ago
Your videos are awesome, so much better than my prof's.
@yodarocco 1 year ago
In the end I finally understand it too. A hint for people who, like me, also struggle with Bayesian regression: do a Bayesian linear regression in Python from any tutorial you find online; you are going to understand, trust me. I think one of the initial problems for a person facing a Bayesian approach is the fact that you are actually obtaining a posterior *of weights*! It looks kinda obvious now, but at the beginning I was really stuck; I could not understand what the posterior was actually doing.
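In the spirit of that hint: for the model in this video, the posterior over the weights is available in closed form, so it takes only a few lines of numpy. A minimal sketch, assuming σ and τ are known and using made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
sigma, tau = 1.0, 1.0                     # known noise sd and prior sd
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5])
y = X @ beta_true + rng.normal(0, sigma, n)

# With y | beta ~ N(X beta, sigma^2 I) and beta ~ N(0, tau^2 I), the
# posterior is Gaussian -- a distribution over the *weights* themselves.
S_inv = X.T @ X / sigma**2 + np.eye(p) / tau**2   # posterior precision
S = np.linalg.inv(S_inv)                          # posterior covariance
m = S @ X.T @ y / sigma**2                        # posterior mean

print("posterior mean:", m)   # equals the ridge fit with lambda = (sigma/tau)^2
print("posterior sd:  ", np.sqrt(np.diag(S)))
```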
@undertaker7523 2 years ago
You are the go-to for me when I need to understand topics better. I understand Bayesian parameter estimation thanks to this video! Any chance you can do something on the difference between Maximum Likelihood and Bayesian parameter estimation? I think anyone that watches both of your videos will be able to pick up the details but seeing it explicitly might go a long way for some.
@louisc2016 3 years ago
Fantastic! You are my savior!
@rachelbarnes7469 3 years ago
thank you so much for this
@abdelkaderbousabaa7020 3 years ago
Excellent, thank you!
@jairjuliocc 3 years ago
Thank you, I saw this before but I didn't understand it. Please, where can I find the complete derivation? And maybe you can do a complete series on this topic.
@ThePiotrekpecet 1 year ago
There is an error at the beginning of the video: in frequentist approaches, X is treated as non-random covariate data and y is the random part, so the high variance of OLS should be expressed as small changes to y => big changes to the OLS estimator. Small changes to the covariate matrix producing big changes in the OLS estimator is more like non-robustness of OLS with respect to outlier contamination. Also, the lambda should be 1/(2τ²), not σ²/τ², since ln P(β) = -p·ln(τ·√(2π)) - ||β||₂²/(2τ²). Overall this was very helpful, cheers!
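The two λ conventions in this thread agree up to an overall rescaling of the objective, which doesn't move the argmin. A sketch, keeping the video's Gaussian likelihood and prior:

```latex
-\ln P(\beta \mid y)
  \;=\; \frac{\|y - X\beta\|_2^2}{2\sigma^2}
      + \frac{\|\beta\|_2^2}{2\tau^2} + \text{const}
```

Minimized as written, the weight on ‖β‖₂² is 1/(2τ²), as the comment says. Multiplying the whole objective by 2σ² first leaves the minimizer unchanged and gives ‖y − Xβ‖₂² + (σ²/τ²)‖β‖₂², which is the video's form: λ = σ²/τ² relative to a plain residual-sum-of-squares loss.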
@datle1339 2 years ago
Really great, thank you!
@chenqu773 3 years ago
Thank you very much
@AnotherBrickinWall 1 year ago
Great, thanks! I was feeling the same discomfort about the origin of these...
@allanvieiradecastroquadros1391 1 year ago
Respect! 👏
@ritvikmath 1 year ago
Thanks!
@jaivratsingh9966 2 years ago
Excellent
@TK-mv6sq 2 years ago
thank you!