As soon as you explained the results from the Bayesian approach, my jaw was wide open for like 3 minutes. This is so interesting!
@tobias2688 3 years ago
This video is a true gem, informative and simple at once. Thank you so much!
@ritvikmath 3 years ago
Glad it was helpful!
@sudipanpaul805 1 year ago
Love you, bro. I got my joining letter from NASA as a Scientific Officer-1, and believe me, your videos always helped me in my research work.
@kunalchakraborty3037 3 years ago
Read it in a book. Didn't understand jack shit back then. Your videos are awesome. Rich, short, concise. Please make a video on Linear Discriminant Analysis and how it's related to Bayes' theorem. This video will be saved in my data science playlist.
@jlpicard7 1 year ago
I've seen everything in this video many, many times, but no one had done as good a job as this in pulling these ideas together in such an intuitive and understandable way. Well done and thank you!
@icybrain8943 3 years ago
Regardless of how they were really initially devised, seeing the regularization formulas pop out of the Bayesian linear regression model was eye-opening - thanks for sharing this insight.
@dennisleet9394 2 years ago
Yes. This really blew my mind. Boom.
@fluidice1656 2 years ago
This is my favorite video out of a large set of fantastic videos that you have made. It just brings everything together in such a brilliant way. I keep getting back to it over and over again. Thank you so much!
@Structuralmechanic 11 months ago
Amazing, you kept it simple and showed how the regularization terms in linear regression originate from the Bayesian approach!! Thank U!
@mohammadkhalkhali9635 3 years ago
Man I'm going to copy-paste your video whenever I want to explain regularization to anyone! I knew the concept but I would never explain it the way you did. You nailed it!
@chenqu773 2 years ago
For me, the coolest thing about statistics is that every time I do a refresh on these topics, I get some new ideas or understandings. It's lucky that I came across this video after a year, which could also explain why we need to "normalize" the X (0-centered, with stdev = 1) before we feed it into an MLP model, if we use regularization terms in the layers.
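A minimal numpy sketch of that standardization step (data and names are illustrative, not from the video): a shared penalty λ‖β‖² shrinks every coefficient by the same amount, which is only fair when the features share a common scale.

```python
import numpy as np

# Two features on wildly different scales (illustrative data)
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=[1.0, 100.0], size=(200, 2))

# Standardize each column to mean 0, stdev 1 so that a single
# regularization strength treats all coefficients comparably.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```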
@mohammadmousavi1 1 year ago
Unbelievable: you explained linear regression, explained Bayesian statistics in simple terms, and showed the connection, all under 20 min ... Perfect.
@rajanalexander4949 1 year ago
This is incredible. Clear, well paced and explained. Thank you!
@MoumitaHanra 2 years ago
Best of all videos on Bayesian regression; other videos are so boring and long, but this one has quality as well as ease of understanding. Thank you so much!
@davidelicalsi5915 2 years ago
Brilliant and clear explanation, I was struggling to grasp the main idea for a Machine Learning exam but your video was a blessing. Thank you so much for the amazing work!
@dylanwatts4463 3 years ago
Amazing video! Really clearly explained! Keep em coming!
@ritvikmath 3 years ago
Glad you liked it!
@sebastianstrumbel4335 3 years ago
Awesome explanation! Especially the details on the prior were so helpful!
@ritvikmath 3 years ago
Glad it was helpful!
@yohahnribeiro6029 3 months ago
Man, I absolutely love the way you explain the math and the breakdown of these concepts! Really, really fantastic job ❤
@ritvikmath 3 months ago
Thanks a ton!
@karannchew2534 1 year ago
Notes for my future revision. *Prior β* 10:30 The prior on β is normally distributed. A by-product of using a normal distribution is regularisation: the prior keeps the values of β from straying too far from the mean (not too large or too small), so regularisation keeps the values of β small.
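A sketch of the algebra behind that note, assuming the video's setup of a zero-mean Gaussian prior with scale τ on each of the p coefficients:

```latex
% Log of the zero-mean Gaussian prior on beta (p coefficients, scale tau)
\log P(\beta) = -p\,\log\!\big(\tau\sqrt{2\pi}\big) - \frac{\|\beta\|_2^2}{2\tau^2}
% Maximizing log-likelihood + log-prior therefore penalizes large ||beta||_2,
% which is exactly the regularization effect described in the note above.
```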
@anantshukla3415 1 month ago
Thank you so much for this.
@rishabhbhatt7373 2 years ago
Really good explanation. I really like how you gave context and connected all the topics together so it makes perfect sense, while maintaining the perfect balance b/w math and intuition. Great work. Thank you!
@Izzy0887 3 years ago
Man! What a great explanation of Bayesian Stats. It's all starting to make sense now. Thank you!!!
@nishantraj376 2 months ago
One of the best explanations out there, thanks :)
@TejasEkawade 1 year ago
This was an excellent introduction to Bayesian Regression. Thanks a lot!
@umutaltun9049 2 years ago
It just blew my mind too. I can feel you, brother. Thank you!
@ezragarcia6910 2 years ago
My mind exploded with this video. Thank you.
@JohnJones-rp2wz 3 years ago
Awesome explanation!
@shipan5940 2 years ago
Max ( P(this is the best vid explaining these regressions | KZbin) )
@marcogelsomini7655 2 years ago
Very cool, the link you explained between regularization and the prior.
@user-or7ji5hv8y 3 years ago
This is truly cool. I had the same thing with the lambda. It’s good to know that it was not some engineering trick.
@mateoruizalvarez1733 11 months ago
Crystal clear! Thank you so much, the explanation is very structured and detailed.
@joachimrosenberger2109 2 years ago
Thanks a lot! Great! I am reading Elements of Statistical Learning and did not understand what they were talking about. Now I got it.
@nirmalpatil5370 2 years ago
This is brilliant, man! Brilliant! Literally solved where the lambda comes from!
@chenjus 3 years ago
This is the best explanation of L1 and L2 I've ever heard
@feelmiranda 3 years ago
Your videos are a true gem, and an inspiration even. I hope to be as instructive as you are if I ever become a teacher!
@narinpratap8790 3 years ago
Awesome video. I didn't realize that the L1, L2 regularization had a connection with the Bayesian framework. Thanks for shedding some much needed light on the topic. Could you please also explain the role of MCMC Sampling within Bayesian Regression models? I recently implemented a Bayesian Linear Regression model using PyMC3, and there's definitely a lot of theory involved with regards to MCMC NUTS (No U-Turn) Samplers and the associated hyperparameters (Chains, Draws, Tune, etc.). I think it would be a valuable video for many of us. And of course, keep up the amazing work! :D
@ritvikmath 3 years ago
good suggestion!
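For anyone curious about the setup described in the question above, here is a minimal PyMC3 sketch (toy data; all names are illustrative). NUTS is PyMC3's default sampler for continuous models, and draws, tune, and chains are the hyperparameters the comment mentions.

```python
import numpy as np
import pymc3 as pm

# Toy regression data (illustrative only)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -0.8]) + rng.normal(scale=0.5, size=100)

with pm.Model():
    # Zero-mean Gaussian priors on the weights (the ridge-like prior)
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=2)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y_obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)

    # pm.sample uses NUTS by default for continuous parameters
    trace = pm.sample(draws=1000, tune=1000, chains=2)
```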
@qiguosun129 3 years ago
Excellent tutorial! I have applied the ridge penalty in the loss functions of different models. However, this is the first time I understood the mathematical meaning of lambda. It is really cool!
@dmc-au 1 year ago
Wow, killer video. This was a topic where it was especially nice to see everything written on the board in one go. Was cool to see how a larger lambda implies a more pronounced prior belief that the parameters lie close to 0.
@ritvikmath 1 year ago
I also think it’s pretty cool 😎
@caiocfp 3 years ago
Thank you for sharing this fantastic content.
@ritvikmath 3 years ago
Glad you enjoy it!
@juliocerono_stone5365 9 months ago
At last!!! Now I can see what lambda was doing in the lasso and ridge regression!! Great video!!
@ritvikmath 9 months ago
Glad you liked it!
@chuckleezy 1 year ago
you are so good at this, this video is amazing
@ritvikmath 1 year ago
Thank you so much!!
@curiousobserver2006 1 year ago
This blew my mind. Thanks!
@juliocerono5193 9 months ago
At last!! I could find an explanation for the lasso and ridge regression lambdas!!! Thank you!!!
@ritvikmath 9 months ago
Happy to help!
@swapnajoysaha6982 10 months ago
I used to be afraid of Bayesian Linear Regression until I saw this vid. Thank you sooo much
@ritvikmath 10 months ago
Awesome! You're welcome.
@billsurrette6092 1 month ago
Great video, I learned exactly what I was looking for. I have years of experience with machine learning, but not so much with Bayesian approaches. In a world full of poorly explained concepts, this video stands out as an exemplar, very well done. A few thoughts I had as I watched this.

I always viewed regularization as a common-sense approach, almost a heuristic. When you consider that you're trying to minimize the loss function while putting some constraint on the betas, it seems like a natural solution to simply add the magnitude, or some function of the magnitude, of the betas to that loss function: doing so makes the value of the loss function bigger, so for the algorithm to increase the value of a beta it would really have to be worthwhile on the error term. Lasso and Ridge use the absolute value and the square, but the key is that the penalty must be a measure of magnitude, i.e. it must be positive, so we could use a 4th-degree or 6th-degree or any even-degree term. I'm curious whether each of these would have a Bayesian counterpart.

Also, sigma/tau is given in the Bayesian approach, while lambda is tuned or solved for in the regularization approach, so although the functional form is the same, there's no guarantee that lambda will equal (sigma/tau)^2. I do wonder if E(lambda) = (sigma/tau)^2? I.e., if you solved for lambda over many samples from a population, would the average be (sigma/tau)^2, which would mean lambda is an estimator of (sigma/tau)^2?
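Two notes on the questions raised above. A penalty of the form λΣ|β_j|^q corresponds to a prior with density proportional to exp(−λ|β_j|^q) (a generalized Gaussian), so higher even-degree penalties do have Bayesian counterparts. And when σ and τ are treated as known, the ridge solution with λ = (σ/τ)² coincides exactly with the posterior mean/MAP under a N(0, τ²I) prior, as this small numpy sketch checks (names illustrative); whether a tuned λ averages out to (σ/τ)² over repeated samples is a separate empirical question.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
sigma, tau = 0.5, 1.0  # noise scale and prior scale, assumed known
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=sigma, size=n)

# Ridge estimate with lambda = (sigma/tau)^2 ...
lam = (sigma / tau) ** 2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# ... equals the posterior mean (= MAP) under a N(0, tau^2 I) prior
beta_map = np.linalg.solve(X.T @ X / sigma**2 + np.eye(p) / tau**2,
                           X.T @ y / sigma**2)
print(np.allclose(beta_ridge, beta_map))  # True
```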
@chiawen. 1 year ago
This is sooo clear. Thank you so much!
@FRequena 3 years ago
Super informative and clear lesson! Thank you very much!
@tj9796 3 years ago
Your videos are great. Love the connections you make so that stats is intuitive as opposed to plug and play formulas.
@SaiVivek15 2 years ago
This video is super informative! It gave me the actual perspective on regularization.
@mkayletsplay5508 4 days ago
Really good video. Thank you so much!
@javiergonzalezarmas8250 2 years ago
Incredible explanation!
@julissaybarra4031 1 year ago
This was incredible, thank you so much.
@FB0102 2 years ago
truly excellent explanation; well done
@brandonjones8928 9 months ago
This is an awesome explanation
@millch2k8 1 year ago
I'd never considered a Bayesian approach to linear regression let alone its relation to lasso/ridge regression. Really enlightening to see!
@ritvikmath 1 year ago
Thanks!
@dodg3r123 3 years ago
Love this content! More examples like this are appreciated
@ritvikmath 3 years ago
More to come!
@convex9345 3 years ago
mind boggling
@dirknowitzki9468 3 years ago
Your videos are a Godsend!
@fktx3507 3 years ago
Thanks, man. A really good and concise explanation of the approach (together with the video on Bayesian statistics).
@antaresd1 1 year ago
Thank you for this amazing video. It clarified many things for me!
@Aviationlads 1 year ago
Great video, do you have some sources I can use for my university presentation? You helped me a lot 🙏 thank you!
@matthewkumar7756 3 years ago
Mind blown on the connection between regularization and priors in linear regression
@AntonioMac3301 2 years ago
This video is amazing!!! so helpful and clear explanation
@houyao2147 3 years ago
What a wonderful explanation!!
@ritvikmath 3 years ago
Glad you think so!
@Maciek17PL 2 years ago
You are a great teacher thank you for your videos!!
@benjtheo414 1 year ago
This was awesome, thanks a lot for your time :)
@shantanuneema 3 years ago
You got a subscriber, awesome explanation. I spent hours learning this from other sources with no success. You are just great.
@alim5791 3 years ago
Thanks, that was a good one. Keep up the good work!
@mahdijavadi2747 3 years ago
Thanks a lottttt! I had so much difficulty understanding this.
@kaartiki1451 9 months ago
Legendary video
@j29Productions 11 months ago
You are THE LEGEND
@amirkhoutir2649 2 years ago
thank you so much for the great explanation
@manishbhanu2568 1 year ago
you are a great teacher!!!🏆🏆🏆
@ritvikmath 1 year ago
Thank you! 😃
@rmiliming 2 years ago
Tks a lot for this clear explanation !
@Life_on_wheeel 3 years ago
Thanks for the video. It's really helpful. I was trying to understand where the regularization terms come from, and now I get it. Thanks!
@samirelamrany5323 1 year ago
perfect explanation thank you
@SamuelMMuli-sy6wk 2 years ago
wonderful stuff! thank you
@axadify 3 years ago
Such a nice explanation. I mean, that's the first time I actually understood it.
@julianneuer8131 3 years ago
Excellent!
@ritvikmath 3 years ago
Thank you! Cheers!
@souravdey1227 2 years ago
Can you please please do a series on the categorical distribution, multinomial distribution, Dirichlet distribution, Dirichlet process, and finally non-parametric Bayesian tensor factorisation, including clustering of streaming data? I will personally pay you for this. I mean it!! There are a few videos on these things on youtube; some are good, some are way too high-level. But no one can explain the way you do. This simple video has such profound importance!!
@kennethnavarro3496 3 years ago
Thank you very much. Pretty helpful video!
@godse54 3 years ago
Nice, I never thought of that 👍🏼👍🏼
@petmackay 3 years ago
Most insightful! L1 as a Laplacian prior toward the end was a bit skimpy, though. Maybe I should watch your LASSO clip. Could you do a video on elastic net? Insight on balancing the L1 and L2 norms would be appreciated.
@danielwiczew 3 years ago
Yea, Elasticnet and comparison to Ridge/Lasso would be very helpful
@bibiha3149 3 years ago
Thanks from Korea, I love you!
@ritvikmath 3 years ago
You're welcome!!!
@vipinamar8323 2 years ago
Great video with a very clear explanation. Could you also do a video on Bayesian logistic regression?
@yulinliu850 3 years ago
Beautiful!
@ritvikmath 3 years ago
Thank you! Cheers!
@adityagaurav2816 3 years ago
My mind is blown.....woow...
@haeunroh8945 3 years ago
your videos are awesome so much better than my prof
@yodarocco 1 year ago
At the end I understood it too, finally. A hint for people who also struggle with Bayesian regression like me: do a Bayesian linear regression in Python from any tutorial you find online; you are going to understand, trust me. I think one of the initial problems for a person facing the Bayesian approach is the fact that you are actually obtaining a posterior *over the weights*! Now it looks kind of obvious, but at the beginning I was really stuck; I could not understand what the posterior was actually doing.
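In that spirit, a minimal numpy sketch of the conjugate case (σ and τ assumed known; names illustrative), where the posterior over the weights is itself a Gaussian with an explicit mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.3, size=50)
sigma, tau = 0.3, 1.0  # noise scale and prior scale, assumed known

# Posterior over the weights: N(mean_post, cov_post)
precision = X.T @ X / sigma**2 + np.eye(2) / tau**2
cov_post = np.linalg.inv(precision)
mean_post = cov_post @ X.T @ y / sigma**2
```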
@undertaker7523 2 years ago
You are the go-to for me when I need to understand topics better. I understand Bayesian parameter estimation thanks to this video! Any chance you can do something on the difference between Maximum Likelihood and Bayesian parameter estimation? I think anyone that watches both of your videos will be able to pick up the details but seeing it explicitly might go a long way for some.
@louisc2016 3 years ago
fantastic! u r my savior!
@rachelbarnes7469 3 years ago
thank you so much for this
@abdelkaderbousabaa7020 3 years ago
Excellent thank you
@jairjuliocc 3 years ago
Thank you. I saw this before but I didn't understand it. Please, where can I find the complete derivation? And maybe you can do a complete series on this topic.
@ThePiotrekpecet 1 year ago
There is an error at the beginning of the video: in frequentist approaches X is treated as non-random covariate data and y is the random part, so the high variance of OLS should be expressed as small changes to y => big changes to the OLS estimator. Big changes to the OLS estimator caused by changes to the covariate matrix are more like non-robustness of OLS with respect to outlier contamination. Also, the lambda should be 1/(2τ²), not σ²/τ², since ln P(β) = -p·ln(τ√(2π)) - ||β||₂²/(2τ²). Overall this was very helpful, cheers!
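For reference, writing the full MAP objective out makes clear where the two conventions for lambda come from (a sketch; σ is the noise scale, τ the prior scale):

```latex
\hat{\beta}_{\text{MAP}}
  = \arg\min_{\beta}\left[\frac{1}{2\sigma^2}\|y - X\beta\|_2^2
      + \frac{1}{2\tau^2}\|\beta\|_2^2\right]
  = \arg\min_{\beta}\left[\|y - X\beta\|_2^2
      + \frac{\sigma^2}{\tau^2}\|\beta\|_2^2\right]
% lambda = 1/(2 tau^2) if the squared-error term keeps its 1/(2 sigma^2)
% factor; rescaling the objective by 2 sigma^2 gives lambda = sigma^2/tau^2.
```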
@datle1339 2 years ago
Really great, thank you.
@chenqu773 3 years ago
Thank you very much
@AnotherBrickinWall 1 year ago
Great, thanks! I was feeling the same discomfort about the origin of these...