You made it so easy. Thank you so much for the professionalism and simplicity you put into the videos; they are very underrated.
@gollum95530 4 years ago
I've watched dozens of videos on regularization and your explanation is perfect! thanks!
@staciarocco8305 3 years ago
Managed to make this work thanks to your steps, Ahmad!!
@kaitlynmorgan8441 4 years ago
Thank you so much for this video, I love you! I hope it helps with the exam tomorrow.
@grajov123 3 years ago
One of the best lectures on ridge regression
@silvacervantez3737 3 years ago
A machine learning guru !!
@franco9646 4 years ago
Keep up the good work, blessings from the USA my dear friend :)
@harrietkinzer7987 4 years ago
Awesome lecture and very clear, thank you a lot.
@dataexpress4399 4 years ago
Amazing Explanation
@maryameek5661 4 years ago
Excellent channel, thank you a lot for your helpful and well organized videos.
@cucciolatina 3 years ago
Lovely tutorial, Ahmad!
@turanyapraklioglu7759 4 years ago
Great video, this series is coming along simple and useful.
@wendellhaviland5025 3 years ago
Perfect lesson
@kylepatrick3526 3 years ago
What an up-to-the-point tutorial 🙏🏻
@MrKean1 4 years ago
Thank you for this important lecture.
@jazmynealicia2506 3 years ago
Thank you Dr
@Mel32G 4 years ago
Please make a video on elastic net regressors.
@alexfreeman1036 4 years ago
13:51 Does that mean that as alpha increases, so does underfitting?
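That is the expected behavior. A minimal sketch (on synthetic data, not the video's exact example) showing that larger alpha shrinks the ridge coefficients toward zero, which is the mechanism behind underfitting:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic linear data (assumed setup for illustration only)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([3.0, -2.0, 1.0]) + rng.normal(scale=0.1, size=100)

# Fit ridge regression with increasing regularization strength
norms = []
for alpha in [0.01, 1.0, 100.0, 10000.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    norms.append(np.linalg.norm(model.coef_))

# Coefficient magnitudes shrink monotonically as alpha grows,
# pushing the model toward a flatter (underfitting) fit.
assert norms[0] > norms[1] > norms[2] > norms[3]
```

With a very large alpha the penalty dominates the MSE term, so the model is forced toward near-zero coefficients regardless of the data.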
@MyCocomartin 4 years ago
The formula appearing in the video, could you please do a lecture to prove it? I am having a hard time deriving it. Thank you so much, Ahmad.
@heathdotty3022 4 years ago
12:52 What if I set polynomial to false, what will be the expected outcome?
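A hedged sketch of the likely outcome (the video's exact `polynomial` flag and data are assumptions here): on nonlinear data, ridge regression without a polynomial-features step can only fit a straight line, so disabling it should lower the fit quality:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Quadratic data (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 2 + X[:, 0] + rng.normal(scale=0.1, size=200)

# "polynomial = false": plain ridge on the raw feature
linear = Ridge(alpha=1.0).fit(X, y)
# "polynomial = true": polynomial expansion before ridge
poly = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0)).fit(X, y)

# The quadratic pipeline fits this data much better (higher R^2)
assert poly.score(X, y) > linear.score(X, y)
```

So the expected outcome of turning the polynomial step off on curved data is underfitting: the linear model cannot capture the curvature no matter how alpha is chosen.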
@awildacartier1418 4 years ago
I wish you worked as a professor where I study.
@jamesluna9859 4 years ago
can't thank you enough :)
@latashiatrammel1980 4 years ago
12:12 I wonder if pipelines will actually speed things up when training neural networks, but I wonder if this is ...
@francinasylvie9447 4 years ago
Thank you for this video, it's so helpful! I can't believe it's only at 1200. Consider a Patreon account so that people could reward you and thank you for your work.
@dataexpress4399 4 years ago
nice
@alikucukel3760 4 years ago
👏👏👏
@braxtonisabel1421 4 years ago
Oh this was recorded in Lebanon ? Aren’t you in France ?
@gladiscerezo3447 4 years ago
Lectures are coming along great. I have one concern: it seems like you are a bit tired while recording. Do you feel okay?
@yunyizhao8254 3 years ago
May I ask why you use half of alpha (kzbin.info/www/bejne/hqWlpqR3mcejnLs) rather than just alpha?
@AhmadBazzi 3 years ago
Well, when computing the gradient (with respect to theta_i), you end up with the gradient of the MSE term plus that of the penalty term. The gradient of the penalty term is then alpha x theta_i (instead of 2 x alpha x theta_i), which makes interpreting the gradient much simpler. In theory, you could multiply alpha by whatever constant you want; you would only be giving more or less importance to the penalty.
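The reply's point can be checked numerically. A minimal sketch (hypothetical values, not from the video): with the penalty written as (alpha/2) * sum(theta_i^2), its gradient is exactly alpha * theta, with no factor of 2 to carry around:

```python
import numpy as np

# The alpha/2 convention for the ridge penalty
def penalty(theta, alpha):
    # (alpha/2) * ||theta||^2
    return 0.5 * alpha * np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = rng.normal(size=5)
alpha = 0.7

# Analytic gradient of the penalty: simply alpha * theta
grad_analytic = alpha * theta

# Numerical check via central differences along each coordinate
eps = 1e-6
grad_numeric = np.array([
    (penalty(theta + eps * e, alpha) - penalty(theta - eps * e, alpha)) / (2 * eps)
    for e in np.eye(len(theta))
])

assert np.allclose(grad_analytic, grad_numeric, atol=1e-6)
```

Had the penalty been alpha * sum(theta_i^2), the same check would return 2 * alpha * theta, which is the factor the 1/2 is there to cancel.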
@michaelford9569 4 years ago
I just did a Google search and it turns out that Ahmad Bazzi is a signal processing researcher. I ask: from where do you buy time to publish all those high quality videos? Do you own a company editing and helping you record? I don't ...
@valerieesther7056 4 years ago
1:46 “Then instead of just minimizing this guy, you would want to minimize the MSE plus a penalty”. This guy literally talks mathematics.
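As a sketch, the objective quoted at 1:46 is the standard ridge cost, written here with the alpha/2 convention for m samples and n parameters (notation assumed, not taken verbatim from the video):

```latex
J(\theta) \;=\; \underbrace{\frac{1}{m}\sum_{i=1}^{m}\bigl(\theta^{\top}x^{(i)} - y^{(i)}\bigr)^{2}}_{\text{MSE}}
\;+\; \underbrace{\frac{\alpha}{2}\sum_{j=1}^{n}\theta_j^{2}}_{\text{penalty}}
```

Minimizing the MSE alone gives ordinary least squares; the penalty term trades some training fit for smaller coefficients, with alpha controlling the trade-off.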