L1 and L2 Regularization in Machine Learning: Easy Explanation for Data Science Interviews

6,674 views

Emma Ding

A day ago

Regularization is a machine learning technique that adds a regularization term to a model's loss function in order to improve its generalization. In this video, I explain both L1 and L2 regularization, the main differences between the two methods, and the pros and cons of each so you can decide when to use which.
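For anyone who wants to try this hands-on, here is a minimal sketch (not from the video) using scikit-learn's Lasso (L1) and Ridge (L2) on synthetic data; the dataset, alpha values, and variable names are illustrative assumptions only.

# Minimal sketch comparing L1 (Lasso) and L2 (Ridge) regularization.
# The data and alpha values are made up purely for illustration.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first two features matter; the remaining eight are noise.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: proportional to sum(|w|)
ridge = Ridge(alpha=0.1).fit(X, y)   # L2 penalty: proportional to sum(w^2)

print("Lasso coefficients:", np.round(lasso.coef_, 3))  # typically several exact zeros
print("Ridge coefficients:", np.round(ridge.coef_, 3))  # small but usually nonzero

The contrast in the printed coefficients is the practical difference discussed in the video: L1 tends to zero out unimportant weights (sparsity), while L2 shrinks all weights toward zero without eliminating them.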
🟢Get all my free data science interview resources
www.emmading.com/resources
🟡 Product Case Interview Cheatsheet www.emmading.com/product-case...
🟠 Statistics Interview Cheatsheet www.emmading.com/statistics-i...
🟣 Behavioral Interview Cheatsheet www.emmading.com/behavioral-i...
🔵 Data Science Resume Checklist www.emmading.com/data-science...
✅ We work with Experienced Data Scientists to help them land their next dream jobs. Apply now: www.emmading.com/coaching
// Comment
Got any questions? Something to add?
Write a comment below to chat.
// Let's connect on LinkedIn:
/ emmading001
====================
Contents of this video:
====================
00:00 Introduction
00:21 Interview Questions
00:41 What is regularization?
01:27 When to use regularization?
01:47 Regularization techniques
03:44 L1 and L2 regularizations
03:55 L1 Regularization
08:03 L2 Regularization
10:50 L1 vs. L2 Regularization
11:47 Outro

Comments: 11
@MinhNguyen-lz1pg • a year ago
Man, I tried to find videos and blog posts about this topic and most of them just scratch the surface. Thanks for the deep analysis and comparison!
@emma_ding • a year ago
So glad you found it helpful, Minh! Thanks for watching. 😊
@AllieZhao • a year ago
Much clearer than what I learned elsewhere. I also noticed that you slowed your speaking pace, which makes it easier for people to follow.
@dimasushko9023 • 3 months ago
Some details for those who'd like to go deeper: regarding the L1 penalty, we can't really choose w1 = 1 and w2 = 0, because the total loss consists of two parts: the diamond-shaped L1 penalty plus the ellipse-shaped initial loss function. For the pairs (w1, w2) = (0, 1) and (1, 0), the L1 penalty values are the same, 1 + 0 = 0 + 1 = 1, so we then look at the initial loss function, which also depends on (w1, w2). For w1 = 0, w2 = 1 (closer to the ellipse center) the loss is smaller than for w1 = 1, w2 = 0 (farther from the center), as the contour plot shows. Therefore the optimizer won't go to (1, 0) and will converge on w1 = 0, w2 = 1.
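A quick numerical check of this argument, as a sketch: both corner points of the L1 diamond carry the same penalty, so the unregularized (elliptical) loss decides which one wins. The ellipse center and numbers below are invented purely for illustration.

# Hypothetical check: compare total loss at (0, 1) vs (1, 0) when both
# points have identical L1 penalty. The "ellipse" center is an assumption.
import numpy as np

def l1_penalty(w, lam=1.0):
    return lam * np.sum(np.abs(w))

def data_loss(w, center=np.array([0.2, 1.5])):
    # A quadratic bowl standing in for the original loss; its minimum
    # (the ellipse center) is closer to the w2 axis than to the w1 axis.
    return np.sum((w - center) ** 2)

for w in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
    total = data_loss(w) + l1_penalty(w)
    print(w, "penalty:", l1_penalty(w),
          "data loss:", round(float(data_loss(w)), 3),
          "total:", round(float(total), 3))

# Both points have penalty 1, but (0, 1) sits closer to the ellipse center,
# so its total loss is lower -- matching the comment's conclusion.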
@HumbertoMoura • a year ago
Great explanation, Emma! Have a nice day!
@emma_ding • a year ago
Many of you have asked me to share my presentation notes, and now… I have them for you! Download all the PDFs of my Notion pages at www.emmading.com/get-all-my-free-resources. Enjoy!
@louisforlibertarian • a year ago
Great vid! It's like a fast recap of a college stat class.
@crystalcleargirl07 • 6 months ago
Thank you so much for this clear explanation. It has helped me more than the Coursera course.
@muhannedalogaidi7877 • 10 months ago
Hello Emma, I've started switching to AI/ML and noticed your website and courses. Are your training and courses suitable for beginners? Also, I'm not sure whether you have dedicated courses on statistics and mathematics for AI/ML. Thank you!
@cosmicfluke3718 • a year ago
How can increasing alpha decrease the weights? Can you please explain? 0.1 is bigger than 0.001, and if the weight is 0.4, then 0.1 * 0.4 = 0.04 whereas 0.001 * 0.4 = 0.0004, so the smaller the alpha, the smaller the product, which is closer to zero, correct? I feel that what you mean by a bigger alpha is an alpha with a bigger negative power. Isn't that right? Can you please clarify?
@davidskarbrevik • a year ago
"increasing the alpha would decrease the weight" does not refer to the calculation of alpha * weights, it refers to what happens when you minimize your regularized loss function.