Loss Functions - EXPLAINED!

142,804 views

CodeEmporium

1 day ago

Comments: 84
@sephchan50 4 years ago
'check it out when you want to lower your self esteem' damn that sentence caught me off guard
@BB-rh5bp 4 years ago
Nice voice, tones, speed, and graphics! Perfect for studying anytime. Thanks!
@oscarsal433 4 years ago
Wow, impressive! More value than dozens of study hours!!! Love it!
@CodeEmporium 4 years ago
I'm glad this helps people like you :)
@MarcelloNesca 3 years ago
This was an incredibly accessible video! You explained complex subjects simply enough to make me curious about going deeper into loss functions. Very awesome, thank you
@CodeEmporium 3 years ago
Perfect! That's the intention :)
@kayingtang1493 3 years ago
Finally found a video that explains loss functions clearly. Thanks for your effort!!
@CodeEmporium 3 years ago
Glad it helps!
@bhargav7476 3 years ago
Welsch loss is what I needed for my problem, thanks
@geeky_explorer9105 3 years ago
Mannnnn! You nailed it, thanks a lot for this video, great informative lecture, keep making this content and keep inspiring us
@mahaalabduljalil6596 5 months ago
I enjoyed the video so much but honestly subscribed to keep the lights ON in your apartment :)
@hillosand 3 years ago
Absolute loss does not 'ignore outliers altogether'. Outliers still have more effect on the absolute loss function than non-outliers do, it's just not as extreme an effect as MSE. The right-hand graphic at about 2:35 is wrong, the fitted line would be shifted much more upwards for low values of x.
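This point can be checked numerically. A minimal Python sketch (not from the video; the data is made up): when fitting a single constant, squared loss picks the mean while absolute loss picks the median, so an outlier drags the squared-loss fit far more, yet still contributes a constant ±1 slope under absolute loss rather than being ignored.

```python
from statistics import mean, median

# Fit a single constant c to the data. The squared-loss minimizer is the
# mean; the absolute-loss minimizer is the median.
data = [1.0, 1.0, 1.0, 1.0, 100.0]  # one large outlier

l2_fit = mean(data)    # squared loss: dragged hard toward the outlier
l1_fit = median(data)  # absolute loss: barely moved

print(l2_fit)  # 20.8
print(l1_fit)  # 1.0

# The outlier is not ignored under absolute loss: it still contributes a
# constant slope of +/-1 to the loss, versus a slope proportional to the
# residual (here ~2 * 79.2) under squared loss.
```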
@handokosupeno5425 2 years ago
Amazing explanation bro
@ShouseD 3 years ago
Awesome explanation!
@GauravSharma-ui4yd 5 years ago
Hey Ajay, this is Gaurav. I always love watching your in-depth videos. Please make in-depth videos on various optimizers like Adam, Adagrad, FTRL, and all the others
@CodeEmporium 5 years ago
Thanks! Thinking about that optimizer video.
@AhmedThahir2002 1 year ago
How is Squared Loss equivalent to setting alpha=2 in the adaptive loss? Won't alpha=2 make the numerator 2-2=0?
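The short answer: at alpha=2 the closed form becomes a 0 times infinity indeterminate expression, so the squared-loss case is defined as the limit alpha -> 2, and implementations special-case it. A hedged sketch of the general adaptive loss (function name and default scale c are my own; the formula is my reading of Barron's 2019 paper, so double-check against the source):

```python
import math

def adaptive_loss(x, alpha, c=1.0):
    """General adaptive robust loss (assumed form from Barron 2019).

    Plugging alpha=2 directly into the closed form gives 0 * inf
    (the |alpha - 2| factor vanishes while the inner term blows up),
    so the alpha=2 and alpha=0 cases are handled as limits.
    """
    z = (x / c) ** 2
    if alpha == 2.0:
        return 0.5 * z                      # limit as alpha -> 2: squared loss
    if alpha == 0.0:
        return math.log(0.5 * z + 1.0)      # limit as alpha -> 0: Cauchy-like
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

# Approaching alpha = 2 from nearby values recovers squared loss:
print(adaptive_loss(3.0, 1.999))  # close to 0.5 * 3**2 = 4.5
print(adaptive_loss(3.0, 2.0))    # 4.5 exactly (the special-cased limit)
```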
@1337Rinz 1 year ago
bro you are my hero 🥰
@emrek1 1 year ago
You said the pseudo-Huber loss is differentiable at 3:00, but it is not continuous if alpha is not 1. It may not be differentiable even when alpha is 1, am I wrong?
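For reference, the usual pseudo-Huber form is delta^2 * (sqrt(1 + (a/delta)^2) - 1), which is smooth everywhere for any delta > 0 (the video's parameterization may differ). A quick numeric check of the one-sided slopes at a = 0, with illustrative names:

```python
import math

def pseudo_huber(a, delta=1.0):
    # Standard pseudo-Huber: smooth for every delta > 0,
    # roughly quadratic near 0 and linear for large |a|.
    return delta ** 2 * (math.sqrt(1.0 + (a / delta) ** 2) - 1.0)

h = 1e-6
left = (pseudo_huber(0.0) - pseudo_huber(-h)) / h   # slope from the left
right = (pseudo_huber(h) - pseudo_huber(0.0)) / h   # slope from the right
print(abs(left - right))  # tiny (~1e-6): the one-sided slopes agree, no kink

left_abs = (abs(0.0) - abs(-h)) / h                 # same check for |a|
right_abs = (abs(h) - abs(0.0)) / h
print(abs(left_abs - right_abs))  # 2.0: absolute loss has a kink at 0
```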
@satyendrayadav3123 4 years ago
You are awesome man!!!
@templetonpeck1084 2 years ago
This video was freaking awesome!
@Manicscitzo 3 years ago
Well explained! Good work
@CodeEmporium 3 years ago
Thanks!
@gregorygreif2140 2 years ago
I love your animations! How did you make them?
@KaranKinariwala1995 3 years ago
Hey, firstly great explanation! Thanks! Secondly, I have a minor clarification: can we then, in conclusion, say that finding the best loss function for a particular task also depends on the second derivative of the loss function? Since the final loss depends on the convex nature of the loss function.
@vivek2319 4 years ago
Thanks for this video, nicely explained.
@emmanuelgoldstein3682 8 months ago
Hello, not sure if you will reply to this older video, but at 7:01 you choose the function with the greatest amount of loss and say that it fits the data the best. Was this an error? If not, can you explain why?
@skyp4216 4 months ago
I believe it's because the outliers' effect is the least
@ethiopiansickness 3 years ago
Great video! But I really need help understanding something. I thought the purpose of calculating MSE, SSE etc. was to get a sense of how accurate your regression model is. 0:55 I noticed how when you introduced the outliers you said that the model tries to change to also incorporate the outliers. So does that also change the linear regression model too? I'm also confused because I thought linear regression (OLS) already computes the minimum of the sum of squared errors.
@codingwithnipu 1 year ago
What tool did you use for the visualization graph?
@jonathanduran2921 3 years ago
One thing I never understood.. why are we calling the L2 error the sum of squares.. there's a square root that is implemented on the sum of the squares.. doesn't this offset the impact of outliers on the L2 norm?
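For what it's worth: the loss usually minimized is the sum of squares itself (SSE/MSE), with no square root; the L2 *norm* adds the root. And since sqrt is monotonic, minimizing either gives the same parameters, so the root doesn't blunt the outlier sensitivity. A small sketch (the residuals are made up):

```python
import math

residuals = [1.0, 1.0, 10.0]  # last one is an outlier

sse = sum(r ** 2 for r in residuals)  # squared-error loss (no root)
l2_norm = math.sqrt(sse)              # L2 norm of the residual vector

print(sse)      # 102.0 (the outlier contributes 100 of it)
print(l2_norm)  # sqrt(102), about 10.1

# sqrt is strictly increasing, so any parameters that minimize the L2
# norm also minimize the SSE: the root rescales the loss value but not
# the argmin, and the outlier's squared contribution still dominates.
```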
@ankitjaiswal272 2 years ago
Best Part: Check it out to lower your self esteem 😂😂
@CodeEmporium 2 years ago
Thank you for noticing the humor haha
@johnnysmith6876 3 years ago
NICE video 😀😀
@hariharans.j5246 5 years ago
Great as always man! Make videos about reward modelling and neural ODEs
@CodeEmporium 5 years ago
Thanks homie. I'll check more on Neural ODEs
@menglin7432 4 years ago
This is gold
@euclitian 5 years ago
Do you use 3b1b's manim library to make the animations?
@CodeEmporium 5 years ago
I only used manim for a gradient descent video I made some time ago. I used a lot of Jonathan Barron's animations here (description for more details)
@3washoka 4 years ago
Thank you for the video
@Slim-bob 2 years ago
sooooo goood!
@students4life821 4 years ago
Your video is great but the intro scares me :p Thanks, mate!
@EW-mb1ih 2 years ago
What is the epsilon-insensitive loss function?
@ankitganeshpurkar 4 years ago
good info
@CodeEmporium 4 years ago
Thanks for watching
@nadiaterezon8281 3 years ago
Sorry, I feel confused... how do we even calculate the absolute loss and squared loss in the first place?
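In case it helps a future reader, a bare-bones sketch of both losses for a model's predictions (the numbers are invented): take the residuals, then average either their absolute values or their squares.

```python
y_true = [3.0, 5.0, 2.5]   # observed targets
y_pred = [2.5, 5.0, 4.0]   # model predictions

errors = [t - p for t, p in zip(y_true, y_pred)]  # residuals: 0.5, 0.0, -1.5

absolute_loss = sum(abs(e) for e in errors) / len(errors)  # mean |error|
squared_loss = sum(e ** 2 for e in errors) / len(errors)   # mean error^2

print(absolute_loss)  # (0.5 + 0.0 + 1.5) / 3 = 0.666...
print(squared_loss)   # (0.25 + 0.0 + 2.25) / 3 = 0.833...
```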
@hardikvegad3508 3 years ago
One question: how do we apply/implement the adaptive loss? Do we have to create a list of random values of alpha, like we do in RandomizedSearchCV or grid search, or is there another way? ... Btw your explanation was amazing.
@arkasaha4412 5 years ago
Nice video! How about doing a mathematical video related to eigenvalues/vectors in ML applications?
@DiogoSanti 4 years ago
Great job!!!!
@alexandermass4552 1 year ago
As a physicist, I call this a potential. I think mathematicians would also call it that. Only people at Google, and those who think artificial intelligence has something to do with commercial industry, do not understand that it was originally a matter of science.
@CodeEmporium 1 year ago
True. Many industry practitioners are interested in "How can I use AI to help me" over "What is AI"
@gilbertcabigasalmazan3289 3 years ago
How do you visualize data points?
@davidmurphy563 2 years ago
"Check it out if you want to lower your self-esteem" My self-esteem is output by a ReLU activation function, you can't touch it.
@david-vr1ty 3 years ago
Great video. Is the code for the graphics available somewhere? I would like to use and adapt it for my purposes
@fernandogonazales1797 3 years ago
great vid ty
@CodeEmporium 3 years ago
Anytime!
@ehza 4 years ago
nice man
@Usernotknown21 3 years ago
Maybe start off with what a loss function is
@GaneshMohane 5 months ago
Good
@arsalan2780 4 years ago
Can you recommend books as well, from where you read this? I know a couple, but they cover all topics of DL
@reynaldoquispe9335 4 years ago
Good explanation (Y)
@jameswood6445 4 years ago
10/10
@piqieme 4 years ago
I was forced to study this by myself for my final project, even though I'm majoring in networking. Ughhh.
@CodeEmporium 4 years ago
I know. It sucks, right?
@piqieme 4 years ago
@@CodeEmporium It sure is. All my friends are working on IoT, and me? Damn. Deep learning. Haven't been able to sleep normally for these past few months, learning everything from scratch.
@haimantidas843 4 months ago
Is this video for ghosts only?
@haimantidas843 4 months ago
😄
@revimfadli4666 2 years ago
Is this loss?
@CodeEmporium 2 years ago
Yea. Different types of loss functions you can optimize for when creating a model
@shmarvdogg69420 4 years ago
Why is this video second to Siraj Raval's? His is probably a rip-off of this one, so really you should be on top
@vinayreddy8683 4 years ago
*why don't you upload 2-3 videos per week* Just saying.
@sushantkarki2708 4 years ago
Too much information to process at once
@arturr5 4 years ago
This didn't explain anything. In cross entropy you used a comparison to a cell tower but didn't translate it to machine learning at all. In a real model, what are the 3 bits? What are the 5 bits??????
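To connect the bits analogy back to a model: in classification, the "bits" are -log2 of the probability the model assigns to the true class, and cross-entropy averages them over examples (ML libraries usually use natural log, which only changes the units). A tiny sketch (the probabilities are made up, not from the video):

```python
import math

# Hypothetical 3-class classifier output for one example:
p_pred = [0.7, 0.2, 0.1]   # predicted class probabilities
y_true = [1, 0, 0]         # one-hot label: the true class is class 0

# Cross-entropy for this example, in bits: -log2 of the probability
# assigned to the true class. Confident correct predictions cost few
# bits; confident wrong ones cost many.
cross_entropy = -sum(y * math.log2(p) for y, p in zip(y_true, p_pred) if y)
print(cross_entropy)  # -log2(0.7), about 0.51 bits
```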
@menaxyzhou9479 1 year ago
lmao 6:03
@maltml 5 years ago
Isn't adaptive loss kind of... dumb? Instead of solving a problem, you redefine the problem until you get a satisfactory answer. And apart from 2D regression, you have no way of knowing if the model is doing what you want.
@johnheikens 5 years ago
You just copied the video from Jon Barron and added your own voice!
@CodeEmporium 5 years ago
I used his animations for 2 of the losses and the comparison of 4 losses later in the video. Attributed him when I talk about his paper and at the top of the description. Also, 4 of the 6 losses discussed were NOT in his video. He never talked about classification or the absolute loss.
@johnheikens 5 years ago
@@CodeEmporium I hope you asked his permission (as this video has the potential to replace his video). I see that you have a Creative Commons license on your video, while his video doesn't. This means that if someone uses your video, because he would think it is Creative Commons, he also gets a copystrike as soon as Jon Barron copystrikes your video. You can't just copy parts of another video and tell people they are fine when they use it (by CC license).
@thedailyepochs338 4 years ago
@@johnheikens hey john
@thedailyepochs338 4 years ago
@@johnheikens Shut up
@Youshisu 3 years ago
half of your video is stolen xD