'Check it out when you want to lower your self-esteem' damn, that sentence caught me off guard
@BB-rh5bp 4 years ago
Nice voice, tones, speed, and graphics! Perfect for studying anytime. Thanks!
@oscarsal433 4 years ago
Wow, impressive! More value than dozens of study hours! Love it!
@CodeEmporium 4 years ago
I'm glad this helps people like you :)
@MarcelloNesca 3 years ago
This was an incredibly accessible video! You explained complex subjects simply enough to make me curious about going deeper in loss functions. Very awesome, thank you
@CodeEmporium 3 years ago
Perfect! That's the intention :)
@kayingtang1493 3 years ago
Finally find a video that explains loss function clearly. Thanks for your effort!!
@CodeEmporium 3 years ago
Glad it helps!
@bhargav7476 3 years ago
Welsch loss is what I needed for my problem, thanks
@geeky_explorer9105 3 years ago
Mannnnn! You nailed it, thanks a lot for this video, great informative lecture. Keep making content like this and keep inspiring us!
@mahaalabduljalil6596 5 months ago
I enjoyed the video so much but honestly subscribed to keep the lights ON in your apartment :)
@hillosand 3 years ago
Absolute loss does not 'ignore outliers altogether'. Outliers still have more effect on the absolute loss function than non-outliers do, it's just not as extreme an effect as MSE. The right-hand graphic at about 2:35 is wrong, the fitted line would be shifted much more upwards for low values of x.
@handokosupeno5425 2 years ago
Amazing explanation bro
@ShouseD 3 years ago
Awesome explanation!
@GauravSharma-ui4yd 5 years ago
Hey Ajay, this is Gaurav. I always love watching your in-depth videos. Please make in-depth videos on various optimizers like Adam, AdaGrad, FTRL, and all the others.
@CodeEmporium 5 years ago
Thanks! Thinking about that optimizer video.
@AhmedThahir2002 1 year ago
How is squared loss equivalent to setting alpha = 2 in the adaptive loss? Won't alpha = 2 make the numerator 2 - 2 = 0?
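The |alpha - 2| factor appears both inside and outside the bracket of Barron's general loss, so alpha = 2 is defined as a limit rather than by direct substitution, and that limit works out to (1/2)(x/c)^2. A numerical sketch (assuming the general form from the paper referenced in the video):

```python
import numpy as np

def adaptive_loss(x, alpha, c=1.0):
    # Barron's general form; alpha = 2 and alpha = 0 are defined as limits
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

x = np.linspace(-3.0, 3.0, 7)
near_two = adaptive_loss(x, 2.0 - 1e-8)   # evaluate just below the singular point
squared = 0.5 * x ** 2                    # the alpha -> 2 limit
# near_two matches squared to high precision
```

So the numerator does go to zero, but the bracketed term blows up at the same rate, and their product converges to the squared loss.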
@1337Rinz 1 year ago
bro you are my hero 🥰
@emrek1 1 year ago
You said the pseudo-Huber loss is differentiable at 3:00, but it is not continuous if alpha is not 1. It may not be differentiable even when alpha is 1. Am I wrong?
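For reference, the usual pseudo-Huber definition L(a) = delta^2 * (sqrt(1 + (a/delta)^2) - 1) is smooth everywhere; it is the piecewise Huber loss that has the non-smooth corner, and pseudo-Huber was introduced to round it off. A quick numerical check of the derivative, including at a = 0:

```python
import numpy as np

def pseudo_huber(a, delta=1.0):
    return delta ** 2 * (np.sqrt(1.0 + (a / delta) ** 2) - 1.0)

a = np.linspace(-2.0, 2.0, 9)             # includes a = 0, the Huber "corner"
h = 1e-6
numeric = (pseudo_huber(a + h) - pseudo_huber(a - h)) / (2 * h)
analytic = a / np.sqrt(1.0 + a ** 2)      # smooth derivative, defined everywhere
```

The central-difference gradient matches the analytic one at every point, so the function is differentiable across the whole range tested.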
@satyendrayadav3123 4 years ago
You are awesome man!!!
@templetonpeck1084 2 years ago
This video was freaking awesome!
@Manicscitzo 3 years ago
Well explained! Good work
@CodeEmporium 3 years ago
Thanks!
@gregorygreif2140 2 years ago
I love your animations! How did you make them?
@KaranKinariwala1995 3 years ago
Hey, firstly, great explanation, thanks! Secondly, a minor clarification: can we then conclude that finding the best loss function for a particular task also depends on the second derivative of the loss function, since the final loss depends on the convexity of the loss function?
@vivek2319 4 years ago
Thanks for this video, nicely explained.
@emmanuelgoldstein3682 8 months ago
Hello, not sure if you will reply to this older video but at 7:01 you choose the function with the greatest amount of loss and say that it fits the data the best. Was this an error? If not, can you explain why?
@skyp4216 4 months ago
I believe it's because the outliers affect it the least.
@ethiopiansickness 3 years ago
Great video! But I really need help understanding something. I thought the purpose of calculating MSE, SSE, etc. was to get a sense of how accurate your regression model is. 0:55 I noticed that when you introduced the outliers, you said the model tries to change to also incorporate them. So does that change the linear regression model too? I'm also confused because I thought linear regression (OLS) already computes the minimum of the sum of squared errors.
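On the OLS point: OLS always finds the exact minimizer of the squared error for whatever data it is given, so adding outliers doesn't break the minimization; it changes the data, and therefore which line is the minimizer. A small sketch with made-up numbers:

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 3.0 * x + 2.0                        # perfectly linear data

m_clean, b_clean = np.polyfit(x, y, 1)   # OLS recovers slope 3, intercept 2

y_out = y.copy()
y_out[-1] += 50.0                        # one outlier: the data changed, not OLS
m_out, b_out = np.polyfit(x, y_out, 1)   # so the minimizing line moves too
```

Both fits are the true SSE minimum for their respective datasets; the second one just belongs to a dataset whose minimum sits somewhere else.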
@codingwithnipu 1 year ago
What tool did you use for the visualization graphs?
@jonathanduran2921 3 years ago
One thing I never understood: why are we calling the L2 error the sum of squares? There's a square root applied on top of the sum of the squares. Doesn't this offset the impact of outliers on the L2 norm?
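On the square-root question: the root is monotonic, so it never changes which parameters minimize the loss, and in practice the training objective is usually the (mean) sum of squares without the root anyway. The outlier sensitivity comes from squaring each residual, not from the final root. A small check with a constant predictor:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 10.0])        # last value is an outlier
c = np.linspace(0.0, 10.0, 1001)           # candidate constant predictions

sse = ((y - c[:, None]) ** 2).sum(axis=1)  # sum of squared errors
l2 = np.sqrt(sse)                          # the same thing with the root

best_sse = c[sse.argmin()]
best_l2 = c[l2.argmin()]
# Both pick ~4.0, the mean, which the outlier has dragged upward;
# the argmin is identical with or without the square root.
```

So the root rescales the loss value but not the fitted model, and the outlier's pull on the mean is unchanged.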
@ankitjaiswal272 2 years ago
Best Part: Check it out to lower your self esteem 😂😂
@CodeEmporium 2 years ago
Thank you for noticing the humor haha
@johnnysmith6876 3 years ago
NICE video 😀😀
@hariharans.j5246 5 years ago
Great as always man! Make videos about reward modelling and neural ODEs
@CodeEmporium 5 years ago
Thanks homie. I'll check more on Neural ODEs
@menglin7432 4 years ago
This is gold
@euclitian 5 years ago
do you use 3b1b's manim library to make the animations?
@CodeEmporium 5 years ago
I only used manim for a gradient descent video I made some time ago. I used a lot of Jonathan Barron's animations here (description for more details)
@3washoka 4 years ago
Thank you for the video
@Slim-bob 2 years ago
sooooo goood!
@students4life821 4 years ago
Your video is great but the intro scares me :p Thanks, mate!
@EW-mb1ih 2 years ago
What is epsilon insensitive loss function?
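It's the loss used in support vector regression: residuals inside a tube of half-width epsilon cost nothing, and anything outside the tube grows linearly. A minimal sketch (epsilon = 0.5 is just an example value):

```python
import numpy as np

def eps_insensitive(residual, eps=0.5):
    # Zero inside the eps-tube, linear outside (the SVR loss)
    return np.maximum(0.0, np.abs(residual) - eps)

r = np.array([-2.0, -0.4, 0.0, 0.3, 1.5])
losses = eps_insensitive(r)   # -> [1.5, 0.0, 0.0, 0.0, 1.0]
```

The flat region makes the model indifferent to small errors, which is what produces the sparse support vectors in SVR.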
@ankitganeshpurkar 4 years ago
good info
@CodeEmporium 4 years ago
Thanks for watching
@nadiaterezon8281 3 years ago
Sorry, I feel confused... how do we even calculate the absolute loss and squared loss in the first place?
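In case it helps, both are plain functions of the residuals between the model's predictions and the targets; a minimal example with made-up numbers:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])         # targets
y_pred = np.array([2.5, 7.0, 2.0])         # model outputs, e.g. m*x + b

residual = y_true - y_pred                 # [0.5, -2.0, 0.0]
squared_loss = np.mean(residual ** 2)      # (0.25 + 4.0 + 0.0) / 3
absolute_loss = np.mean(np.abs(residual))  # (0.5 + 2.0 + 0.0) / 3
```

Training then means adjusting the model's parameters to make this number as small as possible.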
@hardikvegad3508 3 years ago
One question: how do we apply/implement the adaptive loss? Do we have to create a list of random values of alpha, like we do in RandomizedSearchCV or grid search, or is there another way? ... Btw your explanation was amazing.
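In Barron's paper, alpha is learned jointly with the model through a probabilistic version of the loss, but the simpler route the comment suggests also works: treat alpha as a hyperparameter and search over it. A sketch for a 1-D location estimate, with a grid search standing in for a real optimizer:

```python
import numpy as np

def adaptive_loss(r, alpha, c=1.0):
    # Barron's general form; alpha = 2 and 0 are limit cases, hence the epsilon
    b = abs(alpha - 2.0) + 1e-12
    return (b / alpha) * (((r / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

data = np.concatenate([np.full(20, 1.0), [25.0]])  # cluster at 1 plus an outlier
grid = np.linspace(-5.0, 30.0, 3501)               # candidate location estimates

def fit_location(alpha):
    total = adaptive_loss(data - grid[:, None], alpha).sum(axis=1)
    return grid[total.argmin()]

est_sq = fit_location(2.0)      # squared-loss behavior: pulled toward the outlier
est_rb = fit_location(-2.0)     # heavy-tailed behavior: stays at the cluster
```

Each alpha candidate would normally be scored on held-out data, exactly as in a grid search over any other hyperparameter.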
@arkasaha4412 5 years ago
Nice video! How about doing a mathematical video related to eigenvalues/vectors in ML applications?
@DiogoSanti 4 years ago
Great job!!!!
@alexandermass4552 1 year ago
As a physicist I would call this a potential, and I think mathematicians would call it that too. Only people at Google, and those who think artificial intelligence is about commercial industry, forget that it was originally a matter of science.
@CodeEmporium 1 year ago
True. Many industry practitioners are interested in “How can I use AI to help me” over “What is AI”.
@gilbertcabigasalmazan3289 3 years ago
How do you visualize data points?
@davidmurphy563 2 years ago
"Check it out if you want to lower your self-esteem" My self-esteem is outputted by a ReLU activation function, you can't touch it.
@david-vr1ty 3 years ago
Great video. Is the code for the graphics somewhere available? I would like to use and adapt it for my purposes
@fernandogonazales1797 3 years ago
great vid ty
@CodeEmporium 3 years ago
Anytime!
@ehza 4 years ago
nice man
@Usernotknown21 3 years ago
Maybe start off with what a loss function is.
@GaneshMohane 5 months ago
Good
@arsalan2780 4 years ago
Can you recommend books as well, from where you read this? I know a couple, but they cover all the topics of DL.
@reynaldoquispe9335 4 years ago
Good explanation (Y)
@jameswood6445 4 years ago
10/10
@piqieme 4 years ago
I was forced to study this by myself for my final project, even though I'm majoring in networking. Ughhh.
@CodeEmporium 4 years ago
I know. It sucks, right?
@piqieme 4 years ago
@@CodeEmporium It sure is. All my friends are working on IoT, and me? Damn. Deep learning. I haven't been able to sleep normally these past few months, learning everything from scratch.
@haimantidas843 4 months ago
is this video for ghost only?
@haimantidas843 4 months ago
😄
@revimfadli4666 2 years ago
Is this loss?
@CodeEmporium 2 years ago
Yea. Different types of loss functions you can optimize for when creating a model
@shmarvdogg69420 4 years ago
Why is this video second to Siraj Raval's? His is probably a rip-off of this one, so really you should be on top.
@vinayreddy8683 4 years ago
*why don't you upload 2-3 videos per week* just saying.
@sushantkarki2708 4 years ago
too much information to process at once
@arturr5 4 years ago
This didn't explain anything. In cross entropy you used a comparison to a cell tower but didn't translate it to machine learning at all. In a real model, what are the 3 bits? What are the 5 bits?
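The bits in that analogy are expected code lengths: cross entropy H(p, q) = -sum(p * log2(q)) is the average message length when events actually follow distribution p but the code was designed for distribution q. In a classifier, p is the one-hot true label and q is the softmax output. A small sketch:

```python
import numpy as np

def cross_entropy_bits(p, q):
    # Average message length in bits: events follow p, code designed for q
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0                      # 0 * log(0) terms contribute nothing
    return float(-(p[m] * np.log2(q[m])).sum())

p = [0.5, 0.25, 0.25]
h_pp = cross_entropy_bits(p, p)                   # entropy itself: 1.5 bits
h_pq = cross_entropy_bits(p, [1/3, 1/3, 1/3])     # ~1.585 bits, always >= entropy

# Classification case: one-hot label vs softmax probabilities
nll = cross_entropy_bits([0, 1, 0], [0.1, 0.7, 0.2])   # -log2(0.7)
```

Training a classifier with cross-entropy loss means pushing q toward p, which drives the extra bits (h_pq minus the entropy) toward zero.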
@menaxyzhou9479 1 year ago
lmao 6:03
@maltml 5 years ago
Isn't adaptive loss kind of... dumb? Instead of solving a problem you redefine the problem until you get a satisfactory answer. And apart from 2D regression you have no way of knowing if model is doing what you want.
@johnheikens 5 years ago
You just copied the video from Jon Barron and added your own voice!
@CodeEmporium 5 years ago
I used his animations for 2 of the losses and the comparison of 4 losses later in the video. Attributed him when I talk about his paper and the top of the description. Also, 4 of the 6 losses discussed were NOT in his video. He never talked about classification or the absolute loss.
@johnheikens 5 years ago
@@CodeEmporium I hope you asked his permission (as this video has the potential to replace his video). I see that you have a Creative Commons license on your video, while his video doesn't. This means that if someone uses your video, believing it to be Creative Commons, they also get a copyright strike as soon as Jon Barron strikes your video. You can't just copy parts of another video and tell people (via the CC license) that they are fine to use it.