Loss Functions in Deep Learning | Deep Learning | CampusX

64,232 views

CampusX


In this video, we'll understand the concept of Loss Functions and their role in training neural networks. Join me for a straightforward explanation to grasp how these functions impact model performance.
============================
Do you want to learn from me?
Check out my affordable mentorship program at: learnwith.campusx.in
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#DeepLearning #LossFunctions #NeuralNetworks #MachineLearning #AI #LearningBasics #SimplifiedLearning #modeltraining
⌚Time Stamps⌚
00:00 - Intro
01:09 - What is loss function?
11:08 - Loss functions in deep learning
14:20 - Loss function vs cost function
24:35 - Advantages/Disadvantages
59:13 - Outro
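As a quick companion to the video, here is a minimal NumPy sketch of two of the loss functions discussed — MSE for regression and binary cross-entropy for classification. These are the standard textbook definitions, not code from the video:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared residual (regression)."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy for 0/1 labels and predicted probabilities."""
    p = np.clip(y_pred, eps, 1 - eps)  # clip to avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.2, 0.7])
print(mse(y, p))                   # ~0.0467
print(binary_cross_entropy(y, p))  # ~0.2284
```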

Comments: 119
@AhmedAli-uj2js · 1 year ago
Every word and every minute of your teaching is worth a lot!
@anuradhabalasubramanian9845 · 1 year ago
Fantastic explanation, sir! Absolutely brilliant! Way to go, sir! Thank you so much for the crystal-clear explanation.
@kumarabhishek1064 · 2 years ago
When will you cover RNNs, encoder-decoders & Transformers? Also, if you could make mini projects on these topics, it would be great. Keep doing this great work of knowledge sharing; hope your tribe grows more. 👍
@pratikghute2343 · 1 year ago
The most underrated channel I have ever seen on YouTube! Best content seen so far... Thanks a lot.
@anitaga469 · 1 year ago
Good content, great explanation, and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching, Nitish sir.
@abhaykumaramanofficial · 2 years ago
Great content for me... now everything about loss functions is clear. Thank you.
@HirokiKudoGT · 1 year ago
This is the best explanation of the basics of losses; all my doubts are cleared. Thank you so much for this video.
@HARSHKUMAR-cm7ek · 8 months ago
Your content delivery is truly outstanding, sir. Although the numbers don't do justice to your teaching talent, let me tell you that I came here after seeing many paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure this channel will soon be among the top channels for ML and data science!!
@barunkaushik7015 · 1 year ago
Such a wonderful learning experience.
@FaizKhan-zu2kp · 1 year ago
Please continue the "100 Days of Deep Learning" series, sir; it's a humble request. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤
@abchpatel4745 · 6 months ago
One day this channel will become the most popular one for deep learning ❤️❤️
@AlAmin-xy5ff · 1 year ago
Sir, you are really amazing. I have learned a lot of things from your YouTube channel.
@palmangal · 10 months ago
It was a great explanation. Thank you so much for such amazing videos.
@paragvachhani4643 · 1 year ago
My morning begins with CampusX...
@santoshpal8612 · 5 months ago
Gentleman, you are on the right track.
@CodeyourLife32 · 8 days ago
Same, bro.
@practicemail3227 · 1 year ago
I was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. I really appreciate your hard work and passion. ❣🌼🌟
@talibdaryabi9434 · 1 year ago
I wanted this video and got it. Thank you.
@paruParu-rc1bu · 1 year ago
With all respect... thank you very much ❤
@farhadkhan3893 · 1 year ago
Thank you for your hard work.
@tanmayshinde7853 · 2 years ago
These loss functions are the same as those taught in machine learning; the difference lies in the Huber, binary, and categorical loss functions.
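The Huber loss mentioned here blends the two regression losses: it is quadratic for small residuals (like MSE) and linear for large ones (like MAE). A small NumPy sketch of the standard definition, where `delta` is the switch-over threshold:

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for |residual| <= delta, linear beyond.
    Combines MSE's smoothness with MAE's robustness to outliers."""
    r = y_true - y_pred
    small = np.abs(r) <= delta
    sq = 0.5 * r ** 2                        # MSE-like region
    lin = delta * (np.abs(r) - 0.5 * delta)  # MAE-like region
    return np.mean(np.where(small, sq, lin))

# An outlier (residual of 10) is penalised linearly, not quadratically:
print(huber(np.array([0.0]), np.array([10.0])))  # 9.5, not 50
```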
@amLife07 · 24 days ago
Thank you so much sir for another amazing lecture ❤😊
@KeyurPanchal0475 · 1 day ago
Simple and easy to understand.
@RanjitSingh-rq1qx · 1 year ago
Great work sir. Amazing 😍
@Keep_Laughfing · 1 year ago
Amazing sir 🙏🏻
@GamerBoy-ii4jc · 2 years ago
Amazing as always, sir!
@IRFANSAMS · 2 years ago
Awesome sir!
@jayantsharma2267 · 2 years ago
Great content
@Sara-fp1zw · 2 years ago
Thank you!!!
@SumanPokhrel0 · 1 year ago
Beautiful explanation
@parikshitshahane6799 · 4 months ago
You have excellent teaching skills, sir! It's like a college senior explaining concepts to me in my hostel room.
@sambitmohanty1758 · 2 years ago
Great video sir, as expected.
@rashidsiddiqui4502 · 4 months ago
Thank you so much sir, clear explanation.
@uzairrehman5765 · 9 months ago
Great content!
@Shisuiii69 · 5 months ago
Thanks for the timestamps, they're really helpful.
@rb4754 · 1 month ago
Mind-boggling!!
@shantanuekhande4788 · 2 years ago
Great explanation. Can you tell me why we need bias in a neural network, and how it is useful?
@narendraparmar1631 · 5 months ago
Very well explained, thanks.
@uddhavsangle2219 · 1 year ago
Nice explanation sir, thank you so much.
@mrityunjayupadhyay7332 · 1 year ago
Amazing
@faheemfbr9156 · 10 months ago
Very well explained
@narendersingh6492 · 2 months ago
This is so very important.
@safiullah353 · 1 year ago
How beautiful this is 🥰
@pavangoyal6840 · 2 years ago
Thank you
@ParthivShah · 3 months ago
Thank you, sir.
@rohansingh6329 · 5 months ago
Awesome man, just amazing...!!!
@bhojpuridance3715 · 10 months ago
Thanks sir
@Tusharchitrakar · 3 months ago
Great lecture as usual. Just one small clarification: binary cross-entropy is convex (though it has no closed-form solution), so it has a single global minimum and no local minima. This can be proved with simple calculus by checking that the second derivative is always greater than 0. So the statement that there are multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
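The convexity claim is easy to check numerically for logistic regression (a linear model with a sigmoid output), where BCE is indeed convex in the weights; once hidden layers are added, though, the loss surface of a deep network is generally non-convex, which may be what the lecture intended. A sketch with made-up data, sampling the loss along the single weight and checking that discrete second differences are non-negative:

```python
import numpy as np

# Tiny 1-D logistic regression: loss(w) = BCE(sigmoid(w * x), y).
x = np.array([0.5, -1.2, 2.0, 0.3])
y = np.array([1.0, 0.0, 1.0, 0.0])

def bce_loss(w):
    p = 1.0 / (1.0 + np.exp(-w * x))        # sigmoid predictions
    p = np.clip(p, 1e-12, 1 - 1e-12)        # guard log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Sample the loss curve and check discrete second differences >= 0
ws = np.linspace(-5, 5, 201)
L = np.array([bce_loss(w) for w in ws])
second_diff = L[:-2] - 2 * L[1:-1] + L[2:]
print(np.all(second_diff >= -1e-9))  # True: the curve is convex, one minimum
```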
@ShubhamSharma-gs9pt · 2 years ago
This playlist is a 💎💎💎💎💎
@manashkumarbhadra6208 · 2 months ago
Great work
@amitmishra1303 · 6 months ago
Nowadays my mornings and nights end with your lectures, sir 😅. Thanks for putting in so much effort.
@wahabmamond4368 · 8 months ago
Learning DL and Hindi together, respect from Afghanistan, sir!
@sanchitdeepsingh9663 · 8 months ago
Thanks sir
@kindaeasy9797 · 1 month ago
Amazing lecture!
@zkhan2023 · 2 years ago
Thanks sir
@hey.Sourin · 4 months ago
Thank you sir 😁😊
@RajaKumar-yx1uj · 2 years ago
Welcome back, sir 🤟
@ANKUSHKUMAR-jr1pf · 1 year ago
At 44:40, sir, you said that binary cross-entropy may have multiple minima, but binary cross-entropy is a convex function, so I think it won't have multiple minima.
@stoic_sapien1 · 1 month ago
44:52 Binary cross-entropy loss is a convex function; its only local minimum is also the global minimum.
@OguriRavindra · 4 months ago
Hi, I think the Huber loss example plot at 36:59 shows a classification example rather than a regression one; a regression line should pass through the data points instead of separating them.
@sumitprasad035 · 1 year ago
🦸‍♂Thank you Bhaiya...
@techsavy5669 · 2 years ago
Great, concise video. Loved it. A small question 💡: sometimes we do drop='first' to remove the redundant first column during one-hot encoding. Does that make a difference when using either of these categorical losses?
@pratikghute2343 · 1 year ago
I think this is either handled automatically or not needed, because dropping a column would mean we couldn't get the loss for that category.
@AmitUtkarsh99 · 9 months ago
Yes, it affects the model, because you should keep the number of parameters as small as possible for an optimised model. But we don't always; it depends on the variables or input: 2 categories can be represented by just one variable (2^1 = 2), and while 3 categories need at least 2 variables, 2^2 = 4, so we can drop one column.
@aakiffpanjwani1089 · 3 months ago
Can we use a step function as the activation for the last layer/prediction node in a classification problem with binary cross-entropy, for 0 and 1 outputs?
@ManasNandMohan · 1 month ago
Awesome
@partharora6023 · 2 years ago
Sir, carry on this series.
@abhisheksinghyadav4970 · 1 year ago
Please share the whiteboard, @CampusX.
@Avsjagannath · 6 months ago
Excellent teaching skills. Sir, please provide the notes as a PDF.
@AkashBhandwalkar · 2 years ago
Superb video, sir! Can you tell me which stylus you're using, and the name of the drawing/writing tablet? I want to buy one too.
@campusx-official · 2 years ago
Galaxy Tab S7+
@vinayakchhabra7208 · 1 year ago
best
@74kumarrohit · 4 months ago
Can you please also create videos for the remaining loss functions — for autoencoders, GANs, and Transformers? Thanks.
@lakshya8532 · 1 year ago
One disadvantage of MSE that I can figure out: if there are multiple local minima, the MSE loss function can lead to a local minimum instead of the global minimum.
@lonehawk4096 · 2 years ago
Sir, the ML MICE sklearn video is still pending; please make that video. The other playlists are also very helpful. Thanks for all the content.
@nxlamik1245 · 5 months ago
I am enjoying your videos like a web series, sir.
@suriyab8143 · 1 year ago
Sir, which tool are you using for the explanations in this video?
@kindaeasy9797 · 1 month ago
Easy, thanks!
@tejassrivastava6971 · 1 year ago
Wouldn't categorical and sparse cross-entropy become the same? After OHE, all log terms become zero except the current one, which gives the same result as sparse.
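Exactly — the two losses compute the same quantity and differ only in label format, which a quick NumPy check (with made-up softmax outputs) confirms:

```python
import numpy as np

probs = np.array([[0.3, 0.6, 0.1],
                  [0.2, 0.2, 0.6]])    # softmax outputs, 3 classes
labels_int = np.array([1, 2])          # sparse integer labels
labels_ohe = np.eye(3)[labels_int]     # same labels, one-hot encoded

# Categorical cross-entropy: sum over classes; the zeros in the
# one-hot vector kill every term except the true class.
cce = -np.mean(np.sum(labels_ohe * np.log(probs), axis=1))
# Sparse categorical cross-entropy: index the true class directly.
scce = -np.mean(np.log(probs[np.arange(len(labels_int)), labels_int]))

print(np.isclose(cce, scce))  # True: same value, different label format
```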
@user-fr9fg3rf2h · 4 months ago
At 21:06 [mean squared error]: when calculating the total error via y - ŷ, some values may be negative and can reduce the error (which we don't want), which is why we square after subtracting, as you said. My doubt: can't we just make the negative values positive? Then there would be no need to square. Please explain. Thank you. :)
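Making the negatives positive is itself a standard loss — the mean absolute error (MAE) — so both options exist. The usual reasons to square instead are that the squared loss is differentiable everywhere (|x| is not, at 0) and that it penalises large errors more heavily. A small comparison with made-up residuals:

```python
import numpy as np

residuals = np.array([0.5, -0.5, 3.0])   # y - y_hat

mae = np.mean(np.abs(residuals))  # "just make it positive": this is MAE
mse = np.mean(residuals ** 2)     # squaring: this is MSE

print(mae)  # 4/3 ~ 1.333: every unit of error counts the same
print(mse)  # 9.5/3 ~ 3.167: the outlier (3.0) contributes 9 of the 9.5
```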
@vinayakchhabra7208 · 1 year ago
Enjoyed it a lot!
@ashwinjain5566 · 11 months ago
At 36:27, shouldn't the line be nearly perpendicular to what you drew? Seems like a case of Simpson's paradox.
@ariondas7415 · 1 month ago
If the difference (yi - ŷi) is a decimal (less than 1), squaring diminishes the loss value rather than magnifying it, so maybe a refinement would take this into account.
@alastormoody1282 · 4 months ago
Respect
@shashankshekharsingh9336 · 2 months ago
Thank you, sir, for this great content. 13/05/24
@KiyotakaAyanokoji1 · 10 months ago
What is the difference between: 1) updating the weights and bias on each row, for all epochs, and 2) updating per batch (all rows together), for all epochs? Can you tell scenarios where one is better than the other?
@shashankshekharsingh9336 · 2 months ago
+1
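For what it's worth, option 1 is stochastic gradient descent (one update per row: noisier, but more updates per epoch) and option 2 is batch gradient descent (one smooth update per epoch); SGD's noise can help escape poor regions, while batch GD gives stabler steps. A toy sketch with made-up data fitting y = 2x by squared error:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x          # ground truth: w = 2
lr = 0.01

def batch_gd(w, epochs):
    # One update per epoch, gradient averaged over ALL rows
    for _ in range(epochs):
        grad = np.mean(2 * (w * x - y) * x)
        w -= lr * grad
    return w

def sgd(w, epochs):
    # One update per ROW: noisier steps, more updates per epoch
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            w -= lr * 2 * (w * xi - yi) * xi
    return w

print(batch_gd(0.0, 200), sgd(0.0, 200))  # both approach w = 2
```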
@KaranGupta-kv6wq · 3 months ago
Can someone explain how the 0.3, 0.6, 0.1 at 52:37 arise? I want to know how to get these values and which formula is used.
@sandipansarkar9211 · 1 year ago
Finished watching.
@anishkhatiwada2502 · 6 months ago
Please put a timestamp for each topic in this video.
@user-xw1eu7jx7n · 4 months ago
Great
@kindaeasy9797 · 1 month ago
22:25 units²
@spyzvarun5478 · 1 year ago
Isn't log loss convex?
@sam-mv6vj · 2 years ago
Thank you sir for resuming.
@praveendeena1493 · 2 years ago
Hi sir, I want a complete end-to-end project video. Please share one.
@vishalpatil228 · 7 months ago
43:32 cost function = (1/n) ∑ (loss function)
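That's the relationship from the video: the loss is computed per example, and the cost averages it over the batch. In NumPy, for squared error (illustrative values):

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

per_example_loss = (y_true - y_pred) ** 2   # loss: one value per example
cost = np.mean(per_example_loss)            # cost: average over the batch

print(per_example_loss)  # [0.25 0.25 0.   1.  ]
print(cost)              # 0.375
```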
@ahmadtalhaansari4456 · 11 months ago
Revising my concepts. August 04, 2023 😅
@8791692532 · 2 years ago
Why did you stop posting videos in this playlist?
@campusx-official · 2 years ago
Creating the next one right now... Backpropagation
@8791692532 · 2 years ago
@@campusx-official Please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for deep learning is not available on YouTube! Your method of teaching is very simple and understandable. Thank you for providing credible content!
@vikeshdas3909 · 2 years ago
The blackboard was better.
@Pipython · 2 months ago
If you explain like this, I'll have to hit like, won't I?
@amitbaderia6385 · 3 months ago
Please take care of the background noise.
@Lucifer-wd7gh · 2 years ago
Time series in detail, please 😓
@geekyprogrammer4831 · 2 years ago
Let him finish this series first. Why force him like this???
@namanmodi7536 · 2 years ago
@@geekyprogrammer4831 True, brother.
@assetss · 1 year ago
Bird sounds are coming through in the background.
@vikeshdas3909 · 2 years ago
First viewer
@mrarul1 · 4 months ago
Please avoid speaking Hindi in the video.