I swear this playlist is one of the best resources I have ever seen on these topics. Great explanation. Please continue to upload more of this great content. Many thanks for your time and outstanding effort.
@AssemblyAI · 3 years ago
That's great! Glad to hear you liked it!
@harshitvijay197 · 8 months ago
Damn, this whole series is like a gold mine... I was skeptical that such a well-known topic could be covered in so little time, and assumed the videos might not be good, but I'm happy to be proven wrong. THESE ARE GOLD... thank you @AssemblyAI, and thank you very much, Ma'am, for helping.
@shanelenagh · 2 months ago
Very comprehensive and efficient survey of regularization. This brought together items I have seen in NVidia training and elsewhere in a very organized fashion. I don't comment on YT videos often, but this one was worth it. Well done.
@lex8799 · 16 days ago
Fantastic and very clear explanation, thank you!
@TimUnknown-h5q · 2 years ago
Just wanted to say that these videos are really well done and the speaker really knows what she's talking about. I am doing my PhD right now in mechanical engineering, using deep learning to model a production process (steel), and your videos really helped me get a much better grip on what to tune and do with my model. Highly appreciated, thx a lot :)!
@AssemblyAI · 2 years ago
You are very welcome Tim and thank you for the support! - Mısra
@AlexKashie · 1 year ago
Wow, so useful, thank you for the amazing content. You can feel the confidence of the lecturer, and her explanations are very clear. Watching the whole playlist.
@mmacaulay · 2 years ago
Another absolutely fantastic, accessible teaching resource on a complex machine learning concept. I don't think there are any resources out there that can match the quality, accessibility and clarity this resource provides.
@MrPioneer7 · 7 months ago
Overfitting happens frequently in my programs. I tried reducing the number of input parameters, but I know that's not a good solution. I was already familiar with L1 and L2 regularisation; this tutorial gave me a better understanding of them and of the other common methods. I tried to decrease both the train and test errors with regularisation but was not successful. I hope to do it soon 🙂 Thanks for your illustrative explanations.
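For anyone in a similar spot, here is a minimal sketch of wiring L1/L2 penalties into a Keras model; the layer sizes and penalty factors are made up for illustration, and in practice you would tune the factor until train and test error come closer together.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Illustrative model: the regularization factors (0.01) are hypothetical
# starting points, not recommendations from the video.
model = tf.keras.Sequential([
    # L2 adds 0.01 * sum(w**2) of this layer's kernel to the training loss.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),
    # L1 adds 0.01 * sum(|w|), which pushes some weights to exactly zero.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(0.01)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```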
@Eyetrauma · 1 year ago
Thank you for this explanation. Like many, I'd imagine, I've bumped into these concepts predominantly via my use of SD. It's nice having an overview of what's being conveyed so I can understand what's happening without getting too bogged down in the minutiae.
@radethebookreader5312 · 1 year ago
I'm a research scholar from India; your videos are just awesome 👍
@syedmohammed786y · 4 months ago
Your explanation is just amazing! ...
@rohitkulkarni5590 · 1 year ago
This is an amazing series with well-explained concepts. A lot of the other videos dwell on mathematical formulas without explaining the concepts.
@bellion166 · 1 year ago
Thank you! This gave a good intro before I started reading Ian Goodfellow.
@AlexXPandian · 9 months ago
Really to the point and excellently delivered.
@ferdaozdemir · 1 year ago
I liked this video very much. You explained all these techniques very well, in my opinion. Thank you.
@jacobyoung6876 · 2 years ago
Great job. The explanation is very clear and easy to understand.
@AssemblyAI · 2 years ago
Thank you Jacob!
@user-yn7jg5to3u · 1 month ago
Thanks a lot for the great explanation!
@Ali-Aljufairi · 8 months ago
Your videos are so good, keep up the good work. I have read and watched a lot of content explaining this, and yours is the best.
@arminkashani5695 · 2 years ago
Brief yet very clear and informative. Thank you.
@AssemblyAI · 2 years ago
You are very welcome Armin! - Mısra
@HootyHoo-o1s · 28 days ago
Wow. Thank you so much for this!
@jabessomane7282 · 1 year ago
This is very, very helpful. Great explanation. Thank you.
@FirstLast-tx7cw · 1 year ago
I had tears in my eyes. Absolute gem of a video.
@malithdesilva7799 · 2 years ago
Great playlist; the content is on point for each topic in minimal time. Please keep up the outstanding work 🤘 and thanks for the content.
@AssemblyAI · 2 years ago
You are very welcome! - Mısra
@mrbroos2843 · 2 years ago
The best video, with a clear explanation.
@AssemblyAI · 2 years ago
Glad to hear it!
@igeolumuyiwab.7980 · 5 months ago
I love your teaching. Keep it up!
@adarsh7604 · 2 years ago
Brief, concise, and precise.
@AssemblyAI · 2 years ago
Thank you!
@aryanmalewar7789 · 2 years ago
Very clear and precise explanation. Thanks :)
@AssemblyAI · 2 years ago
You're welcome :)
@fatmacoskun6406 · 1 month ago
Great content! Subscribed.
@PurtiRS · 1 year ago
So Good, SO Good, Oh My God! Thank you soooo much!
@deependu__ · 1 year ago
Thanks for the clear explanation.
@annajohn7890 · 1 year ago
Absolutely clear explanation
@AssemblyAI · 1 year ago
Glad it was helpful!
@narendrapratapsinghparmar91 · 1 year ago
Thanks for this informative lecture.
@skhapijulhossen6499 · 1 year ago
This playlist is a treasure for me.
@AssemblyAI · 1 year ago
Awesome!
@akshaygs4048 · 11 months ago
Amazing video. Very good content.
@qandos-nour · 1 year ago
Wow, very clear. Thanks, you helped me.
@rexsham · 1 month ago
It proves Beauty and Wisdom can coexist.
@thecarradioandarongtong4631 · 8 months ago
Beauty with Brains 💐
@swapnildoke8777 · 10 months ago
So nice and simple.
@marijatosic217 · 2 years ago
Amazing! 😍😍
@EmanAlsayoud · 4 months ago
Wow, thank you!!
@Danielamir1998 · 2 years ago
Brilliant video
@AssemblyAI · 2 years ago
Thank you!
@anb9999 · 3 months ago
I don't understand why we multiply the inputs by the keep probability. Aren't we using the coefficients from training? Wouldn't they be dropped at test time as well?
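In case it helps other readers, a small NumPy sketch of the two common conventions (the variable names are mine): classic dropout keeps the trained weights but scales activations by the keep probability at test time, so their expected magnitude matches what the next layer saw during training; inverted dropout does the scaling during training instead, leaving the test-time pass untouched. Nothing is dropped at test time in either variant.

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.8
a = rng.normal(size=(4, 10))       # some layer's activations (illustrative)

# Classic dropout: drop units during training only, then rescale at test
# time so E[activation] matches what downstream layers were trained on.
mask = rng.random(a.shape) < keep_prob
a_train = a * mask                 # training: ~20% of units zeroed
a_test = a * keep_prob             # test: nothing dropped, just rescaled

# Inverted dropout (the common modern variant): rescale during training,
# so the test-time forward pass needs no special handling at all.
a_train_inv = a * mask / keep_prob
a_test_inv = a
```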
@its_me_hb · 1 year ago
I really loved it
@AssemblyAI · 1 year ago
Awesome :)
@ryanoconnor160 · 3 years ago
Great content!
@AssemblyAI · 3 years ago
Thanks Ryan!
@mazenhider8311 · 1 month ago
You're fantastic, thanks!
@deltamico · 1 year ago
With L2 regularisation, do we add the squared weights of the whole network to the final loss function, or, while doing backprop, only a node's input weights to that node's loss?
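For what it's worth, the usual convention is the former: one penalty summed over all weights in the network, added once to the total loss; backprop then delivers each weight its own gradient contribution automatically. A minimal PyTorch-style sketch under that assumption (the model, data, and factor are placeholders):

```python
import torch

def l2_penalty(model: torch.nn.Module) -> torch.Tensor:
    # One sum of squared parameters over the whole network (in practice
    # biases are often excluded); autograd gives each weight w the
    # gradient 2 * lam * w from this single scalar term.
    return sum((p ** 2).sum() for p in model.parameters())

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 1))
x, y = torch.randn(16, 4), torch.randn(16, 1)
lam = 1e-3  # illustrative regularization factor
loss = torch.nn.functional.mse_loss(model(x), y) + lam * l2_penalty(model)
loss.backward()
```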
@sourabhbhattacharyaa4137 · 2 years ago
Awesome stuff from your side... thank you very much... can you give the link to the playlist containing these lectures?
@AssemblyAI · 2 years ago
Here it is: kzbin.info/www/bejne/mpTGlZSaoZ5jrNU
@OrcaRiderTV · 2 years ago
What if the input features have multiple dimensions, i.e. age and height? Can we still use batch norm as the first layer to normalize the input data?
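In case it helps: batch norm normalizes each feature dimension independently, so multi-dimensional inputs are fine. A minimal Keras sketch under that assumption (shapes and sizes are made up):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two features per sample, e.g. [age, height]; BatchNormalization keeps
# separate statistics per feature column, normalizing each on its own.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    layers.BatchNormalization(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
```

That said, for fixed input scaling many people prefer a preprocessing Normalization layer fitted once on the training set, since batch norm's statistics shift with every mini-batch.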
@kk008 · 1 year ago
Can I use this L1 regularization to overcome the maximized mutual information problem?
@kuretaxyz · 3 years ago
Great video! Also, you sound like you're from Turkey. Am I correct?
@AssemblyAI · 3 years ago
Yes, that is correct :)
@mallikarjunpidaparthi · 2 years ago
Thanks.
@AssemblyAI · 2 years ago
You're welcome!
@dwfischer · 1 year ago
Great video. I’m currently making flash cards and this was a great resource.
@AssemblyAI · 1 year ago
Glad it was helpful!
@techyink5344 · 9 months ago
ty
@brahimmatougui1195 · 2 years ago
You said that L1 regularisation encourages weights to be 0.0, and this could lead to some of the outputs not being considered. Is this the same behaviour as dropout?
@AssemblyAI · 2 years ago
It is a similar behaviour to dropout, yes! Both L1 and dropout can make a network sparse (not all neurons are connected to all other neurons). The way they achieve it is still different, though.
@brahimmatougui1195 · 2 years ago
@@AssemblyAI thank you so much for your prompt answer
@AssemblyAI · 2 years ago
@@brahimmatougui1195 You are very welcome! -Mısra
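To make the difference in the thread above concrete, a small scikit-learn sketch on contrived data: L1 zeroes out unhelpful coefficients permanently, whereas dropout only zeroes random units per training step and drops nothing at test time.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # only feature 0 matters

print(Lasso(alpha=0.1).fit(X, y).coef_)  # L1: irrelevant coefficients land at exactly 0
print(Ridge(alpha=0.1).fit(X, y).coef_)  # L2: they only shrink toward 0, never reaching it
```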
@emadbagheri2755 · 1 year ago
great
@akashpatel1575 · 1 year ago
You might have included batch size too.
@phone2134 · 1 year ago
Forget about regularisation, I just came here to look at the beautiful lady ❤❤❤
@abd_sam · 2 years ago
I didn't understand why we need the 'keep probability'.
@mugomuiruri2313 · 10 days ago
@DK-ox7ze · 11 months ago
I don't understand the purpose of regularization. The sole purpose of weights is to quantify the importance of a feature (strength of connection), so it's very much possible that one weight is much larger than the others because its feature really is more important to the desired outcome. But if you regularize, that weight loses its value and might therefore produce incorrect predictions.
@anozatix1022 · 8 months ago
That's the point of regularization: it is used when your model is overfitting, as stated in the video. If the model performs decently without regularizers, then you probably shouldn't use them, as that could result in underfitting.
@suryanshdey4773 · 1 year ago
I don't understand why we don't simply reduce the number of layers and neurons in a neural network to get rid of overfitting.
@michacz3230 · 1 year ago
That's just one of the ways. You can also try to reduce the size of the model, as you said, or try data augmentation (a small sketch below).
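A minimal sketch of the data-augmentation option mentioned above, using Keras preprocessing layers (the specific transforms and amounts are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Each epoch sees randomly perturbed copies of the training images, which
# regularizes the model without shrinking its capacity.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),   # rotate by up to ±10% of a full turn
    layers.RandomZoom(0.1),
])
# Typical use: place `augment` in front of the usual conv layers.
```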
@donmiguel4848 · 9 months ago
Variance is a mathematical term used in probability theory and stochastics to measure how data spreads around a mean. You are confusing your audience by using this term differently when you claim that variance is the rate at which predictions change as the training data changes. Don't do that!
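For reference, both usages reduce to the same formula applied to different random quantities; in the bias-variance sense the randomness is over draws of the training set, not over data around its mean (my notation, with f̂_D denoting a model trained on dataset D):

```latex
% Probability-theory variance of a random variable X:
\operatorname{Var}(X) = \mathbb{E}\left[(X - \mathbb{E}[X])^2\right]

% Bias-variance sense: the same formula applied to a trained predictor,
% where the random quantity is the training set D:
\operatorname{Var}\big(\hat{f}(x)\big)
  = \mathbb{E}_D\left[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2\right]
```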