Machine Learning / Deep Learning Tutorials for Programmers playlist: kzbin.info/aero/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Keras Machine Learning / Deep Learning Tutorial playlist: kzbin.info/aero/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@rey1242 · 6 years ago
I already asked this on another video, but just to cover as much ground as possible: could I normalize the weights to have mean 0 and variance 1 at weight initialization?
@coolbits2235 · 5 years ago
I am indebted to you for teaching me so much in one day. I would have kissed your hand in gratitude if you were in front of me. NNs are such a convoluted mess, but you have made things easier.
@Itsme-wt2gu · 1 year ago
Can we make a game where AIs have their own lives, and we live as their family and social system with our friends?
@kareemjeiroudi1964 · 6 years ago
I'm deeply impressed by the quality of your videos. Allow me to say that these are by far the most helpful video tutorials on neural networks. I seriously appreciate the time you spend researching such information and then putting it in such a concise, pleasant way that's also easy to comprehend. Trust me, without you I wouldn't have been able to understand what changes these parameters make in the network. That's why, thank you very, very much for both the time and the effort you put into this! And please, please keep making more tutorials. Also, I'd like to remark that the topics of these videos are sequential, so if you're following the playlist from the very beginning, you'll absolutely be able to make sense of everything noted in the videos, regardless of what your prior knowledge of neural networks is. Besides, the Keras playlist is complementary and adds a lot to the learning experience. All in all, this is, in one word, "professional work".
@deeplizard · 6 years ago
Wow kareem, thank you so much for leaving such a thoughtful comment! I'm very happy to hear the value you're getting from this series, and we're really glad to have you here!
@vdev6797 · 4 years ago
I don't allow you to say..!!
@WahranRai · 2 years ago
That was the purpose of these *deep learning* videos: to be *deeply* impressed by the *learning* you get.
@dr.hafizurrahman9374 · 5 years ago
God bless you, my dear teacher. I saw in every lesson that you put the whole ocean in a small jar. This is a unique quality, and very few teachers have it.
@deeplizard · 5 years ago
Thank you, Hafiz!
@PatriceFERLET · 4 years ago
I spent several days reading articles to understand what batch norm really does, and then I found your video. Perfectly explained, thanks a lot!
@tamoorkhan3262 · 4 years ago
One of the few YouTube series I have completed in my life. Instead of beating around the bush, you kept it to the point, with tons of info in just a few minutes. Hope to see more such series.
@woudjee2 · 1 year ago
Literally watched all 38 videos in one go. Thank you so much!
@deepcodes · 4 years ago
Finally completed the deep learning series. Thank you for such amazing videos and blogs, given away free on YouTube. It's great quality!!!
@tanfortyfive · 3 years ago
Top-notch. I finished it all. Kudos to the deeplizard team, love you all. Love you, Mandy; your sweet voice keeps us going.
@linknero1 · 4 years ago
Thanks, I'm writing my thesis thanks to your explanations!
@FernandoWittmann · 5 years ago
Great video! But from my understanding, only g and b are trainable. At 4:23, it is mentioned that the mean and std are trainable parameters as well ("these four parameters ... are all trainable").
@deeplizard · 5 years ago
Thanks Fernando, you’re right! The blog for this video has the correction :) deeplizard.com/learn/video/dXB-KQYkzNU
@davidireland724 · 4 years ago
Came looking for this comment! Thanks for stopping me from losing my mind trying to reconcile this explanation with the paper.
@nasiksami2351 · 3 years ago
THANK YOU SO MUCH FOR THIS AMAZING PLAYLIST! One of the best channels for learning deep learning. Absolutely loved your content. It was explained in the easiest possible way, with awesome graphical illustrations. You really worked hard on the editing! Thanks again!
@smithflores6968 · 2 years ago
I found pure gold! Great video! I understood perfectly!
@robinkerstens516 · 3 years ago
Just like all the other comments: I have just finished your video series, and I am impressed by the quality of the explanations. Many videos go into tiny details way too fast, before making sure that everyone at least understands the terms. Kudos! I hope you make many more.
@deeplizard · 3 years ago
Thank you Robin! Much more content available on deeplizard.com :)
@JoeSmith-kn5wo · 1 year ago
Great playlist!! I went through the entire deep learning playlist, and I have to say it's probably one of the best at explaining deep learning in a simple way. Thanks for sharing your knowledge!! 👍
@PritishMishra · 3 years ago
Hurray, completed the series (the only series on YouTube I have watched from the first video to the last without skipping a second). Amazing job, deeplizard team. Highly appreciated! Now I'm going to watch the Keras playlist, then the PyTorch series, and then reinforcement learning.
@deeplizard · 3 years ago
Congratulations! 🎉 Keep up the great work as you progress to the next courses!
@parthbhardwaj2262 · 4 years ago
I am really fascinated by the hard work that brings such quality to your videos! I would be really happy if you could make as much more material as possible. Channels like yours keep the spirits of students like us really high! Just one word to sum it up: OUTSTANDING!!
@stwk8 · 2 years ago
Thank you, deeplizard! The Machine Learning & Deep Learning Fundamentals playlist made understanding the concepts of ML super easy. Thank you so much :D
@yelchinyang148 · 5 years ago
The online tutorial is very useful and helped me understand the batch normalization concept in detail; it had confused me for a long time. Thanks very much for sharing.
@deeplizard · 5 years ago
You are welcome!
@OKJazzBro · 1 year ago
Batch norm, according to the paper, is actually applied before the activation function, not after. For this reason, they even recommend dropping the bias parameter of the preceding layer, because batch norm comes with its own learnable bias term. The output of batch norm then goes to the activation function.
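The ordering this comment describes (linear layer without a bias, then batch norm, then the activation) can be sketched in a few lines of NumPy. This is a minimal illustration of that ordering, not code from the video, and all the layer sizes are made up:

```python
import numpy as np

def batch_norm(z, gamma, beta, eps=1e-5):
    # Normalize pre-activations over the batch axis, then scale and shift.
    mean = z.mean(axis=0)
    var = z.var(axis=0)
    z_hat = (z - mean) / np.sqrt(var + eps)
    return gamma * z_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))   # mini-batch of 32 samples, 4 features
w = rng.normal(size=(4, 3))    # dense weights; no bias, since beta plays that role
z = x @ w                      # pre-activations
a = np.maximum(batch_norm(z, gamma=1.0, beta=0.0), 0.0)  # ReLU comes last
```

With gamma = 1 and beta = 0, each column of the normalized pre-activations has mean 0 before the ReLU is applied.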
@aniketbhatia1163 · 5 years ago
These tutorial videos are one of the best ones I could find. The explanations are extremely lucid and so easy to understand. I really hope you expand your pool of videos to include other topics such as RNNs. You could also dedicate some videos to hyper-plane classifiers, SVMs, RL, even some optimization methods. All in all the set of videos is just amazing!
@ranitbarman6471 · 1 year ago
Cleared up the concept. Thanks!
@lingjiefeng3196 · 5 years ago
I love your tutorial. The illustration is just so concise and easy to understand. Thank you for all your effort in making these videos!
@baqirhusain5652 · 2 years ago
Beautiful!! Super clear!
@hamzawi2752 · 4 years ago
Excellent! I hope you continue this series. Your explanation is so clear.
@al-farabinagashbayev5403 · 4 years ago
I think every machine learning specialist, even an experienced one, will find something new in your course. :) Great course, thanks a lot!
@senduranravikumar3554 · 3 years ago
Thank you so much, Mandy... I have gone through all the videos... 😍😍😍
@silentai3826 · 3 years ago
Wow, this is awesome. Kudos to you! Perfect explanation. I was trying to understand batch norm from various websites and articles; this was much better than any of them. Thanks!
@qusayhamad7243 · 3 years ago
Thank you! Really, you are the best teacher in the world. I appreciate your efforts.
@deeplizard · 3 years ago
Happy to hear the value you're getting from the content, qusay!
@qusayhamad7243 · 3 years ago
@@deeplizard I am so happy about your reply to my comment ^_^
@HasanKarakus · 1 year ago
The best explanation I've ever watched.
@sciences_rainbow6292 · 3 years ago
I completed this video series; can't wait to watch more of your playlists!
@deeplizard · 3 years ago
Awesome job! See all of our deep learning content on deeplizard.com :)
@karelhof319 · 5 years ago
Finding this channel has been a great help for my studies!
@khalilturki8187 · 3 years ago
Nice short video and a great way of explaining! I will follow this channel and watch more videos! Keep up the great work.
@rowidaalharbi6861 · 3 years ago
Thank you so much for your explanations! I'm writing my PhD thesis, and your tutorial helped me a lot :)
@punitdesai4779 · 3 years ago
Very well explained!
@ahmadnurokhim4168 · 2 years ago
Great quality content, subscribed 🔥
@pallavbakshi612 · 6 years ago
Wow, thanks for putting this up. You deserve every like and every subscribe. Great job.
@jonathanmeyer4842 · 6 years ago
Nice tutorial; clear, professional voice and animations! Looking forward to more deep learning videos :) (I'm aware of your Keras tutorial series, and I'm going to watch it right now!)
@deeplizard · 6 years ago
Thank you, Jonathan! I'm glad you're liking the videos so far!
@yashgupta417 · 4 years ago
Very well explained
@gurpriyakaur2109 · 3 years ago
Amazing explanation!
@ericdu6576 · 1 year ago
AMAZING SERIES
@marioandresheviacavieres1923 · 2 years ago
I'm deeply thankful 🤓
@jerseymec · 5 years ago
Thanks for the amazing series! I really enjoyed your videos! Keep up the good work! Hope to see more complex networks made simple by you!
@ejkitchen · 3 years ago
Great content. Like many others have said, one of the best series on ML out there.
@robertc6343 · 3 years ago
Ohhh, what a wonderful narrative. I really like the way you explained it. Thank you, and I've just subscribed to your channel 👍🏻
@alphadiallo9324 · 3 years ago
That was very helpful, thanks
@julianarotsen6521 · 5 years ago
Thanks for the amazing explanation!! By far the best tutorial video I've seen!
@aravindvenkateswaran5294 · 3 years ago
I have successfully binged (across 2 weeks) this playlist and found them really helpful! Thank you for all you do and keep up the good work. Hope to watch more vids getting added here or elsewhere on the channel. Lots of love:)
@deeplizard · 3 years ago
Thank you, and great work! Check out the homepage of deeplizard.com to see all other DL courses and the order in which to take them after this one!
@richarda1630 · 3 years ago
Just wanted to say kudos and thanks so much for your awesome series :D I have learned so much! Now I'm off to your Keras w/TF series :)
@deeplizard · 3 years ago
Great job getting through this course!
@richarda1630 · 3 years ago
@@deeplizard Thanks! moving to your Deep Learning and Keras series next :)
@shaelanderchauhan1963 · 2 years ago
Just wow! Amazing content. Please make a series explaining research papers.
@simonbernard4216 · 5 years ago
Just wow..! Please keep making these videos; it's by far the best explanation I've found here.
@Yadunandankini · 6 years ago
Great video, precise and concise. Thanks!
@arohawrami8132 · 9 months ago
Thanks a lot.
@jacky2476 · 4 days ago
Your video is very clear! Thx
@from-chimp-to-champ · 2 years ago
As always, very well done and clear, thank you!!
@gaurav_gandhi · 5 years ago
Clearly explained, good animation, covered most areas. Thanks
@orcuncetintas2258 · 4 years ago
Great video, very clear and understandable. However, I want to point out some mistakes. In batch norm, only b and g are trainable, not m and s. Moreover, batch norm is applied after fully connected/convolutional layers but before activation functions. Therefore, it doesn't normalize the output of the activation function; it normalizes the input to the activation function.
@anshumaandash137 · 4 years ago
Nice explanation. However, there is a small mistake you can correct: we batch normalize the outputs of a layer (conv or linear) before squashing them through the activation function. That way, the activations never overshoot or undershoot, leading to a stable output and easier convergence. This also allows us to use bigger learning rates. I hope that helps.
@karatugba · 6 months ago
I'm sorry that this is the last video in the playlist. I want more 😢
@fanusgebrehiwet6286 · 4 years ago
Gentle and to the point. Thank you.
@travel_with_rahullanannd · 4 years ago
I really enjoyed learning with your videos. Can you please create videos on RNNs?!
@entertainment8067 · 2 years ago
I watched this complete deep learning playlist. It was amazing, but my suggestion is: please add some videos about RNNs, and also make separate playlists about supervised learning, unsupervised learning, imitation learning, and deep reinforcement learning. Thank you, ma'am.
@deeplizard · 2 years ago
You're welcome, I'm glad you enjoyed it! We have some of the topics you've suggested already available in other courses. Check them out here: deeplizard.com/courses
@amirraad4437 · 1 year ago
Thank you so much for your great work ❤
@UtaShirokage · 4 years ago
Amazing and concise video, thank you!
@roxanamoghbel9147 · 3 years ago
So helpful!
@prasaddalvi3017 · 4 years ago
This is a really good set of videos on neural networks. I liked it a lot and enjoyed watching it. Great work. Just one thing I would like to suggest: you have explained backpropagation really well, better than most explanations I have seen, but it would be really helpful for understanding it even better if you could add a small numerical example of the backpropagation calculation.
@pranav-24-f9t · 4 years ago
Awesome... I am going to watch the playlist...
@nikhillahoti7628 · 5 years ago
This is a gem! Thank you very much!!!
@diogo9610 · 5 years ago
Wonderful work. Thank you for setting up all this content.
@mukulverma8404 · 4 years ago
Very good explanation. Watched this whole playlist. Thanks for making understanding DL so easy and fun. Moreover, your funny stuff made me laugh.
@yuriihalychanskyi8764 · 4 years ago
Thanks for the video. So do we have to normalize the data before feeding it to the model, or does batch normalization do it by itself inside the model?
@ogsconnect1312 · 5 years ago
Thanks
@mkulkhanna · 5 years ago
Very nice tutorial, thank you
@akhtarzaman7864 · 6 years ago
Thank you for the amazing explanation.
@mustafacannacak9279 · 3 years ago
Love your channel
@FuryOnStage · 6 years ago
This was an amazing explanation. Thank you.
@deeplizard · 6 years ago
Thanks, Nika!
@rapunziao2929 · 6 years ago
I started to fall in love with the voice
@srighakollapuajith4015 · 3 years ago
Very nice video
@lucaslucassino · 3 years ago
Hey, I have a question! It is sometimes preferred to place a batch norm layer after a convolutional layer and after the activation layer. Does anyone know why?
@bl7395 · 4 years ago
@deeplizard please do a series on transfer learning, or more in-depth teaching on NLP/CV :)
@Hi-zlv · 3 years ago
I finished all 38 videos. Great, great, great explanation! Can you also do some sample projects?
@deeplizard · 3 years ago
Great job finishing the course, Zehra! Many projects are included in our other various deep learning courses. Check out all the courses listed on the home page of deeplizard.com. We give the recommended order for which to take the courses there as well.
@Hi-zlv · 3 years ago
@@deeplizard Sure! I will check the website. I also recommend it to my friends. Thank you, Mandy!
@shamikbanerjee4713 · 4 years ago
Thank you Ma'am. :)
@oriabnu1 · 4 years ago
I have seen all your videos. I am a Ph.D. student and have truly learned many things from you. If you have time, please teach how a variational autoencoder can be used with a CNN.
@peteabc1 · 6 years ago
Ahh, explained in human language.. thank you :). What I don't understand is where to insert those layers? Intuition tells me just everywhere, right? Btw, the scale problem and such equations are called stiff equations (the NN is an equation solved using numerical methods). But another problem is denormal (subnormal) numbers close to 0, which can cause 200-300× slowdowns even with modern CPUs.
@deeplizard · 6 years ago
Thanks, peteabc1! Yeah, you would want to insert batch norm after your "typical" layers, like dense, conv, etc. that are followed by an activation. From my experience, determining when/where to add batch norm involves testing and analyzing my training results after adding or removing more batch norm layers. But yes, you certainly can add a batch norm layer after _all_ of these typical layers and observe how your model performs. Also, thanks for the stiff equations info!
@anirudhgangadhar6158 · 1 year ago
Could you intuitively explain how having scale differences between features could lead to the "exploding gradient" problem? That wasn't clear to me.
@sanaullahaq2422 · 3 years ago
{ "question": "What kind of parameters are g and b?", "choices": [ "Learnable Parameters", "Hyperparameters", "g learnable and b hyperparameter", "g hyperparameter and b learnable " ], "answer": "Learnable Parameters", "creator": "Sanaulla Haq", "creationDate": "2021-07-19T09:27:20.333Z" }
@florianhofstetter6859 · 3 years ago
Why do we standardize the data per batch and not for every single record?
@ArgumentumAdHominem · 8 months ago
Could you please clarify why one would normalize a ReLU layer, as shown in the example? In the discussion, you suggest that it is to prevent cascading effects due to overly large weights. However, later you state that BN normalizes the output of the unit. How are the two related? Further, why normalize a ReLU unit that is already bound to [0, 1]?
@Iamine1981 · 3 years ago
Parameters m and s are not learned in the classical sense via SGD, but rather are estimated from the mini-batch sample population. The other two parameters, scale and offset, are learnable.
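The split this comment describes can be sketched as a tiny class; a minimal, hypothetical implementation (not from the video), where gamma/beta would be the parameters an optimizer updates, while the mean/variance are statistics estimated from each mini-batch and tracked as moving averages for use at inference time:

```python
import numpy as np

class BatchNorm1D:
    """Toy batch norm: gamma/beta are trainable; mean/var are estimated."""
    def __init__(self, n_features, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(n_features)        # learnable scale
        self.beta = np.zeros(n_features)        # learnable offset
        self.running_mean = np.zeros(n_features)
        self.running_var = np.ones(n_features)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, training=True):
        if training:
            # Statistics come from the current mini-batch, not from SGD.
            mean, var = x.mean(axis=0), x.var(axis=0)
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mean
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            # At inference, reuse the moving-average estimates instead.
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

bn = BatchNorm1D(3)
x_batch = np.random.default_rng(1).normal(loc=10.0, scale=4.0, size=(64, 3))
y = bn(x_batch, training=True)   # per-feature mean ~0, std ~1
```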
@deeplizard · 3 years ago
You're right, this has been corrected in the corresponding blog for this episode: deeplizard.com/learn/video/dXB-KQYkzNU
@RandomShowerThoughts · 6 years ago
This video was amazing
@ismailelabbassi7150 · 10 months ago
Great course, but I didn't understand when you said that axis=1 means the features should be normalized, since batch norm normalizes the output of the activation function in a layer!? Help please.
@donfeto7636 · 2 years ago
Thanks for your video. I was wondering: don't we normalize Z, the pre-activation (Z = W·x + b), rather than A, the output of the activation function (i.e., the output of the perceptron)?
@kartikpodugu · 1 year ago
In the example you mentioned about miles driven in 5 years, why did you say that the data isn't necessarily on the same scale? I didn't get that. Can you elaborate? 1:48
@saluk7419 · 4 years ago
I did not understand what the axis argument is doing at all. How is it related to the features? Which axis is it referring to? Can someone explain in a little more detail?
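For what it's worth, the axis question above can be illustrated with a small NumPy sketch (my own example, not from the video). Assuming Keras-style semantics, the axis argument names which axis holds the features; the statistics are then computed across the remaining (batch) axes, so each feature gets its own mean and variance:

```python
import numpy as np

x = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])    # 3 samples (rows) x 2 features (columns)

# Per-feature normalization: one mean/std per column, computed over the batch.
mean = x.mean(axis=0)           # -> [2.0, 200.0], one mean per feature
std = x.std(axis=0)             # one std per feature
x_hat = (x - mean) / std        # every feature column now has mean 0, std 1
```

Note the two axes play different roles: the features live on the last axis, but the reduction happens over axis 0 (the batch), which is why each column ends up standardized independently of the other.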
@user-qt3jo9tw6m · 5 years ago
Good stuff, thank you
@ismailelabbassi7150 · 10 months ago
Another question, ma'am: you said that this calculation with the two arbitrary parameters g and b sets a new std and mean for the data. Can you explain a little what those two parameters are and how they set a new mean and std for our data? Thank you in advance.
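One way to see what g and b do, as a toy sketch (my own numbers, not from the video): after normalization the values have mean 0 and std 1, so multiplying by g and adding b gives them std g and mean b, and the network is free to learn whatever values of g and b work best:

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.normal(loc=5.0, scale=3.0, size=10_000)   # some arbitrary activations

z_hat = (z - z.mean()) / z.std()   # normalized: mean 0, std 1
g, b = 2.0, 0.5                    # hypothetical learned scale and offset
out = g * z_hat + b                # resulting distribution: mean b, std g
```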
@736939 · 3 years ago
4:26 I think only g and b are trainable, but not the standard deviation or the mean, which are calculated.
@deeplizard · 3 years ago
Thanks, you're right. It's corrected in the corresponding blog: deeplizard.com/learn/video/dXB-KQYkzNU
@mhdalkadri9228 · 6 years ago
Brilliant!!
@RH-mk3rp · 1 year ago
The batch norm paper has the normalization step right before the nonlinearity. Why is it done after the ReLU here?
@roger_is_red · 4 years ago
So what is axis=1? I couldn't understand what you said. Thanks
@heejuneAhn · 6 years ago
I have a question about the slide around 4:00. Why do we need to multiply by and add some parameter values after normalizing? That step transforms the value range again. In terms of the original paper, they call it an identity transform. In fact, I wonder why we would use an 'identity transform' that essentially makes no change to the input.