Deep Learning - Activation Functions - ELU, PReLU, Softmax, Swish and Softplus

115,542 views

Krish Naik

Comments: 107
@sukumarroychowdhury4122 4 years ago
Dear Krish: we all love you, your energy, your enthusiasm. One point about the derivative of the ReLU activation function at zero. To express it properly, the derivative of ReLU as x tends to zero does not exist, because the derivative is a step function that is discontinuous at zero, and the limit approaching from the left is not equal to the limit approaching from the right. The ReLU function is continuous, not bounded, and not zero-centered. At x = 0, the left-hand derivative of ReLU is zero while the right-hand derivative is 1. Since the left-hand and right-hand derivatives are not equal at x = 0, the ReLU function is not differentiable at x = 0. The derivative of Leaky ReLU at zero is still discontinuous, hence it is still not differentiable at zero. Generally, in Geophysics, we use Leaky ReLU as follows: f(x) = 0.1*x if x < 0, else f(x) = x. Otherwise, I am your devotee and I want to start your (iNeuron) 3 courses as soon as I reach India: Master Machine Learning, Master Deep Learning, Master NLP. Congratulations on your new position as CTO at iNeuron. Cheers, Roy
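A minimal NumPy sketch (not from the video) of the point above: one-sided difference quotients around x = 0 give different values for ReLU and for the Leaky ReLU variant f(x) = 0.1*x for x < 0, so neither is differentiable there.

```python
import numpy as np

def relu(x):
    # max(0, x): slope 0 for x < 0, slope 1 for x > 0, undefined exactly at 0
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    # 0.1 * x for x < 0, x otherwise (the convention mentioned in the comment above)
    return np.where(x < 0, alpha * x, x)

h = 1e-6
# One-sided difference quotients around x = 0 show the mismatch
left  = (relu(0.0) - relu(-h)) / h   # ~0.0
right = (relu(h) - relu(0.0)) / h    # ~1.0
print(left, right)                   # left != right -> not differentiable at 0

left_l  = (leaky_relu(0.0) - leaky_relu(-h)) / h  # ~0.1
right_l = (leaky_relu(h) - leaky_relu(0.0)) / h   # ~1.0
print(left_l, right_l)               # still mismatched, so still not differentiable at 0
```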
@ratulghosh3849 4 years ago
Thank you sir, you really made the concepts related to the different activation functions so clear.
@satwindersingh1797 1 year ago
You are the ultra-best teacher, Sir. Ultimate and even better...
@Sudeepdas313 4 years ago
Thank you so much sir for taking out time and effort to put out such great content!
@rahuldey6369 4 years ago
33:58 Just a question: do we get mutually exclusive results when applying sigmoid, like with softmax? Because the 60%/40% scenario only arises when we have mutually exclusive results where the total probability sums to one, as you've mentioned.
@raghav_birla 2 years ago
Bro, he actually said it wrong in the video: the sigmoid outputs don't sum up to 1, and binary classification is the case where we have only one output node in the output layer, not the 2 that he described.
@raghav_birla 2 years ago
And that's the only reason we can allow sigmoid to be used in the output layer: it squashes each x to a value between 0 and 1 independently of the other x values in the output layer.
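A quick numeric check of the point in this thread (a sketch assuming NumPy; not from the video): sigmoid squashes each logit independently, so its outputs need not sum to 1, while softmax normalizes across all of them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(sigmoid(logits), sigmoid(logits).sum())  # each in (0, 1); sum is NOT 1
print(softmax(logits), softmax(logits).sum())  # probabilities; sum IS 1
```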
@nguyenngocly1484 4 years ago
f(x) = x is "connect", f(x) = 0 is "disconnect"; ReLU is then a switch. A ReLU neural net is a switched system of dot products. Fast transforms like the FFT and the fast Walsh-Hadamard transform are fixed systems of dot products that you are free to mix in.
@pritamH 3 years ago
Please make the theoretical videos on a whiteboard, because most people are familiar with that. Thank you Krish, big fan of yours 🖤🖤
@hariharans9408 2 years ago
Such an amazing session, sir. Please put a link to this notebook in the description; that will help us revise this material.
@arpanghosh3801 4 years ago
Can you share the GitHub link for the code?
@DeepakSaini-sg3pq 4 years ago
Thank you sir for making this video, it is really very helpful. And sir, can you please provide us this notebook?
@kr2ik 3 years ago
@Alden Lewis 100% working
@anandhiselvi3174 4 years ago
Please do a video on SVM kernels.
@rohanyewale3184 4 years ago
Really very Helpful !!! Can I get this notebook?
@the-ghost-in-the-machine1108 1 year ago
27:45 No, the Swish activation function is not zero-centered
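That matches a quick check (my own sketch with NumPy, not from the video): Swish(x) = x * sigmoid(x) dips only to about -0.28 on the negative side, so its outputs sit mostly above zero rather than being centered around it.

```python
import numpy as np

def swish(x, beta=1.0):
    return x / (1.0 + np.exp(-beta * x))  # x * sigmoid(beta * x)

x = np.linspace(-10, 10, 10001)
y = swish(x)
print(y.min())    # about -0.278, reached near x ≈ -1.28
print(y.mean())   # clearly positive over a symmetric input range -> not zero-centered
```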
@kavuluridattasriharsha 3 years ago
Hi Krish, can you please provide the activation functions notebook for our reference?
@imranriaz9752 4 years ago
I am very thankful to you dear sir for uploading such nice and well explained videos. Engr. Imran Riaz Lecturer Department of Electrical Engineering MUST Mirpur A.K. Pakistan
@chitramethwani4758 2 years ago
hi Krish.. can you pls provide the link for this notebook? Great content and nice explanation.. :)
@randhirpratapsingh9795 4 years ago
Hi Krish.... Well explained... Could you please help me with the Jupyter notebook for this activation functions script....
@hashimhafeez21 3 years ago
thank you brother for such a nice explanation.
@VisheshPanchal-c6j 15 days ago
Dear Krish: Can you create a playlist or course for PyTorch, TensorFlow and JAX combined? In it you could cover all the concepts, like in the D2L book.
@arijitmukherjee8293 2 years ago
Hi Krish, thanks for the lovely explanation. I have one question: why does zero-centered data converge faster? Can anyone explain this?
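One common explanation, shown in the tiny sketch below (an illustration of mine, assuming a single neuron): if the inputs to a layer are all positive (i.e. not zero-centered), the gradient of the loss with respect to every weight of that neuron shares the sign of the upstream gradient, so updates can only move in a restricted, zig-zag pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=5)      # all-positive inputs (not zero-centered)
upstream = -0.7                         # dL/dz flowing back into the neuron

grad_w = upstream * x                   # dL/dw_i = dL/dz * x_i
print(grad_w)                           # every component has the same sign as `upstream`
# With zero-centered inputs the per-weight gradient signs can differ,
# so the update direction is less constrained and convergence tends to be faster.
```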
@techsavy5669 3 years ago
How can I get the .ipynb file that you described here? Thank you.
@abhinaykumar8875 4 years ago
Very informative video, sir. Please can you share the link to the notebook?
@dr.ratnapatil9272 3 years ago
Wonderful session.
@anjubhagat8257 1 year ago
Sir, you have explained very well. Thank you. I have data on mango production from the year 1987-88 to 2021-22 and I want to apply an ANN. So please give me code for RStudio and also tell me which activation function, learning rate, and how many hidden layers are best for the problem.
@rafibasha1840 3 years ago
@15:00 When we multiply a negative value by 0.01, how does it become a positive value?
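It doesn't become positive. A small Leaky ReLU sketch (added for clarity, assuming the 0.01 slope from the video): multiplying a negative input by 0.01 keeps it negative, just with a much smaller magnitude, which is what prevents the neuron from dying.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # alpha * x for negative inputs, x otherwise
    return np.where(x < 0, alpha * x, x)

x = np.array([-5.0, -0.3, 0.0, 2.0])
print(leaky_relu(x))   # [-0.05, -0.003, 0.0, 2.0] -> negatives stay negative, just shrunk
```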
@shreesapkota 3 years ago
Good explanation. Thank you.
@dr.pushpalathamanagement1276 4 years ago
Very clearly explained.
@vipnirala 2 years ago
Great content. Thank you.
@shubhibansal7016 2 years ago
Hi, where can we find jupyter notebooks or the notes for the videos?
@Dragon48OO 3 years ago
Naik Sir, can you make a fully explained video on the YOLO algorithm with a working program?
@nagamanid8926 3 years ago
Thank you Sir. My question is: can we use both SVM and Softmax simultaneously in a CNN for classification?
@meedoremee9622 4 years ago
Can we download the machine learning material that you used in your teaching?
@rafibasha1840 2 years ago
Hi Krish,please share the notebook you are using
@saurabhtripathi62 4 years ago
thanks for updating.
@borgavejs4379 1 year ago
Thanks Krish, Can you please share this document ?
@koustavdutta5317 3 years ago
@Krish Naik, sir, please kindly share the notebook. It's very much required for self-revision and from a notes point of view.
@arjyabasu1311 4 years ago
Sir please upload the notebook !!
@AbhishekMishra-nl6by 3 years ago
In which repository on your GitHub can I find this file?
@balasaraswathiyugandher3176 3 years ago
16:35 ELU
@hareeshr3979 3 years ago
Saved my time thanks
@bcinerd 2 years ago
could you please upload this notebook in video description?
@devmaharaj1 3 years ago
Super , Liked a LOTTTTTTTTTT !!!!!!!!!!!
@MrKB_SSJ2 2 years ago
Thanks a lot 😊
@vimalgupta86 2 years ago
Hi Krish, can you please provide the link for this notebook?
@anandvamsi1993 4 years ago
Hi Krish, Why can't we use mod x (|x|) as an activation function? It will ensure neurons are not completely deactivated + take care of the vanishing gradient issue.
@raghav_birla 2 years ago
It's a good question, and in my opinion the reason is that the slope becomes 0 at the point of convergence, i.e. at 0, or anywhere on the x-axis if the activation function is shifted right or left with the help of a bias.
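For reference, a minimal sketch (assuming NumPy; not from the video) of |x| used as an activation: away from the origin its slope is -1 or +1, but it is non-differentiable at 0 and the sign of the gradient flips there.

```python
import numpy as np

def abs_act(x):
    return np.abs(x)

def abs_grad(x):
    # sign(x): -1 for x < 0, +1 for x > 0; undefined at exactly 0 (np.sign returns 0 there)
    return np.sign(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(abs_act(x))   # [2.0, 0.5, 0.0, 0.5, 2.0]
print(abs_grad(x))  # [-1, -1, 0, 1, 1] -> gradient never vanishes away from 0, but flips sign
```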
@classictremonti7997 3 years ago
Hello...When you say the model has "2" or "3" output layers...are you really saying that the "output layer has 2 or 3 neurons"? Just trying to keep the semantics clear in my mind. Thank you!
@vishaljhaveri7565 3 years ago
Sir, please share the Jupyter notebook with all of us. Thank you so much. I hope you reply to us with the notebook.
@saurabhtiwari4989 3 years ago
Did you get it?
@nazishiqbal1046 2 years ago
Great work! Where can I get this notebook?
@dineshjayakumar6349 4 years ago
Sir, can you please share this activation function Jupyter notebook?
@priyankachore1617 2 years ago
Thank you sir!!!
@shynie4986 2 years ago
I was wondering how to access the notes in your video.
@smarttaurian30 2 years ago
If ReLU has a zero-or-one output, then why don't we use the step function?
@chinmaybhat9636 2 years ago
@Krish Naik Sir Can you share the Jupyter Notebook for this in the github ???
@himanshugoel4573 1 year ago
Hello Krish, I am trying to follow your complete Deep Learning playlist, but can you share a link to the Jupyter notebook or the documentation that you used in the video to explain everything? It would be a great help to read the notes shown in the video. Thanks in advance!
@sachinborgave8094 1 year ago
Did you get this notebook?
@himanshugoel4573 1 year ago
@sachinborgave8094 No, I am still waiting for a response.
@louerleseigneur4532 3 years ago
Thanks krish
@heecmat4045 5 months ago
If sigmoid is only used in the output layer, then why did you use it in the hidden layers in the previous videos, sir?
@saruaralam2723 4 years ago
Nice video, Krish, btw where is the video of loss functions, couldn't find it
@NikhilaRaoLakku 1 year ago
Hello Sir, Can you please share the notes that you are explaining from?
@gurucharank5491 3 years ago
Very nice video sir kindly share the notebook
@guddubhagat7854 3 years ago
Link to this notebook file please ?
@muhammedcansoy1434 3 years ago
Thank you so much
@akashm1027 2 years ago
Is there a notebook link to refer to?
@PidathalaSoujanya 3 months ago
Hello sir, I joined as a member; how can I get the material?
@deepaklonare9497 1 year ago
Can you please share the GitHub link for this document?
@rimanshumangal3517 4 years ago
Hi Krish, could you please provide the GitHub link where this notebook and the others are uploaded?
@suneel8480 4 years ago
Can you name the tool with which you are writing on the screen?
@Morais115 4 years ago
epicpen
@МаксимВасильків-к3о 2 years ago
Thanks Krish! Can I have access to your notebook?
@vivekkumarshaw6495 2 years ago
Where can I get these slides?
@p_saini 6 months ago
Can someone provide me the link to this notebook?
@datasciencewallah9620 3 years ago
I don't understand why you are saying we are not trying to find the derivative at 0. Why can't we find the derivative at zero for the ELU, ReLU, and Leaky ReLU functions?
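In practice the question is sidestepped by convention (a hedged note, not from the video): frameworks typically just pick a value for the gradient at exactly x = 0, for example 0 for ReLU, since any value in [0, 1] is a valid subgradient there. A hand-written backward pass might implement it like this:

```python
import numpy as np

def relu_backward(x, upstream):
    # Convention: treat the local gradient at x == 0 as 0
    local_grad = (x > 0).astype(float)
    return upstream * local_grad

x = np.array([-1.0, 0.0, 3.0])
print(relu_backward(x, upstream=np.ones_like(x)))  # [0., 0., 1.]
```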
@KK-rh6cd 3 years ago
sir, please share this notebook
@mdmynuddin1888 2 years ago
Can I get the notebook?
@jokerislive1631 6 months ago
Bro, can you share this Jupyter notebook?
@lamis_18 3 years ago
Can we have this amazing .ipynb file?
@nazishiqbal1046 2 years ago
Did you get this file?
@AbcAbc-kx3xm 4 years ago
Great!
@smarttaurian30 2 years ago
You should have made a single video for each activation function, as it is difficult to follow from this video. You are going fast and it is a bit confusing to understand.
@RehmanKhan-gb6sk 1 year ago
Can you share this file, please?
@laxmankusuma8994 4 years ago
playback speed 1.5
@gamingultimate7489 1 year ago
Pooling layer in NLP?
@ramakrishnayellela7455 9 months ago
Sir, can you send the notebook PDF?
@shaikrahamatulla-jy2cg 1 year ago
Please provide the Jupyter notebook.
@ameenali1837 4 years ago
Doesn't Sir know that the derivative of 0 is 0 itself?
@KrishnaMishra-fl6pu 3 years ago
Arre, when a 0 comes through, then how will the weight update happen during the dot product... it will become a dead activation, right?
@ganjirao7094 2 years ago
Good
@pratibhasawant9349 4 years ago
Hi Krish Naik, your videos are as usual excellent. Could you help me understand what kind of activation function this is? ψ(x) = 1 if 0 ≤ x, and −1 if x < 0, where the operator is chosen as ψ_t(x) = −1 + r(x + t)/|t| − r(x − t)/|t|.
@nikextracool 4 years ago
Signum; but here the values will have zero impact on the weights, as the derivative is 0 in all regions except at 0 (where it is non-differentiable).
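If r here denotes the ReLU ramp r(x) = max(0, x) (an assumption on my part), then ψ_t is a piecewise-linear ramp from −1 to +1 over [−t, t], i.e. a hard-tanh-like smoothing of the signum function, which does have a nonzero gradient near 0. A short check:

```python
import numpy as np

def r(x):
    return np.maximum(0.0, x)   # assuming r is the ReLU ramp

def psi_t(x, t=0.5):
    # -1 + r(x + t)/|t| - r(x - t)/|t|: ramps linearly from -1 to +1 over [-t, t]
    return -1.0 + r(x + t) / abs(t) - r(x - t) / abs(t)

x = np.array([-2.0, -0.5, -0.25, 0.0, 0.25, 0.5, 2.0])
print(psi_t(x))   # [-1, -1, -0.5, 0, 0.5, 1, 1] -> a smoothed signum
```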
@ArunKumar-sg6jf 3 years ago
Bro, your Telegram group is unavailable to join.
@rageadigaming8162 2 years ago
Share the notes, sir >>>>>>
@iskrabesamrtna 3 years ago
You say every sentence two times; good lecture, but so much repetition is annoying.
@laykefindley6604 4 years ago
Holy moly, you jump around more than a locust after snorting the entire amount of cocaine Charlie Sheen snorted before each episode of Two and a Half Men.
@sachinborgave8094 1 year ago
Thanks Krish, Can you please share this document ?