Dear Krish: we all love you, your energy, your enthusiasm. One point about the derivative of the ReLU activation function at zero. To express it properly, the derivative of ReLU as x tends to zero does not exist, because the derivative is a step function that is discontinuous at zero: the limit approaching from the left is not equal to the limit approaching from the right. The ReLU function itself is continuous, not bounded, and not zero-centered. At x = 0, the left-hand derivative of ReLU is zero while the right-hand derivative is 1. Since the left-hand and right-hand derivatives are not equal at x = 0, the ReLU function is not differentiable at x = 0. The derivative of Leaky ReLU at zero is likewise discontinuous, so it is still not differentiable at zero. Generally, in geophysics, we use Leaky ReLU as follows: f(x) = 0.1*x if x < 0, else f(x) = x. Otherwise, I am your Bhakt and I want to start your three iNeuron courses as soon as I reach India: Master Machine Learning, Masters Deep Learning, Masters NLP. Congratulations on your new position as CTO at iNeuron. Cheers, Roy
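A minimal numerical sketch of the point about the one-sided derivatives (plain NumPy; the 0.1 slope matches the Leaky ReLU above, and the step size h is arbitrary):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)

h = 1e-6
# one-sided difference quotients around x = 0
print((relu(0.0) - relu(-h)) / h, (relu(h) - relu(0.0)) / h)                           # 0.0 and 1.0
print((leaky_relu(0.0) - leaky_relu(-h)) / h, (leaky_relu(h) - leaky_relu(0.0)) / h)   # 0.1 and 1.0

Since the left and right quotients disagree in both cases, neither function is differentiable at exactly x = 0; in practice, libraries typically just assign a conventional subgradient (such as 0) at that single point.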
@ratulghosh38494 жыл бұрын
Thank you, sir; you really made the concepts related to the different activation functions so clear.
@satwindersingh1797 Жыл бұрын
You are the ultra-best teacher, Sir... ultimate and even better...
@Sudeepdas3134 жыл бұрын
Thank you so much sir for taking out time and effort to put out such great content!
@rahuldey63694 жыл бұрын
33:58 Just a question: do we have mutually exclusive results when we apply sigmoid, like with softmax? Because the 60%/40% scenario arises when we have mutually exclusive results where the total probability sums to one, as you've mentioned.
@raghav_birla2 жыл бұрын
Bro, he actually said it wrong in the video: the sigmoid function's outputs don't sum up to 1, and binary classification is the case in which we have only one output node in the output layer, not the 2 that he described.
@raghav_birla2 жыл бұрын
And that's the only reason why we can allow sigmoid to be used in the output layer: it calculates, or say squashes, each x to a value between 0 and 1 independently of the other x values that we have in the output layer.
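A quick sketch of that difference with made-up logits (plain NumPy; the numbers are only illustrative):

import numpy as np

logits = np.array([2.0, -1.0, 0.5])                 # hypothetical raw outputs of a 3-node layer

sigmoid = 1.0 / (1.0 + np.exp(-logits))             # each value squashed independently into (0, 1)
softmax = np.exp(logits) / np.exp(logits).sum()     # values normalised jointly into a distribution

print(sigmoid.round(3), sigmoid.sum())              # [0.881 0.269 0.622], sum ≈ 1.77 (not 1)
print(softmax.round(3), softmax.sum())              # [0.786 0.039 0.175], sum = 1.0

That is why sigmoid fits independent (including single-node binary) outputs, while softmax fits mutually exclusive classes.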
@nguyenngocly14844 жыл бұрын
f(x) = x is "connect"; f(x) = 0 is "disconnect". ReLU is then a switch. A ReLU neural net is a switched system of dot products. Fast transforms like the FFT and the fast Walsh-Hadamard transform are fixed systems of dot products that you are free to mix in.
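A small sketch of the switch view (my own illustration, plain NumPy):

import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

relu_direct = np.maximum(0.0, x)
switch = (x > 0).astype(x.dtype)      # 1 = connect (pass the value), 0 = disconnect
relu_as_switch = switch * x

print(np.allclose(relu_direct, relu_as_switch))   # True

Applied after a layer's dot products, the 0/1 switch simply decides which of those dot products stay connected for a given input, which is the "switched system of dot products" reading.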
@pritamH3 жыл бұрын
Make the theoretical videos on a whiteboard, because most people are familiar with that. Thank you Krish, big fan of you 🖤🖤
@hariharans94082 жыл бұрын
Such an amazing session, sir. Please put a link for this notebook in the description; that will help us revise this more.
@arpanghosh38014 жыл бұрын
can you share the github link for the code
@DeepakSaini-sg3pq4 жыл бұрын
Thank you, sir, for making this video; it is really very helpful. And sir, can you please provide us with this notebook?
@kr2ik3 жыл бұрын
@Alden Lewis 100% working
@anandhiselvi31744 жыл бұрын
Please do video on svm kernels
@rohanyewale31844 жыл бұрын
Really very Helpful !!! Can I get this notebook?
@the-ghost-in-the-machine1108 Жыл бұрын
27:45 No, the Swish activation function is not zero-centered
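A quick numeric check of that claim (a sketch, taking Swish as x * sigmoid(x)):

import numpy as np

x = np.linspace(-10.0, 10.0, 100001)
swish = x / (1.0 + np.exp(-x))     # x * sigmoid(x)

print(round(float(swish.min()), 3))   # ≈ -0.278, reached near x ≈ -1.28
print(round(float(swish.max()), 3))   # ≈ 10 here; unbounded above as x grows

The outputs are bounded below around -0.28 but grow without bound above, so they are not centered around zero.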
@kavuluridattasriharsha3 жыл бұрын
Hi Krish, can you please provide the activation functions notebook for our reference?
@imranriaz97524 жыл бұрын
I am very thankful to you, dear sir, for uploading such nice and well-explained videos. Engr. Imran Riaz, Lecturer, Department of Electrical Engineering, MUST Mirpur A.K., Pakistan
@chitramethwani47582 жыл бұрын
hi Krish.. can you pls provide the link for this notebook? Great content and nice explanation.. :)
@randhirpratapsingh97954 жыл бұрын
Hi Krish.... Well explained... Could you please help me with the Jupyter notebook for this activation functions script....
@hashimhafeez213 жыл бұрын
thank you brother for such a nice explanation.
@VisheshPanchal-c6j15 күн бұрын
Dear Krish: can you create a playlist or course for PyTorch, TensorFlow, and JAX combined? In that, you could cover all the concepts as in the D2L book.
@arijitmukherjee82932 жыл бұрын
Hi Krish, thanks for the lovely explanation. I have one question: why does zero-centered data converge faster? Can anyone explain this?
@techsavy56693 жыл бұрын
How can I get the ipynb file that you described here? Thank you.
@abhinaykumar88754 жыл бұрын
Very informative video, sir. Please, can you share the link to the notebook?
@dr.ratnapatil92723 жыл бұрын
Wonderful session.
@anjubhagat8257 Жыл бұрын
Sir, you have explained very well. Thank you. I have data on mango production from the years 1987-88 to 2021-22 and I want to apply an ANN, so please give me code for RStudio and also tell me which activation function, learning rate, and how many hidden layers are best for the problem.
@rafibasha18403 жыл бұрын
@15:00 When we multiply a negative value by 0.01, how does it become a positive value?
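For what it's worth, a tiny worked check of that step (assuming the 0.01 Leaky ReLU slope being discussed): multiplying by 0.01 does not make a negative value positive, it only shrinks it toward zero.

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x

print(leaky_relu(-5.0))   # 0.01 * -5.0 = -0.05, still negative but small
print(leaky_relu(3.0))    # positive inputs pass through unchanged: 3.0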
@shreesapkota3 жыл бұрын
Good explanation. Thank you.
@dr.pushpalathamanagement12764 жыл бұрын
very clearly explained .
@vipnirala2 жыл бұрын
Great content. Thank you.
@shubhibansal70162 жыл бұрын
Hi, where can we find jupyter notebooks or the notes for the videos?
@Dragon48OO3 жыл бұрын
Naik Sir, can you make a fully explained video on the YOLO algorithm with its working program?
@nagamanid89263 жыл бұрын
Thank you, Sir. My question is: can we use both SVM and Softmax simultaneously in a CNN for classification?
@meedoremee96224 жыл бұрын
Can we download the machine learning material that you used in your teaching?
@rafibasha18402 жыл бұрын
Hi Krish, please share the notebook you are using.
@saurabhtripathi624 жыл бұрын
thanks for updating.
@borgavejs4379 Жыл бұрын
Thanks Krish, can you please share this document?
@koustavdutta53173 жыл бұрын
@Krish Naik, sir, please kindly share the notebook. It's very much required for self-revision and from a notes point of view.
@arjyabasu13114 жыл бұрын
Sir please upload the notebook !!
@AbhishekMishra-nl6by3 жыл бұрын
In which repository can I find this file on your GitHub?
@balasaraswathiyugandher31763 жыл бұрын
16:35 ELU
@hareeshr39793 жыл бұрын
Saved my time thanks
@bcinerd2 жыл бұрын
Could you please upload this notebook in the video description?
@devmaharaj13 жыл бұрын
Super , Liked a LOTTTTTTTTTT !!!!!!!!!!!
@MrKB_SSJ22 жыл бұрын
Thanks a lot 😊
@vimalgupta862 жыл бұрын
Hi Krish, can you please provide the link for this notebook?
@anandvamsi19934 жыл бұрын
Hi Krish, Why can't we use mod x (|x|) as an activation function? It will ensure neurons are not completely deactivated + take care of the vanishing gradient issue.
@raghav_birla2 жыл бұрын
It's a good question, and in my opinion the reason is that the slope becomes 0 at the point of convergence, i.e. at 0, or anywhere on the x-axis if the activation function is shifted right or left with the help of a bias.
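For reference, a small sketch of how the slope of |x| actually behaves around 0 (plain NumPy, my own check):

import numpy as np

h = 1e-6
for x0 in (-2.0, 2.0):
    slope = (np.abs(x0 + h) - np.abs(x0 - h)) / (2 * h)
    print(x0, round(float(slope), 3))           # -2.0 -> -1.0, 2.0 -> 1.0

left  = (np.abs(0.0) - np.abs(-h)) / h          # -1.0
right = (np.abs(h) - np.abs(0.0)) / h           # +1.0
print(left, right)                              # unequal, so |x| is not differentiable at exactly 0

So the magnitude of the gradient never decays (no vanishing gradient), but the sign flips at 0, where the derivative is undefined, and the output is never negative, so it is not zero-centered either.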
@classictremonti79973 жыл бұрын
Hello...When you say the model has "2" or "3" output layers...are you really saying that the "output layer has 2 or 3 neurons"? Just trying to keep the semantics clear in my mind. Thank you!
@vishaljhaveri75653 жыл бұрын
Sir, please share the Jupyter notebook with all of us. Thank you so much. I hope you reply to us with the notebook.
@saurabhtiwari49893 жыл бұрын
Did you get it?
@nazishiqbal10462 жыл бұрын
Great work! Where can I get this notebook?
@dineshjayakumar63494 жыл бұрын
Sir, can you please share this activation function Jupyter notebook?
@priyankachore16172 жыл бұрын
Thank you sir!!!
@shynie49862 жыл бұрын
I was wondering how to access the notes in your video.
@smarttaurian302 жыл бұрын
If ReLU has a zero or one output, then why don't we use the step function?
@chinmaybhat96362 жыл бұрын
@Krish Naik Sir, can you share the Jupyter Notebook for this on GitHub???
@himanshugoel4573 Жыл бұрын
Hello Krish, I am trying to follow your complete Deep Learning playlist, but can you share a link to the Jupyter notebook or the documentation that you used in the video to explain everything? It would be a great help to read those notes shown in the video. Thanks in advance!
@sachinborgave8094 Жыл бұрын
Did you get this notebook?
@himanshugoel4573 Жыл бұрын
@@sachinborgave8094 No, I am still waiting for the response
@louerleseigneur45323 жыл бұрын
Thanks krish
@heecmat40455 ай бұрын
If sigmoid is used only in the output layer, then why did you use it in the hidden layer in the earlier videos, sir?
@saruaralam27234 жыл бұрын
Nice video, Krish. By the way, where is the video on loss functions? I couldn't find it.
@NikhilaRaoLakku Жыл бұрын
Hello Sir, Can you please share the notes that you are explaining from?
@gurucharank54913 жыл бұрын
Very nice video, sir. Kindly share the notebook.
@guddubhagat78543 жыл бұрын
Link to this notebook file please ?
@muhammedcansoy14343 жыл бұрын
Thank you so much
@akashm10272 жыл бұрын
Is there a notebook link to refer?
@PidathalaSoujanya3 ай бұрын
Hello sir, I joined as a member; how can I get the material?
@deepaklonare9497 Жыл бұрын
Can you please share the Github link for this document?
@rimanshumangal35174 жыл бұрын
Hi Krish, could you please provide the GitHub link where this notebook and the others are uploaded?
@suneel84804 жыл бұрын
Can you name the tool with which you are writing on the screen?
@Morais1154 жыл бұрын
epicpen
@МаксимВасильків-к3о2 жыл бұрын
Thanks Krish! Can I have access to your notebook?
@vivekkumarshaw64952 жыл бұрын
Where can I get these slides?
@p_saini6 ай бұрын
Can someone provide me the link to this notebook?
@datasciencewallah96203 жыл бұрын
I don't understand why you are saying we are not trying to find the derivative at 0. Why can't we find the derivative at zero for the ELU, ReLU, and Leaky ReLU functions?
@KK-rh6cd3 жыл бұрын
sir, please share this notebook
@mdmynuddin18882 жыл бұрын
Can i get the notebook?
@jokerislive16316 ай бұрын
Bro, can you share this Jupyter notebook?
@lamis_183 жыл бұрын
Can we have this amazing ipynb file?????
@nazishiqbal10462 жыл бұрын
Did you get this file?
@AbcAbc-kx3xm4 жыл бұрын
Great!
@smarttaurian302 жыл бұрын
You should have made a single video for each activation function, as it is difficult to follow from this video. You are going fast, and it becomes a bit confusing to understand.
@RehmanKhan-gb6sk Жыл бұрын
Can you share this file, please?
@laxmankusuma89944 жыл бұрын
playback speed 1.5
@gamingultimate7489 Жыл бұрын
Pooling layer in NLP
@ramakrishnayellela74559 ай бұрын
Sir, can you send the notebook PDF?
@shaikrahamatulla-jy2cg Жыл бұрын
Please provide the Jupyter notebook.
@ameenali18374 жыл бұрын
Does sir not know that the derivative of 0 is 0 itself?
@KrishnaMishra-fl6pu3 жыл бұрын
When a 0 comes through, how will the weight update happen while doing the dot product... it will become a dead activation, right?
@ganjirao70942 жыл бұрын
Good
@pratibhasawant93494 жыл бұрын
Hi Krish Naik, your videos are as usual excellent. Could you help me to know what kind of activation function this is? ψ(x) = 1 if x ≥ 0, and −1 if x < 0; the operator is chosen as ψ_t(x) = −1 + r(x + t)/|t| − r(x − t)/|t|.
@nikextracool4 жыл бұрын
Signum, but here the inputs will have zero impact on the weights, as the derivative is 0 in all regions except at 0 (where it is non-differentiable).
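A small sketch of that operator, assuming r(·) in the formula above is the ramp function max(0, x) (my reading of the notation, not confirmed in the thread):

import numpy as np

def ramp(x):
    return np.maximum(0.0, x)

def psi(x):                        # hard sign: +1 for x >= 0, -1 for x < 0
    return np.where(x >= 0, 1.0, -1.0)

def psi_t(x, t=0.5):               # -1 + r(x + t)/|t| - r(x - t)/|t|
    return -1.0 + ramp(x + t) / abs(t) - ramp(x - t) / abs(t)

x = np.array([-2.0, -0.25, 0.0, 0.25, 2.0])
print(psi(x))            # [-1. -1.  1.  1.  1.]
print(psi_t(x, 0.5))     # [-1.  -0.5  0.   0.5  1. ]: a hard-tanh-style ramp

Unlike the hard sign, psi_t has slope 1/|t| on [-t, t], so gradients can actually flow through that band, which is presumably the point of the smoothed version.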
@ArunKumar-sg6jf3 жыл бұрын
Bro, unable to join your Telegram group.
@rageadigaming81622 жыл бұрын
share the notes sir >>>>>>
@iskrabesamrtna3 жыл бұрын
Every sentence you say two times; good lecture, but so much repeating is annoying.
@laykefindley66044 жыл бұрын
Holy moly, you jump around more than a locust after snorting the entire amount of cocaine Charlie Sheen snorted before each episode of Two and a Half Men.
@sachinborgave8094 Жыл бұрын
Thanks Krish, can you please share this document?