Very nice explanation of the concept, brief and understandable. Awesome!
@vt1454 · 2 years ago
As always, great video from IBM
@John-wx3zn · 9 months ago
It is wrong.
@handsanitizer2457 · 1 year ago
Wow best explanation ever 👏
@krishnakeshav23 · 1 year ago
Good explanation. It's also worth noting that the curve should be differentiable.
@Akanniafelumo · 2 months ago
The best explanation I have ever had, at least so far
@davidrempel433 · 1 year ago
The most confusing part of this video is how he managed to write everything backwards on the glass so flawlessly
@sanataeeb969 · 1 year ago
Can't they write on their normal side and then flip the video?
@sirpsychosexy · 1 year ago
@@sanataeeb969 no that would be way too easy
@waliyudin86 · 1 year ago
Bro, just focus on the gradient descent topic
@P4INKiller · 1 year ago
@@sanataeeb969 Oh shit, you're clever.
@smritibasnet9782 · 4 months ago
Nope, he isn't writing backwards. You can observe that he seems to be using his left hand to write, but in reality his right hand was being used
@krissatish87 · 9 months ago
The best video I could find. Thank you.
@57-tycm-ii-karanshardul28 · 24 days ago
Thank you, sir.
@cyrcesarkore · 2 months ago
Very simple and clear explanation. Thank you!
@Adnanuni · 2 months ago
Thank you for such an amazing explanation, Martin. Thanks a lot, team IBM
@nurudeenmohammediyam922 · 13 days ago
What's the difference between entropy and a cost function?
@Shrimant-ub4ul · 6 months ago
Thank you, Martin. Really helpful for my uni exam
@hugaexpl0it · 1 year ago
Very good high-level explanation of the concept of GD.
@harshsonar9346 · 1 year ago
I'm always confused by these screens or boards, whatever they are. Like, how do you write on them? Do you have to write backwards, or do you write normally and it kinda mirrors it?
@FaberLSH · 5 months ago
Thank you so much!
@sotirismoschos775 · 1 year ago
Didn't know Steve Kerr works at IBM
@s.m.rakibhasan5525 · 1 year ago
great lecture
@SAZlearn_AI · 3 months ago
Let me clarify the concepts of learning rate and step size in gradient descent:

Learning rate: a hyperparameter that we set before starting the optimization process. It's a fixed value that determines how large our steps will be in general.

Step size: the actual size of each step is determined by both the learning rate and the gradient at that point. Specifically: step_size = learning_rate * magnitude_of_gradient.

So: the learning rate itself is not the size of the steps from point to point. The learning rate is a constant that helps determine how big those steps will be. The actual size of each step can vary, even with a constant learning rate, because it also depends on the gradient at each point.

To visualize this: in steep areas of the loss function (large gradient), the steps will be larger. In flatter areas (small gradient), the steps will be smaller. The learning rate acts as a general "scaling factor" for all these steps.
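The point above can be seen in a few lines of code. This is a minimal sketch (not from the video) using f(x) = x² as a stand-in loss function; with a fixed learning rate, the step taken each iteration shrinks as the gradient flattens near the minimum:

```python
def gradient(x):
    return 2 * x  # derivative of f(x) = x^2

learning_rate = 0.1  # fixed hyperparameter
x = 5.0              # starting point

for i in range(5):
    grad = gradient(x)
    step = learning_rate * grad  # actual step size depends on the gradient too
    x -= step
    print(f"iter {i}: gradient={grad:.4f}, step={step:.4f}, x={x:.4f}")
```

Running this, the steps shrink (1.0, 0.8, 0.64, ...) even though the learning rate never changes, because the gradient itself gets smaller as x approaches the minimum at 0.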
@_alekss · 2 years ago
Nice, I learned more from this 7-minute video than from an hour-long boring lecture
@velo1337 · 2 years ago
IBM: "How to make a neural network for the stock market?"
@Justme-dk7vm · 8 months ago
ANY CHANCE TO GIVE 1000 LIKES???😩
@John-wx3zn · 9 months ago
Your neural network is wrong.
@slimeminem7402 · 4 months ago
Yeah, the neurons are not fully connected (1:43)
@Rajivrocks-Ltd. · 1 year ago
I was expecting a mathematical explanation :(
@abdulhamidabdullahimagama9334 · 2 years ago
I couldn't visualise it; I saw nothing on the screen...