Note: Linear regression can technically capture nonlinearity in the inputs, but it must be linear in the parameters (a term like w1*w2 is not allowed). The sigmoid (or some other nonlinearity) in a neural network, however, is essential! Without it, the network just collapses to standard linear regression, and the hidden layer is useless.
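To make the collapse concrete, here's a minimal NumPy sketch (random illustrative weights and shapes, not from the video): two stacked linear layers with no activation between them compose into a single affine map.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))        # 5 samples, 3 input features

# Two linear layers with NO activation in between.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)
deep = (x @ W1 + b1) @ W2 + b2     # "network" with a hidden layer

# The exact same map as one affine layer:
# (x W1 + b1) W2 + b2 = x (W1 W2) + (b1 W2 + b2)
W, b = W1 @ W2, b1 @ W2 + b2
shallow = x @ W + b

print(np.allclose(deep, shallow))  # True -- the hidden layer added nothing
```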
@ab8jeh • 2 months ago
It is the hidden layer that captures non-linear patterns (which a single perceptron cannot). XOR would be the simplest example. The sigmoid helps with fitting non-linear relationships, etc.
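On the XOR point, here's a minimal sketch with hand-picked weights (illustrative, not trained): in a 2-2-1 sigmoid network, one hidden unit acts like OR, the other like AND, and the output computes OR-and-not-AND, i.e. XOR — which no single linear layer can represent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hand-picked weights (large magnitudes so the sigmoid approximates a step).
# Columns of W1 are the two hidden units: with bias -10 the first acts
# like OR(x1, x2); with bias -30 the second acts like AND(x1, x2).
W1 = np.array([[20.0, 20.0],
               [20.0, 20.0]])
b1 = np.array([-10.0, -30.0])
W2 = np.array([20.0, -40.0])    # output ~ OR AND NOT(AND) = XOR
b2 = -10.0

h = sigmoid(X @ W1 + b1)
y = sigmoid(h @ W2 + b2)
print(np.round(y, 3))           # ~ [0, 1, 1, 0]
```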
@gptLearningHub • 2 months ago
@@ab8jeh But without the nonlinearity, the feedforward network collapses into standard linear regression, and we effectively lose the hidden layer!
@AmitojSinghMiglani • 2 months ago
Go over the universal approximation theorem if you want to find the real reason for using activation functions
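For anyone following up, one standard (Cybenko-style) statement of the theorem, paraphrased from memory, is that a single hidden layer with a sigmoidal activation suffices to approximate any continuous function on a compact set:

```latex
% Universal approximation, single hidden layer, sigmoidal \sigma:
% for every continuous f on a compact K \subset \mathbb{R}^n and every
% \varepsilon > 0 there exist N and parameters \alpha_i, b_i \in \mathbb{R},
% w_i \in \mathbb{R}^n with
\left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
\qquad \text{for all } x \in K.
```

Note that without σ this sum reduces to a single affine map, so the theorem genuinely hinges on the activation function.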
@Memeathon_Dev • 2 months ago
Another banger 🔥
@u0u79z • 2 months ago
❤️🔥✨️
@thefatcat-hd6ze • 1 month ago
Where is the gradient descent vid linked?
@gptLearningHub • 1 month ago
Here it is! kzbin.info/www/bejne/mJO8laSaa9yYo6s
@vv_vv7992 • 2 months ago
When explaining linear regression, you said it can't capture non-linear relationships between variables. That's not true at all. You could've included some quadratic or higher-order term no problem, so long as it's linear in the parameters (no w^2)
@gptLearningHub • 2 months ago
@@vv_vv7992 That's true: linear regression must simply be linear in the parameters, while nonlinearity in the variables is allowed. More precisely, without the nonlinearity the "neural network" collapses into standard linear regression, and we effectively lose all the hidden nodes. Thanks for your comment!
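A minimal sketch of that first point (made-up data, not from the video): ordinary least squares fitting a quadratic — the feature x**2 is nonlinear in the variable x, but the model stays linear in the parameters w.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 50)
y = 1.5 * x**2 - 1.0 * x + 0.5 + rng.normal(scale=0.1, size=x.shape)

# Design matrix [1, x, x^2]: nonlinear in the variable, linear in w.
A = np.column_stack([np.ones_like(x), x, x**2])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(w, 2))   # ~ [0.5, -1.0, 1.5] -- plain linear regression recovers the quadratic
```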