Why Neural Networks Can Learn Any Function

10,949 views

DataMListic

a day ago

Comments: 38
@datamlistic · a year ago
Neural nets can learn any function, but they are also prone to overfitting the data. If you are interested in finding out why models overfit and underfit, make sure to check out this video: kzbin.info/www/bejne/a57FiWl_id-hfs0
@SirGisebert · 2 years ago
Clear, fluid, and approachable - good job!
@datamlistic · 2 years ago
Thank you!!
@simonetruglia · 6 months ago
This is a very good video, mate. Thanks for it!
@datamlistic · 6 months ago
Thanks! Happy to hear that you liked it! :)
@kraigochieng6395 · 8 months ago
Thanks. You explained all the parameters well; the animations of how the weights and biases affect the final output really helped. Thanks again.
@datamlistic · 8 months ago
Thanks for the feedback! Happy to hear that you liked the explanation! :)
@amatinqureshi4917 · a year ago
I was looking for exactly this type of explanation. Thank you!!!
@datamlistic · a year ago
Glad it was helpful! :)
@exxzxxe · 11 months ago
Very well done, sir!
@datamlistic · 11 months ago
Thanks! Glad you liked it! :)
@Nzargnalphabet · 11 months ago
I would suggest x/|x| as a step function for approximating discontinuous targets, but it is itself discontinuous, and actually not defined at zero. It's better to go into the base code and read the number's sign bit directly, although that really isn't very generalizable at all.
@datamlistic · 11 months ago
I've never heard of x/|x| being used as a step function. Have you seen it implemented in a neural network anywhere?
@Nzargnalphabet · 11 months ago
@@datamlistic No, it's more of a thing I've been experimenting with. I was mostly trying to find a better way to approximate the floor function, but it could still have a use here.
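As a concrete aside, a minimal NumPy sketch of the trade-off discussed in this thread (the function names are illustrative, not from the video): x/|x| matches the sign function away from zero but divides by zero at x = 0, while reading the sign directly avoids the division.

import numpy as np

def step_via_abs(x):
    # x/|x| is +1 for x > 0 and -1 for x < 0, but 0/0 occurs at x = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return x / np.abs(x)

def step_via_sign(x):
    # np.sign reads the sign directly and defines sign(0) = 0,
    # avoiding the division entirely.
    return np.sign(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step_via_abs(x))   # [-1. -1. nan  1.  1.]
print(step_via_sign(x))  # [-1. -1.  0.  1.  1.]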
@AfreediZ · 3 months ago
Great, please keep going
@datamlistic · 19 days ago
Will do, thanks!🙏
@florin-andreirusu6424 · 2 years ago
Nicely explained! Thanks!
@datamlistic · 2 years ago
Many thanks!!
@neolinksrv · a year ago
2:00 What is the operation inside the second neuron?
@datamlistic · a year ago
That's the sigmoid function. :)
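For reference, the sigmoid squashes a neuron's weighted input into (0, 1). A minimal NumPy sketch, with illustrative weight and bias values:

import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^(-z)), mapping any real z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

w, b = 3.0, -1.5           # illustrative weight and bias
x = 0.5
print(sigmoid(w * x + b))  # the neuron's output: sigmoid(0.0) = 0.5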
@uc5331 · 2 years ago
This was super helpful for me!
@datamlistic · 2 years ago
Thank you! I am glad it helped you!
@Nzargnalphabet · 11 months ago
If you could detect a discontinuity and its position, which I know to be possible using limits, you could use such a function in that case.
@iacobsorina6924 · 2 years ago
Great video! Thanks!
@datamlistic · 2 years ago
Thank you!!!
@JanKowalski-dm5vr · a year ago
Nice video, this is exactly what I was wondering about. So adding one node to a layer gives the function 2 new inflection points. But what happens to the function when we add another layer with one node?
@datamlistic · a year ago
Thanks for this! Hopefully I am not wrong in what I am about to say, but you get 2^(n+1) inflection points if you have n layers. However, the produced regions are symmetrical. Take a look at my 'why we build deep neural networks' video if you want to learn more about this. :)
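A toy sketch of the depth effect mentioned in this reply (an illustration of the symmetry point, not the video's exact construction): composing a "tent" function, which two ReLU units can express, doubles the number of linear pieces with every layer, and the pieces repeat as mirror images.

import numpy as np

def tent(x):
    # One "layer": a tent map, expressible with two ReLU units.
    return 1.0 - np.abs(2.0 * x - 1.0)

x = np.linspace(0.0, 1.0, 1001)
y = x
for _ in range(3):  # three "layers"
    y = tent(y)
# y now has 2^3 = 8 linear pieces on [0, 1]; each extra layer
# doubles the piece count, and the pieces are mirror images.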
@kadirbasol82 · a year ago
How can we fit an infinite sine wave or other infinite series? Is every input automatically mapped to a modded (modulo-reduced) output, or is there a better solution for infinite series?
@datamlistic · a year ago
That's a very interesting question! Well, do infinite problems require infinite solutions? You could theoretically fit an infinite sine wave or other infinite series using the logic explained in this video if you had an infinite number of neurons. However, I think you may need another type of neural network to fit such functions with a finite number of neurons (periodic neural nets?). Unfortunately, I am not aware of such networks, but I can search the literature a little if you are still interested.
@kadirbasol82 · a year ago
@@datamlistic Yeah, we are interested in this question. Yes, for a sine wave we can reduce the input modulo the period. But what if we don't know that the infinite series keeps repeating the same operation?
@datamlistic · a year ago
Hmm, to the best of my knowledge, and please correct me if I am wrong, it's impossible to approximate such functions over an infinite number of points without prior knowledge that the data is periodic, or without an infinite number of neurons. What's more, using the construction logic in this video, you can't perfectly approximate a function even within a bounded range without an infinite number of neurons.
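One line of work this reply gestures at uses periodic activations (in the spirit of SIREN-style networks). A minimal sketch, where the architecture, sizes, and initialization are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=(16, 1))
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=(1, 1))

def forward(x):
    # x has shape (1, n_points); sin makes the hidden features periodic,
    # so the net can represent repeating structure with finitely many units.
    h = np.sin(W1 @ x + b1)
    return W2 @ h + b2

x = np.linspace(-10.0, 10.0, 5)[None, :]
print(forward(x).shape)  # (1, 5)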
@florinrusu3409 · 2 years ago
Awesome!
@datamlistic · 2 years ago
Thanks mate!
@ikartikthakur · 5 months ago
Does that mean that if a function needs 30 steps to approximate it, you'll need 30 hidden neurons? Is that the analogy?
@datamlistic · 5 months ago
Yeah... but it's easier to visualize it this way.
@ikartikthakur · 5 months ago
@@datamlistic Yes... I really liked this animation. Thanks a lot for sharing!
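To make the step-counting analogy in this exchange concrete, a minimal NumPy sketch of the video's style of construction (the target function, step count, and steepness constant are illustrative choices): one steep-sigmoid "step" neuron per interval, so 30 steps means 30 hidden neurons.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_steps, k = 30, 200.0  # 30 steps -> 30 hidden neurons; k sets step steepness
edges = np.linspace(0.0, 2.0 * np.pi, n_steps + 1)[:-1]
targets = np.sin(edges)                              # height of each step
weights = np.diff(np.concatenate([[0.0], targets]))  # increment per neuron

x = np.linspace(0.0, 2.0 * np.pi, 1000)
# Each hidden neuron switches on (~0 -> ~1) near its edge; the output layer
# sums the weighted steps into a staircase that tracks sin(x).
approx = sum(w * sigmoid(k * (x - e)) for w, e in zip(weights, edges))
print(np.max(np.abs(approx - np.sin(x))))  # error shrinks as n_steps grows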
@SuperMaDBrothers · a year ago
But this isn't how actual NNs learn functions at all! You could reason this out with intuition alone; it's pretty obvious!
@datamlistic · a year ago
Thank you for your feedback! That is a correct observation: this is indeed not how NNs learn or how they are used in practice. However, the video presents a theoretical proof showing that, through intelligent manipulation of a NN's weights, you can approximate any continuous function. It's a nice thing to know and be aware of when working with NNs, because not all statistical models have this property.