How Does a Neural Network Work in 60 seconds? The BRAIN of an AI

84,629 views

Arvin Ash

11 months ago

Full Video here: • How the BRAIN of an AI...
This video answers the question "How do neural networks work?"
#neuralnetworks
A neuron in a neural network is a processor, which is essentially a function with some parameters. This function takes in inputs, and after processing the inputs, it creates an output, which can be passed along to another neuron. Like neurons in the brain, artificial neurons can also be connected to each other via synapses. While an individual neuron can be simple and might not do anything impressive, it’s the networking that makes them so powerful. And that network is the core of artificial intelligence systems.
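To make that concrete, here is a minimal sketch of one artificial neuron in Python; the weights, bias, and input values are made up purely for illustration, and no activation function is applied:

```python
# One artificial neuron: a function with parameters (weights and a bias)
# that processes inputs into an output for the next neuron.
def neuron(inputs, weights, bias):
    # z = w1*x1 + w2*x2 + ... + b
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# "Networking" two neurons: the first neuron's output
# becomes one of the second neuron's inputs.
out1 = neuron([0.5, 0.8], weights=[1.2, -0.4], bias=0.1)
out2 = neuron([out1, 0.3], weights=[0.7, 0.9], bias=-0.2)
print(out1, out2)  # roughly 0.38 and 0.34
```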

Comments: 67
@ArvinAsh 11 months ago
Full video on how the Brain of an AI works is here: kzbin.info/www/bejne/hKm3hYuritFggsU
@Outchange 12 days ago
Thank you 👏🏽
@muhammadfaizanalibutt4602 22 days ago
You forgot the non-linearity function.
@mrspook4789 1 month ago
Unfortunately this type of neural net has zero plasticity and cannot learn on its own. That must change someday.
@caldeira_a 1 month ago
No? It does learn, as it changes the weights and biases.
@mrspook4789 1 month ago
@@caldeira_a It's not capable of doing that while it's running, though, and the pace at which one learns is very slow. They adapt; they don't learn. It's effectively a much more advanced version of a decision tree. Liquid neural nets and spiking neural nets come much closer to learning, but we don't use those because they are more difficult to control. Also, convolutional neural nets are not temporally aware, and they can't think, as they are built to be very linear. True learning involves taking new data, understanding it using previous data, and then applying the new data in a way that is appropriate to context. Convolutional neural nets only do 50% of this: they can understand new data with existing data, but they can't really act on it without the weights being changed, which doesn't happen within the net alone. Learning would also imply a capacity for multiple tasks, which a convolutional neural net cannot do well as a consequence of its very linear design. Transformers are better than convolutional neural nets, but they have mostly the same problems. Liquid neural networks and spiking neural networks can adjust their own weights and learn autonomously without being retrained; they constantly retrain themselves, like a biological neural network.
@caldeira_a 1 month ago
@@mrspook4789 At this point you're just arguing semantics. The process you call "adapting" isn't just adaptation: it takes in its mistakes and attempts to correct them, increasing its own accuracy. Sure, it may not be self-aware, and thus not straight-up literal artificial intelligence, but it's learning nonetheless.
@mrspook4789 1 month ago
@@caldeira_a No, it isn't. That's like saying a computer learned a new task if you reprogram it to do something completely different. Traditional neural nets cannot "learn" on their own; that mechanism is applied externally. For example, a few companies once sent several chatbots onto social media apps as an experiment to watch them "learn". Technically a chatbot knows right from wrong, as that is within its knowledge, yet those chatbots became racist anyway. The reason is that their programming was altered by the statistical patterns of the language they were receiving; if they had never been retrained, they would never have become racist. They don't learn; they are adapted to serve a function, and the way that works is through backpropagation, where you already have the answer and you send it back through the neural net in a way that changes the weights and biases, literally rewriting the neural net's parameters to best match the answer. In that chatbot's case, the answer was a bunch of racism. Learning requires you to be aware of previous events and of what you are receiving, plus the ability to act upon it; convolutional neural nets do not do this, and neither do transformers, though transformers can do something kind of close to learning. Transformers are often equipped with mechanisms that give them short-term memory, which lets them look at several sentences of text and generate a response based on context; this even lets the transformer learn within the extent of its own short-term memory. But the training data is not changed, so it will always have the same behavior, and short-term memory is not unlimited, which means things learned within it will eventually be lost while the training data prevails, since that is permanent. This is where liquid neural nets, spiking neural nets, and biological neural nets like brains shine, because their training data, memory, and experience are one and the same; in a transformer or convolutional neural net they are completely separate.
@petronikandrov7593 2 months ago
One of the best explanations
@verizonextron 2 months ago
whater
@Masonicon 3 months ago
Neural networks that use negativity bias are doomerbots.
@Anaeijon 5 months ago
Good explanation and great visuals, BUT you are missing the importance of a neuron's activation function here. Without it, the whole neural network basically shrinks down to a linear regression. Adding an activation function turns the regression into something like a logistic regression. A logistic regression with a very hard cut is basically (mathematically) identical to a perceptron, which is the simplest form of a neuron. Adding multiple of these together creates a multilayer perceptron (short: MLP). Big MLPs are what we call 'artificial neural networks'.
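A minimal sketch of the point made above, assuming small random weights and no bias terms (numpy): two linear layers with no activation collapse into a single linear map, while a sigmoid between them does not.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x = rng.normal(size=2)

# Two stacked linear layers are exactly one linear layer with combined weights...
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True: "shrinks down to a linear regression"

# ...but a sigmoid activation between them breaks that collapse.
sigmoid = lambda z: 1 / (1 + np.exp(-z))
print(np.allclose(W2 @ sigmoid(W1 @ x), (W2 @ W1) @ x))  # False
```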
@WeyardWiz 24 days ago
So what is the activation function, and how does it combine with this equation, in simple terms?
@warrenarnold 7 months ago
I hate meth, I love math 😅
@PeaceNinja007 7 months ago
Are you saying my bias can be physically weighed? Cuz I surely have a heavy ass bias.
@danielmoore4311 8 months ago
Is this the linear regression equation? Why not the sigmoid equation?
@timmygilbert4102 8 months ago
This explains nothing. The multiplication is a filter, the addition is the decibel measure, the bias is the threshold. Basically, low bias encodes AND logic, high bias encodes OR logic, so it encodes a sliding logic. Two layers encode XOR logic. Therefore a neural network encodes three set operations: discrimination, composition, and equivalency.
@WeyardWiz 24 days ago
Bruh
@timmygilbert4102 23 days ago
@@WeyardWiz bruh what 🤔
@WeyardWiz 23 days ago
@@timmygilbert4102 We have no idea what you just said
@timmygilbert4102 23 days ago
@@WeyardWiz That's sad, it's English. The formula of a neuron is the sum of inputs × weights; the result is added to a bias value and submitted to the activation function, which does a thresholding, i.e. it activates if the sum is above a value defined by the bias. So the original multiplication is simply filtering the input: multiplication by zero removes the contribution of that input; by one, it lets the input value pass unchanged. Thus only relevant values are taken into account. The sum basically tells how strong a signal we have from the input after filtering. The bias shifts the sensitivity up or down before the activation function. If the signal after bias is strong enough, the activation function triggers, and its output is further processed in the next layer as input. If the bias is low, the signal doesn't have to be strong; even a single input passing through the filtering will trigger the neuron, i.e. similar to OR logic. But if the bias is high, all filtered inputs need to be high, i.e. the signal needs to be strong to activate the neuron. That's equivalent to AND logic. Any bias between low and high creates a spectrum between these two logics.
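A minimal sketch of the behavior described in that reply, under its reading of the bias as a firing threshold (function and variable names are illustrative):

```python
def step_neuron(inputs, weights, threshold):
    # Fire (1) only if the filtered, summed signal clears the threshold.
    signal = sum(x * w for x, w in zip(inputs, weights))
    return 1 if signal > threshold else 0

weights = [1, 1]  # both inputs "pass the filter"
for a in (0, 1):
    for b in (0, 1):
        or_like  = step_neuron([a, b], weights, threshold=0.5)  # low: any input fires it
        and_like = step_neuron([a, b], weights, threshold=1.5)  # high: all inputs needed
        print(a, b, "->", or_like, and_like)
```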
@WeyardWiz 23 days ago
@@timmygilbert4102 Well that's much more thorough and easier to grasp, thnx
@mlab3051 8 months ago
Missing activation function... Non-linearity is an important part of an ANN. You should not miss that.
@hdsz7738 9 months ago
I can finally add AI into my CV
@WeyardWiz 24 days ago
😂
@nasamind 9 months ago
Awesome
@derekgeorgeandrews 9 months ago
I thought the function of a neuron was slightly more involved than this? I thought it was some kind of logarithmic response to the input, not a purely linear function?
@WeyardWiz 24 days ago
Yes, it's more complicated of course, but this is the basic formula. Determining w and b is where you need crazy math lol
@user-hl6ls8sv4t 9 months ago
What elementary school did he go to ☠️
@ocean645 9 months ago
I am now seeing the importance of my discrete mathematics class.
@Oscar-vs5yw 9 months ago
This is a very dumbed-down explanation. I can understand wanting to avoid the linear algebra, but turning the dot product into a multiplication between 2 variables and calling it "elementary math" seems extremely misleading, as those 2 variables may represent thousands of values.
@baileym4708 9 months ago
Simple equation from elementary school: f(x) = Z(x) = w*x + b ... hahahaha. Maybe high school.
@______IV 9 months ago
So…nothing like organic neurons then?
@DJpiya1 10 months ago
This is not fully true: X is not simply multiplied by W. Both are vectors, and this is the dot product of W and X, not scalar multiplication.
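A minimal sketch of that vector form (numpy; the values are made up for illustration):

```python
import numpy as np

x = np.array([0.2, 0.7, 0.1])   # inputs
w = np.array([0.5, -1.0, 2.0])  # one weight per input
b = 0.3                         # bias

z = np.dot(w, x) + b            # dot product over the whole vector, then add the bias
print(z)                        # roughly -0.1
```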
@ancientheart2532 10 months ago
Simple equation from elementary school? I didn't learn functions in grade school.
@kbee225 10 months ago
So it's fitting a linear model per factor?
@ChathuraPerera 10 months ago
Very good explanation
@BackYardScience2000 10 months ago
I don't know where you went to elementary school, but we didn't learn physics or equations until at least the 6th or 7th grade, let alone things like this. Lmao!
@shivvu4461 6 months ago
Same Lmao
@arielpirante2 10 months ago
I substitute ChatGPT for Google searches. In the future maybe it will all be ChatGPT-like software; companies will fight over the AI market, and the resource to fight with is data, because AI needs data.
@hoagie911 10 months ago
... but don't most neural networks use sigmoid, not linear, functions?
@badabing3391 10 months ago
You're right, I think.
@rishianand153 5 months ago
Yeah, the sigmoid function is used to map the value you get from the linear function to the range [0,1], which is used as the activation value.
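For reference, a minimal sketch of that squashing step (the input values are arbitrary):

```python
import math

def sigmoid(z):
    # Maps any linear output z = w*x + b into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(-4.0), sigmoid(0.0), sigmoid(4.0))  # ~0.018, 0.5, ~0.982
```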
@YUSUFFAWWAZBINFADHLULLAHMoe 10 months ago
“Schematic of a simple artificial neural network”
@FrancisGo. 10 months ago
@Zeero3846 10 months ago
Training is fixing both the inputs and outputs and then solving for the weights and bias. Then, once you get the weights and bias close enough to produce the expected outputs from the given inputs, you fix the weights and bias and evaluate the outputs on arbitrary inputs, or at least inputs you weren't using in the training data. If the training went well, the outputs will largely be correct. Note, this mostly works with what's called supervised learning, which requires you to have a training data set with known inputs and outputs. One trick that's often used to increase confidence in the training process is to divide the training set into two similar sets. The first half is used for training, and the second is used to measure how well it did. The idea is that training should extrapolate well to data it was never trained on; because the second half's outputs are already known, you'll have data ready to measure the effectiveness of the training. If you move straight to inputs taken from the wild, you'll need human intervention to do the measuring, which you might as well do ahead of time.
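A minimal sketch of the holdout trick described above, on synthetic data, with least squares standing in for the training step (numpy; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5])   # known outputs, as supervised learning requires

split = 50                           # first half trains, second half measures
X_train, y_train = X[:split], y[:split]
X_test,  y_test  = X[split:], y[split:]

# "Solve for the weights" on the training half.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Measure how well the fit extrapolates to data it was never trained on.
test_error = np.mean((X_test @ w - y_test) ** 2)
print(w, test_error)
```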
@jbruck6874 10 months ago
Question: what is the reason that (numerically) "solving for weights and biases" is *possible* in practice for a larger ANN? And with simple gradient descent...!? An ANN model has 10^4 to 10^9 parameters, i.e. the equation has that many variables... In the case of nonlinear systems, one would be *very* lucky to get a solver algorithm that delivers good results. Is there a deeper conceptual answer why this works with coupled perceptron model equations?
@gpt-jcommentbot4759 10 months ago
@@jbruck6874 Because they don't just use plain gradient descent; they have extra optimizers too. As for why they generalize and don't just overfit to everything, we don't know. We just know that convolutional NNs converge onto interpretable image features, and that reverse-engineering sentiment recurrent NNs reveals line-attractor dynamics; they converge to simpler representations than theoretically possible.
@yashaswikulshreshtha1588 2 months ago
There is one principle in the world: everything works on supply and demand. Also, neural networks use atomic abstractions that create a fluid essence in which the abstractions of the outer world can be absorbed and reflected.
@chenwilliam5176 10 months ago
The mathematics used to describe an ANN is very simple ❤
@thymos6575 10 months ago
nahh, you gotta make a whole course on this, you're too good at explaining
@leafloaf5054 10 months ago
That is what I thought. He'd make us pros
@jaimeduncan6167 10 months ago
It's because he is simplifying big time. To start, it's a vector equation, and then the training and the network construction are the fun part.
@TM_Makeover 4 months ago
@@jaimeduncan6167 I wanna know more about it
@tabasdezh 10 months ago
👏
@jeevan88888 10 months ago
Except that it involves matrix multiplication.
@subhodeepmondal7937 10 months ago
To those who are fooling themselves with this video: just try to understand backpropagation 😂😂😂. It is not simple at all.
@GregMatoga 10 months ago
There are like a thousand explanatory videos about how NNs work, and like none actually using them for anything *useful*.
@OmniGuy 11 months ago
He learned this equation in ELEMENTARY SCHOOL ???
@lidarman2 10 months ago
y = mx + b ... but applied to a large-ass matrix. He oversimplified it because the training phase is very iterative and computationally intensive.
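A minimal sketch of that matrix form, i.e. one dense layer computing y = Wx + b for several neurons at once (numpy; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))  # 4 neurons, each with 3 input weights (the "m")
b = rng.normal(size=4)       # one bias per neuron (the "b")
x = rng.normal(size=3)       # input vector (the "x")

y = W @ x + b                # the whole layer in one matrix multiply
print(y.shape)               # (4,)
```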
@TM_Makeover 4 months ago
@@lidarman2 I wanna know more about it
@constantinvasiliev2065 11 months ago
Thanks. Very simply explained!
@Beerbatter1962 11 months ago
Equation of a line, y=mx+b, in matrix form.
@yashrajshah7766 11 months ago
Awesome simple explanation!
@RIVERANIEVESZ 10 months ago
Oh...no wonder...😊
@adityapatil325 10 months ago
Stolen from @3blue1brown
@snap-off5383 11 months ago
But NO, our brains couldn't be working this way, and we couldn't possibly be biological machinery... right? The main difference is that we take 24 hours or so to create a newly trained network, while AI on silicon takes a millisecond or less for an updated neural network. The "math" of AI was not only able to learn chess better than any human in 9 hours, but also to beat the best human-created program.