I came to learn, realised I'm not smart enough and stayed for the drawings.
@PandoraMakesGames 6 years ago
If you like AI applied to games you might want to give my channel a check. Cheers!
@musicalbrit3465 6 years ago
Daporan, self-advertising on someone else's channel isn't cool, mate
@PandoraMakesGames 6 years ago
I had no bad intentions, but I understand your view.
@DehimVerveen 6 years ago
If you want to learn more about Machine Learning / AI, you should give this playlist by Andrew Ng a try: kzbin.info/aero/PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN It's really great. I've found these exercises go well with the material: github.com/everpeace/ml-class-assignments/tree/master/downloads
@310garage6 5 years ago
I'm not smart enough, so I turned the sound off and looked at the pictures 😉
@camelloy 5 years ago
me, a biologist, hearing him explain biology... yeah, that's about right
@310garage6 5 years ago
Hey someone here in 2019
@aidanbrennan426 5 years ago
@@310garage6 ok boomer
@macaroon_nuggets8008 5 years ago
@@aidanbrennan426 that is not how you use it.
@aidanbrennan426 5 years ago
Macaroon_Nuggets ok boomer
@macaroon_nuggets8008 5 years ago
@@aidanbrennan426 knew you would do that
@Dreamer66617 6 years ago
By 2:29 I fully understood the concept behind neural networks... I'm a third-year comp sci student and never heard anybody explain this so perfectly. Thank you!! Very impressive!!!
@44kainne 6 years ago
Honestly, I would watch any programming course taught by you in this style.
@sovereigncataclysm 5 years ago
6:25 smooth transition there
@toast_bath5937 3 years ago
So smooth I had to click your time stamp to realize there was a transition
@the.starman 6 years ago
This is Ben "Hello, I'm Ben..." "Hello Ben" "...And I'm an anonymous neuron"
@ziquaftynny9285 6 years ago
an*
@someoneincognito6445 6 years ago
I want Ben to appear in biology books, he's a very pretty neuron.
@robertt9342 5 years ago
Isn't the neuron's name Ben? How is he anonymous?
@BillAnt 5 years ago
The sound's too low in the first part, it's making me neurotic... lol
@warmflatsprite 5 years ago
Hello.
@Chris_Cross 5 years ago
Neuroscientists are just brains trying to figure themselves out...
@AndyOpaleckych 5 years ago
Holy shet. This is too real for me :D
@maximumg99 4 years ago
Dats Deap
@notphoenixx108 4 years ago
Esphaav trouth lol
@dootanator_ 4 years ago
Christopher Dibbs, if you are being a dumbass don't worry, you are just a meat bag with electricity going through it. It is going to happen.
@sirpickle2347 4 years ago
Christopher Dibbs AAAAAAAAAAAAAAA
@nathangg9018 5 years ago
So if the brain is made up of 100 billion neurons, does that mean that if we had computers powerful enough to simulate evolution with creatures of 100 billion neurons, they could eventually become as intelligent as us?
@ardnerusa 5 years ago
A real brain has much more variety in neuron contacts than just 0 and 1. There are inhibitors, circuits, and more and more... So we're not even close to creating computers that could support it. It's easier to brute-force a password-protected RAR than to do so.
@otaku-chan4888 5 years ago
And that is why people think that Artificial Intelligence might someday develop far enough to rival human beings' smarts.
@otaku-chan4888 5 years ago
@@ardnerusa You're right, but on the other hand technology is always advancing. If you told people a hundred years ago that it's possible to send a human to that one ball in the sky that goes from crescent to gibbous and back again, or that a computer could be smaller than your hand and be used to hear someone's voice instead of being a gigantic machine as big as two rooms (and only able to do basic arithmetic lol), they wouldn't believe you. Quantum computers can already create states that are _neither 0 nor 1_, which opens the possibility for neuron contacts and probabilities. It's already possible to mimic the action of inhibitors and circuits, and perhaps in less than a century "are you human?" captchas won't even exist because there'll be no problem AI cannot solve. However, if by intelligence you mean emotional intelligence... no one can say. Honestly it boggles my mind to think that someone someday can type code which enables a computer to feel emotions like genuine frustration or excitement which isn't pre-programmed. But it may just happen, who knows.
@scptime1188 4 years ago
This is exactly why robots are so limited in their use. Not robots in general, but robots made to do a specific thing are rubbish at anything else. Our brains, however, hold multiple connections that let us do many, many, many tasks. Combine that with personality, self-awareness, intuition and creativity and you have a neural network beyond anything we can make. At least, for now.
@shot-gi6mr 3 years ago
@@otaku-chan4888 Since evolution created human emotions, it seems likely that one day, when we have neural networks as complicated as our brains, they could learn emotions through evolution algorithms. But I agree that it's probably impossible that a person would be able to manually type code that equates to emotional intelligence.
@dittygoops 4 years ago
CB: I will just run through this, you get it Me: no I don’t
@monkeyrobotsinc.9875 3 years ago
Yeah he sux ASS
@Naokarma 3 years ago
@@monkeyrobotsinc.9875 That was not the point of the comment.
@Naokarma 3 years ago
He's just saying you understand 1+1. The input he drew on the bottom right is what he's using to compare to the images on the right. If they match up, it's a +1; if not, it's a -1. Red lines = x1, blue lines = x(-1).
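The +1/-1 matching Naokarma describes can be sketched as a tiny dot product (a hypothetical illustration, not the video's actual code):

```python
# Score a drawn input against a stored pattern using +1/-1 weights.
# Pixels are +1 (filled) or -1 (empty); a matching pixel contributes +1
# to the sum, a mismatching pixel contributes -1.

def match_score(weights, pixels):
    return sum(w * x for w, x in zip(weights, pixels))

checker = [1, -1, -1, 1]                       # stored 2x2 checkerboard pattern
print(match_score(checker, [1, -1, -1, 1]))    # perfect match -> 4
print(match_score(checker, [-1, 1, 1, -1]))    # perfect mismatch -> -4
```

The highest-scoring pattern is the one the drawing most resembles.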
@youtubeuniversity3638 6 years ago
For some reason I want "a bullet of code" to be a code term.
@georgerebreev 6 years ago
A bullet of code is just semen shooting out of a shaft
@angelmurchison1731 6 years ago
WHIT3B0OY thanks, I hate it
@pranavbadrinathan6693 5 years ago
@@georgerebreev Outstanding Move
@thefreeze6023 5 years ago
Maybe for read streams and write streams: what you send into a stream can be called a bullet of code, since you *sorta* shoot it
@theterribleanimator1793 5 years ago
@@thefreeze6023 a bullet of code is the "scientific" term for having your code break so spectacularly that you just snap, grab a gun and end it.
@SiddheshNan 6 years ago
brain.exe has stopped working
@thelknetwork1883 4 years ago
Xaracen it can... under the right circumstances
@TestTest-zt1lx 4 years ago
This is the most helpful video I have seen. The other videos don’t really get into detail of how they work.
@colinbalfour1834 3 years ago
"So red is positive and blue is negative" *My life is a lie*
@Thatchxl 6 years ago
I know this video has far fewer views than some of your other videos, but I'm loving it. Please keep up this tutorial-style video and don't be discouraged. I really appreciate it!
@Tyros1192 2 years ago
Funnily enough, I am learning how neural networking works in class, and this video has been quite useful in helping me understand it better.
@illusion9423 5 years ago
I'm having an AI test in 6 hours thank you Code Bullet
@marius.1337 6 years ago
I would like the video connecting neural networks to genetic algorithms, as well as a code video. Great stuff man.
@MKBergins 2 years ago
I love your videos, and truly enjoy watching them. I appreciate the time & effort you put into making them, and would love to see more videos like this where you teach others your vast knowledge & skills. I'm barely able to make a video a month, so I totally understand the slog. Just thought I'd let you know that I think you're doing an awesome job. I've been a teacher for over a decade, and just want to extend a helping hand if you ever need help in teaching/making educational videos.
@trashcan8447 6 years ago
The only thing I heard is "mutatedembabies"
@NStripleseven 3 years ago
And that’s all you need to know...
@venusdandan4347 5 years ago
I looked away for like 2 seconds and suddenly I didn't understand and had to rewind. I love the drawings
@oddnap8288 6 years ago
These videos are great! Do you plan to do an ANN implementation/coding example, like before? I personally would find that really valuable. Also, any suggestions on practical neural network learning resources?
@APMathNerd 5 years ago
I love this, and I'd love to see a video on how to actually combine NNs and the genetic algorithm! Keep up the awesome work :D
@wolfbeats9993 2 years ago
Can someone explain where at 6:46 he got the 3rd -1 even though there are only two boxes? I understand how he got -1; it's just confusing me why there are three.
@bbenny9033 6 years ago
wtf your subs doubled since like 3 days ago nice mate
@jorian8834 6 years ago
benny boy whaha yea I am one of them, watched one video. Then the offline Google thingy one popped up in recommendations. And then I subscribed ^^ interesting stuff.
@bbenny9033 6 years ago
ye its good. nice :3
@PandoraMakesGames 6 years ago
I think you'll like my channel then. I've got AI demos and will be doing more tutorials soon. Let me know how you liked it, cheers!
@PandoraMakesGames 6 years ago
If you like this channel, then you might want to give my channel a check. It's focused around AI. More content and tutorials are coming.
@h4724-q6j 6 years ago
Daporan I'll check it out. I don't normally like advertising on other videos, but you've been nice enough.
@dylanshaffer2174 4 years ago
"...And I will probably make a video about combining neural networks and the genetic algorithm sometime in the future" ... Wish that happened, I miss these educational tutorial videos. The new ones are fun though, love your work!
@Oxmond 4 years ago
Great stuff! Thanks! 👍🤓
@austinbritton1029 3 years ago
Came here for the knowledge, subbed for the humor
@cryptophoenix6541 6 years ago
Congrats on 100K this channel is really growing fast!
@micahgilbertcubing5911 6 years ago
Cool! For my CS final project this year I'm doing a basic neural network for simple games (snake, pong, breakout)
@Anthony-kq8im 6 years ago
Good luck!
@PandoraMakesGames 6 years ago
Good luck bro, it's a lot of fun. Check my channel if you need some inspiration for fun games to do AI on.
@taranciucgabrielradu 2 years ago
Funny how I knew literally every single thing in more detail because, well... I have a computer science Master's degree. But I still stayed for the entertainment you provide because yes baby
@maxhaibara8828 6 years ago
Hi Code Bullet, I have a question about the activation function. There are a lot of activation functions, but my teacher said that the best one is sigmoid (or tanh). Why? And is it really the best just because it's a continuous function? If so, can we design our own activation function and have it actually work well? I know that in CNNs they use ReLU instead of sigmoid. Then what happens if we use sigmoid in a CNN, or even our own activation function? My teacher never answers questions seriously; they just say that it works better when you actually try it. But that still doesn't answer WHY it is the best. It might be better than the non-continuous step function, but is it better than every other activation function? And why is sigmoid (or tanh) the only continuous activation function in my book? I think this topic would make an interesting tutorial video. Thank you.
@KPkiller1671 6 years ago
The reason we use activation functions is to introduce non-linearity into the NN model; otherwise we could achieve the same thing with a single matrix multiplication. With more layers and non-linear activation functions, the model becomes a non-linear function approximator. The reason we like to use continuous functions like sigmoid, tanh and ReLU is that they are easily differentiable. There is a supervised learning technique called gradient descent through backpropagation which is used in many tasks instead of Genetic Algorithms. However, gradient descent requires computing the gradient of the weights with respect to the "cost" function (fitness function, if you want to think of it that way) of the network. This is a massive chain rule problem, and since the step function has a gradient of 0 (it is not continuous), all of the calculations become 0, making it impossible to use backpropagation. I advise you to look up backpropagation and how it works. 3Blue1Brown has an awesome video about it: kzbin.info/www/bejne/f3m9qIp8fbyUY9k Also, sigmoid is deemed weaker than tanh and ReLU these days. Lots of tanh and ReLU models are dominating, with ReLU coming out on top. (Of course it really depends on the model's context.)
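To make the differentiability point concrete, here is a minimal sketch of the common activations and their derivatives, the quantities backpropagation multiplies through the chain rule (illustrative only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # cheap: reuses the forward output

def d_tanh(x):
    return 1.0 - math.tanh(x) ** 2

def d_relu(x):
    return 1.0 if x > 0 else 0.0    # taken as 0 at x <= 0 by convention

# A step function is flat everywhere its derivative exists, so its
# gradient is 0 and every term in the chain rule collapses to 0 --
# which is why backpropagation can't train a step-activated network.
```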
@maxhaibara8828 6 years ago
KPkiller1671 but still, if we're just talking about differentiability, there might be other activation functions that are harder to differentiate but work better. Easy doesn't mean the best. About the polar value that NameName mentioned, I've never heard of that. I think I'll look it up.
@maxhaibara8828 6 years ago
NameName ah ic
@banhai2 6 years ago
I'm not sure sigmoid/tanh is the best way to go all of the time; ReLU for example reduces gradient vanishing and enforces sparsity (if below 0, then activation = 0, which translates into fewer activations, which tends to be good for reducing over-fitting). Why one is better or worse than another in different cases can hardly ever be found analytically, though.
@KPkiller1671 6 years ago
Max Haibara, you will find in machine learning that if something is faster to differentiate, your model can be trained faster. Also, ReLU has a neat advantage over all other activation functions: you can construct residual blocks for deeper networks. If the network has too many layers, an identity layer would be needed somewhere. ReLU allows the network to do this easily by just setting the weight matrix of that layer to 0s.
@tnnrhpwd 2 years ago
For anyone confused with the math at 8:00, he did not include his first step of creating the numbers in the far left column. The far left numbers change based on what input is used. I suppose this could be assumed, but it took me a second to realize it.
@zan1971 4 years ago
Not studying computer science or anything, but this was very interesting! You say stuff like weights and network all the time in your videos, so this helps explain how. Gist is: layer 1 is the input of what you want and is assigned a value. Layer 2 is calculated from layer 1 + neural connections + B. Layer 3 is calculated from layer 2 + neural connections + B. These calculations always lead to the correct output because it checks to see if the value is more than/equal or less than/equal to 1. So what are the numbers that will always give you the correct output? That is what the AI is going to decide on after lots of trial and error, I'm assuming, and it probably always starts with a random value. The neural connections are weights, which the AI also decides. So AI evolving is just them guessing the right numbers. Pretty simple.
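The layer-by-layer gist above can be sketched as a bare-bones feedforward pass. The weights and biases here are made-up numbers, not from the video; this toy network happens to compute AND:

```python
def layer(inputs, weights, biases, activate):
    # Each output neuron: weighted sum of all inputs, plus its bias,
    # pushed through the activation function.
    return [activate(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

step = lambda z: 1 if z > 0 else 0

def network(x1, x2):
    hidden = layer([x1, x2], [[1, 1], [-1, -1]], [-1, 1], step)
    return layer(hidden, [[1, -1]], [0], step)[0]

print([network(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

Evolving the network means searching for weight and bias values like these, rather than writing them by hand.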
@davisdiercks 6 years ago
Nice explanation! In future videos it might be a good idea to invest more time in volume balancing though 😂 that one talking section in the middle and the outro music absolutely blasted me lol
@filyb 6 years ago
yeees I'm so looking forward to your next video! Pls keep it up!
@גרמניתבעברית 6 years ago
I really enjoy learning from your videos
@MisterL2_yt 5 years ago
10:26 wait... no way that you freehand drew that
@christianlira1259 6 years ago
Great NN video and thank you CB!
@Amir-tv4nn 5 years ago
Fantastic man. Your videos are great...
@spencerj 5 years ago
I would greatly appreciate the followup video you mentioned about the connection of genetic weight evolution with neural networks
@Skjoldmc 6 years ago
Wow, you explained it so I could understand it. Great job!
@nigaraliyeva7607 4 years ago
Wow, very great and simple video!
@BlueNEXUSGaming 5 years ago
@Code Bullet You could use an Info Card to take people to the Video/Channel you mentioned at 5:00
@thalesfernandes4263 6 years ago
Hi, I'm trying to implement NEAT in Java too, but I'm having problems with speciation: my species are dying very fast, and my algorithm could not solve a simple XOR problem. If you made a video explaining some details about NEAT it would be pretty cool, and maybe I could find what I'm doing wrong in my code. I've been able to do several projects using FFNNs, but NEAT seems to be much better at finding solutions, especially when you do not know how many layers or neurons you need to complete the task. (I'm Brazilian and I'm going to start a computer science course soon. Your videos are very good, keep bringing more quality content to YouTube, and sorry for any spelling mistakes.)
@thomaserkes2676 6 years ago
I’m watching this and revising science at the same time, cheers mate
@nesurame 6 years ago
This video taught me more about neurons than I learned in school
@ExtraTurtle 3 years ago
Hey, just a random question. Why did you need to check 4 times on the second level? Couldn't the ones that check for black just return a negative to the third layer instead of having the two yellow ones? The 2nd row and 4th row can just connect to both of the neurons in the 3rd layer, but with a blue connection to the ones they're not currently connected to; will that not work? Edit: I realize that it won't work with the current numbers, because if you have 3 blacks and 1 white for example, it will have +2 and -1 and still be positive. But what if we make the negative one much bigger? For example, positive adds 1 and negative just makes it negative: so if there's at least one negative, send 0 no matter what; if all positive, send 1. Also, why does the second level send 0? If it sent -1 instead, wouldn't the bias just be redundant? Is this a valid neural network? prnt.sc/10b6pbo Did I make a mistake here? Is it not allowed to have blue connections on the second layer? Is it not allowed to have numbers other than -1?
@NStripleseven 3 years ago
Came for no reason, stayed because big funny and smart
@_aullik 6 years ago
Can you please upload your Dino code to your GitHub?
@NewbGamingNetworks 6 years ago
Thanks for the video, bullet!
@bill.i.am1_293 5 years ago
Hey CodeBullet. I’m an upcoming senior in HS and over the past year I’ve found a passion for coding. I’ve been trying to get into ai and ML for the past few months but with no luck. Could you go more into depth with this specific neural network?
@micahgilbertcubing5911 6 years ago
Damn this channel exploded recently!
@joridvisser6725 2 years ago
Very interesting and I'm still waiting for part 2...
@tctrainconstruct2592 5 years ago
A neuron doesn't just sum up the inputs and then apply the activation function; it also adds a "bias" to the inputs. Output = H(b + Si), where H is the activation function, b the bias and Si the sum of the inputs.
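That formula is easy to check in code — a minimal sketch of Output = H(b + Si) with a step activation (the numbers are hypothetical):

```python
def neuron(inputs, bias, activation):
    # Output = H(b + Si): the bias is added to the plain sum of inputs
    return activation(bias + sum(inputs))

step = lambda z: 1 if z > 0 else 0    # one possible activation H

# With bias -2, the neuron needs at least three +1 inputs to fire:
print(neuron([1, 1], -2, step))       # 0: two inputs aren't enough
print(neuron([1, 1, 1], -2, step))    # 1: three inputs tip it over
```

The bias effectively sets the firing threshold of the neuron.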
@yashaswikulshreshtha1588 4 years ago
The only difference between a biological neuron and an artificial neuron is that one uses chemistry to process data and the other uses pure maths.
@elijahtommy777 5 years ago
So, if you have a neuron that you want to require 3 positive connections for it to activate, then you would make the bias -2 right?
@310garage6 5 years ago
U hurt my head🤕
@elijahtommy777 5 years ago
@@310garage6 haha, oops
@ADogToy 4 years ago
I've gotten so many super long ads for programming courses. CodeB's getting serious audience targeted ads, I hope they're paying out well. Also def not skipping cuz they hit the nail on the head on this one xd
@chthonicone7389 6 years ago
Code Bullet, Ben's mother cares about his soma! Seriously though, I was thinking, and I think there is a reason why all of your asteroids playing AIs devolve into spinning and moving. The problem that causes this is your input mechanism. It is too simple. Stop me if I'm wrong, but you explained your input mechanism as having the following inputs: Whether or not there is an asteroid (possibly distance) in each of 8 directions around the ship, ship facing, and ship speed. I do not remember if you give it ship position at all, but that is irrelevant in my opinion. The problem with this setup comes from the fact that to the AI, asteroids seem to vanish from moment to moment as they pass between the directions if they can fit between 2 rays comfortably. As such, the AI really isn't tracking asteroids from moment to moment. My solution is, what if you fed the AI distance, direction, and heading of each of the closest 8 asteroids in order from farthest to closest. This will allow the AI to have some object persistence, as well as actual tracking for the objects. Likely the AIs will be able to develop more complex strategies as a result. The overhead of such an approach is that you have about 3x the number of inputs, and while it's a linear increase in the number of inputs, it may result in an exponential increase in the number of neurons. However, a good GPU will likely be able to handle this. I would be interested in how this affects the AIs you bred, and whether or not they develop more intelligent techniques with the information they would be given.
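The input scheme chthonicone7389 proposes could be sketched roughly like this (all names and the dict layout are hypothetical, just to show the shape of the feature vector):

```python
import math

def asteroid_inputs(ship, asteroids, k=8):
    """Flatten the k nearest asteroids into (distance, direction, heading)
    triples, ordered farthest to closest, padded with zeros if needed."""
    def dist(a):
        return math.hypot(a["x"] - ship["x"], a["y"] - ship["y"])
    nearest = sorted(asteroids, key=dist)[:k]
    feats = []
    for a in sorted(nearest, key=dist, reverse=True):
        feats += [dist(a),
                  math.atan2(a["y"] - ship["y"], a["x"] - ship["x"]),
                  a["heading"]]
    feats += [0.0] * (3 * k - len(feats))   # pad when fewer than k on screen
    return feats
```

Because each slot always holds the same asteroid rank, the network sees a stable view of nearby objects instead of asteroids blinking in and out between rays.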
@KPkiller1671 6 years ago
I really think you should amend your title. I believe, as it stands, a lot of newcomers to neural nets are going to think that Genetic Algorithms are the be-all and end-all of training a neural network. I got caught up in this mess myself before discovering the world of gradient descent (and other optimization techniques) and backpropagation. Of course supervised learning techniques contain a lot more maths and are a fair amount more complex, but I don't think people should be told that this is definitively how all neural networks work.
@indydiependaele2345 4 years ago
I am covering neural networks rn in Python classes in college; this was very helpful.
@shauryapatel8372 4 years ago
Thank you Code Bullet. I am a 10-year-old PCAP and I am trying to learn AI, or more specifically DL. Everywhere I search I don't understand anything, but I understood when you explained it. And again, thank you!
@professordragon 2 years ago
You should definitely do a coded example of this, even if it's 4 years late...
@nikolachristov6497 5 years ago
Sorry if I sound really dumb for saying this but wasn't the bias supposed to always give out a 1? If so why does every equation end with -1 when the bias is factored in?
@TomasTomi30 5 years ago
3Blue1Brown also made a great video about neural networks, definitely worth seeing
@SaplinGuy 5 years ago
6:30 and onwards reminded me so much of tecmath's channel... Like holy shit xD
@Т1000-м1и 3 years ago
420 comments...... there's a saying "I understood the sense of life" which means I understood a lot. So uhhh, I now understand a lot more about neural networks. The multiple-into-one idea is genius and yeah, cool. I watched this some time back, but then I didn't understand some basics of the neuroevolution stuff and yeah...... cool Edit: also I forgot to finish the 42.0 420 sense of life thing. It was meant to be at the end of the comment, the conclusion, and yeah anyways.
@HonkTheMusic 5 years ago
This was surprisingly easy to follow
@TroubleMakery 6 years ago
Where’s the next part of this series dude? I. Need. It. I need it!
@ilayws4448 5 years ago
Amazing as always!
@lostbutfreesoul 6 years ago
I still can't drop this idea of training two networks in a predator-prey format, and letting them go at it for a while....
@liam6550 5 years ago
and see what tactics each ai comes up with
@njupasupa1948 6 years ago
Thank you Ben, I got a four in biology class yesterday.
@eritra4303 4 years ago
Did anybody understand the bias? I wonder why it is -1 in this example when he told us the bias neuron is 1. Even the weight before it is 1, so I see no reason why the bias is -1 other than that it just works.
@bencematrai7355 6 years ago
Thanks! You are really inspiring :D
@aa01blue38 6 years ago
With the checkerboard pattern, you can do the exact same thing with 1 XOR gate, 2 XNOR gates and 2 AND gates
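That gate count works out for a 2x2 checkerboard detector — a quick sketch, with pixels as 0/1 in the layout [[a, b], [c, d]]:

```python
def is_checkerboard(a, b, c, d):
    differ = a ^ b                  # 1 XOR: adjacent pixels must differ
    diag1 = 1 - (a ^ d)             # XNOR: one diagonal must match
    diag2 = 1 - (b ^ c)             # XNOR: the other diagonal must match
    return differ & diag1 & diag2   # 2 ANDs combine the three checks

# Only the two checkerboard patterns fire:
print(is_checkerboard(1, 0, 0, 1))   # 1
print(is_checkerboard(0, 1, 1, 0))   # 1
print(is_checkerboard(1, 1, 0, 0))   # 0
```

Each gate here could itself be built from threshold neurons, which is the point of the comparison.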
@MinecraftingMahem 6 years ago
Please do the video combining genetic algorithm and neural networks. This is great!
@NHSynthonicOrchestra 6 years ago
Here's an idea: what about putting an AI up against a rhythm game like Guitar Hero/Clone Hero, or one like NecroDancer? Could you possibly make an AI to complete a game?
@bawlsinyojaws8938 2 years ago
YouTube has recommended this video 18 times even though I've already watched it
@Appl3Muncher 6 years ago
Very informative, thanks for the video
@pace6249 6 years ago
love ur vids man more plz
@dominiksmeda7203 6 years ago
please teach me this amazing AI. I'm waiting for more. Great Job!
@michaelboyko8083 6 years ago
At 9:59 when you re-added the bias neuron, wouldn't you subtract 1 from the second-row neurons too, or am I just confused?
@BExploit 6 years ago
A coded example would be nice. I like your videos
@dragun0v402 6 years ago
Did you draw this yourself? Well, that's something.
@SyedAli-mc7on 5 years ago
3:35 Best part of the video
@DashieOmega 4 years ago
Hey, would be great to have this final tutorial example thing combining neural network and genetic algorithms ^^
@amitkeren7771 6 years ago
Amazing vid!!! plz more
@meatkirbo 2 years ago
*sees thumbnail* "Oh let's see if I'm dumb" *watches video*
@deepslates 4 years ago
I didn't understand the "oversimplified" explanation. Imagine neuroscientists
@楊學翰-m8m 6 years ago
"Ah man this is confusing"😂
@artistanthony1007 2 years ago
So uhh if I want it to have a specific function or command, how am I supposed to get this down? I'm trying to have mine be capable of encryption but I'm not sure how I get that in.
@gauravrewaliya3269 4 years ago
You've made many videos on this, but still haven't shown how we can practically experiment with it.
@KernelLeak 6 years ago
6:30 / 11:55 RIP headphones... :( Maybe run your audio clips through a filter like ReplayGain that tries to make sure the audio has about the same overall volume before editing all that stuff together?
@ev3rything533 4 years ago
So how exactly are you using evolution combined with the neural networks? btw this was a great explanation video on explaining the neural networks. I understood the basic concept, but didn't understand how the weights correlated to actual math.
@supremespark2454 6 years ago
This is pretty simple. For those who are a bit lost, think of it as a filter, or a series of yes/no gates you have to pass through.
@QuasiGame0 5 years ago
So, in a way, aren't AI neural networks more or less a large string of AND functions with counters that increase/decrease if the result was favourable/correct or not, and then comparing the counters to determine which should be chosen in the AND function?
@Salvatorr42 5 years ago
For each neuron, rather than a question of "does the input have this feature", it's more of a question of "how well is this feature found in the input". So you might get values of 1.2 if it's detected, 0 if not, -2.6 if the opposite was found, etc. So it's not really a bunch of AND functions, but rather a linear combination of the inputs. eg if you have inputs x1, x2, x3, weights w1, w2, w3, and bias b, then you compute z = x1w1 + x2w2 + x3w3 + b. We then compute y = f(z) for some activation function f, like sigmoid. This activation function lets our network approximate non-linear functions. Also, the usual way that we train networks isn't through increasing/decreasing counters. Instead, we define a loss function which describes how badly the network performed on the given data. Then, we use calculus to figure out how our loss function changes with respect to the weights. This will tell us how to change our weights to decrease the loss, so we update the weights a tiny bit given that info. We keep repeating that until we're satisfied with the network.
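Salvatorr42's description maps almost line for line onto code — a minimal single-neuron sketch of z = x1w1 + x2w2 + x3w3 + b, y = f(z), and gradient-descent weight updates (the data and learning rate are made-up):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.5, -1.0]        # inputs (made-up)
w = [0.1, 0.1, 0.1]         # weights, updated by training
b = 0.0                     # bias
target, lr = 1.0, 0.5       # desired output and learning rate

for _ in range(200):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b    # linear combination
    y = sigmoid(z)                                  # activation
    # loss L = (y - target)^2; the chain rule gives dL/dz:
    dz = 2 * (y - target) * y * (1 - y)
    w = [wi - lr * dz * xi for wi, xi in zip(w, x)] # nudge each weight
    b -= lr * dz                                    # and the bias

# After training, the neuron's output is close to the target.
```

This is the loss-driven update Salvatorr42 contrasts with simple counters; real libraries automate the chain-rule step over whole networks.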
@QuasiGame0 5 years ago
@@Salvatorr42 Thanks for the response! I'm trying to recreate this using LittleBigPlanet's logic. Wasn't expecting a reply on a year-old video haha
@biteshtiwari 6 years ago
Keep posting on AI. I want to learn AI coding, and you have natural teaching abilities.
@abdelbarimessaoud242 6 years ago
Hello, I am new to this channel. I checked out Brilliant and already started learning ANNs, but I wonder how to actually program one. What program do you use? How do I learn to code it, and so on? Thank you in advance.
@lionel4450 4 years ago
I still don't get it... at 6:21, how do you get those calculations?
@nCUX1699 6 years ago
Even though I didn't find anything useful for me, it was a great video! Just try to get your audio a little more even throughout the video next time.
@ther701 6 years ago
0:50 Reminds me I have yet to learn the nervous system for exams