10.3: Neural Networks: Perceptron Part 2 - The Nature of Code

152,882 views

The Coding Train

Comments: 158
@adario7 7 years ago
_"Life is just one big refactoring"_ ~Daniel Shiffman, 2017
@rickmonarch4552 4 years ago
x'D Yep
@anteconfig5391 7 years ago
Now I truly understand the need for the bias. Thank you.
@josedomingocajinaramirez5086 6 years ago
Thanks man! I'm a student of physical engineering in México, and I'm learning a lot with your videos! You're great! Thanks a lot!
@numero7mojeangering 6 years ago
The math of the map function is: function map(value, minA, maxA, minB, maxB) { return (1 - ((value - minA) / (maxA - minA))) * minB + ((value - minA) / (maxA - minA)) * maxB; }
@sanchitverma2892 5 years ago
no one cares
@somedudeskilivinghislife3739 5 years ago
I care.
@sanchitverma2892 5 years ago
@@somedudeskilivinghislife3739 oof
@somedudeskilivinghislife3739 5 years ago
@@sanchitverma2892 but no lie, Numero7 Mojeagering is a nerd.
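The map() formula quoted above is the linear-interpolation ("lerp") form. A quick check in Python (used here for a self-contained test; Processing's map() behaves the same way) that it agrees with the more common formulation:

```python
def map_lerp(value, min_a, max_a, min_b, max_b):
    # The lerp form quoted in the comment above
    t = (value - min_a) / (max_a - min_a)   # normalize into [0, 1]
    return (1 - t) * min_b + t * max_b

def map_standard(value, min_a, max_a, min_b, max_b):
    # The form usually given for a map()/remap function
    return min_b + (value - min_a) / (max_a - min_a) * (max_b - min_b)

print(map_lerp(5, 0, 10, -1, 1))      # midpoint of the input range -> 0.0
print(map_lerp(400, 0, 800, 0, 2) == map_standard(400, 0, 800, 0, 2))  # True
```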
@xrayer4412 5 years ago
thank you for taking your time
@hugomocho8745 6 years ago
I wish I could have a teacher just like you. Just thank you so much, learning never seemed so fun :)
@MrSleightofhand 2 years ago
I know this is an older video, and I think you had something like this on the whiteboard at one point, but I'm not sure it was fully explained how the weights/inputs and lines are related. So if anyone is confused, hopefully this helps.

You can take the equation for a line you probably learned in school, y = mx + b, and rearrange it into this form: 0 = mx - y + b. m is the slope of the line, which we can say is m = rise/run. So: 0 = (rise/run)x - y + b. Then if we multiply everything by run we get 0 = run(rise/run)x - y(run) + b(run), which simplifies to 0 = x(rise) - y(run) + b(run).

But rise, -run and b(run) are all just arbitrary numbers, so call them p, q and r. Then the general equation for a line is: 0 = px + qy + r(1). Obviously the multiplication r(1) could just be r, but it shows how everything is related: the inputs are x, y and 1, with the coefficients p, q and r being the weights.

So a simple perceptron models a line because it's essentially a function which computes the points on a line. More specifically, the points (x, y) where the right-hand side equals zero are on the line; points where the value is negative are on one side of the line, and points where the value is positive are on the other.

(Apologies if I'm being overly pedantic here. I think you did a great job explaining potentially confusing topics in an easy-to-understand way, as you do in all your videos. This just struck me as one spot where there might be confusion, and I love this kind of thing, so I can't help myself.)
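The correspondence described above can be checked numerically. A small Python sketch (the names p, q, r follow the comment; the helper functions are illustrative, not from the video):

```python
def line_to_weights(rise, run, b):
    # y = (rise/run)*x + b  rewritten as  0 = rise*x - run*y + b*run,
    # i.e. weights (p, q, r) for the inputs (x, y, 1)
    return rise, -run, b * run

def weighted_sum(weights, x, y):
    p, q, r = weights
    return p * x + q * y + r * 1  # the third input is always 1

w = line_to_weights(1, 2, 3)   # the line y = 0.5*x + 3
print(weighted_sum(w, 0, 3))   # on the line  -> 0
print(weighted_sum(w, 0, 10))  # above it     -> negative
print(weighted_sum(w, 0, 0))   # below it     -> positive
```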
@jonathanmartincivriancamac9950 3 years ago
After so many tries, thanks to you and 3blue1brown, I have now built my first perceptron. Thank you! :D
@robinranabhat3125 7 years ago
I only know basic Python, yet I understood your videos. YOU ARE THE REAL MAN
@MrRobbyvent 3 years ago
it's very enlightening - it's all about abstraction and you can train it to do anything!
@tanmayagarwal8513 4 years ago
Thank you SOOO much!! I made a perceptron of the same kind with an accuracy score of 1.0. OMG!! I can't imagine it!! I made a perceptron! Thank you sooooo much!!
@aleidalimacias9841 2 years ago
Hello! I'm from Mexico. Your videos are great; I just subscribed, and I'm amazed at how you make it easy to learn all these concepts. You're doing a really good job and you are helping a lot of people!
@TheCodingTrain 2 years ago
Thank you!
@FederationStarShip 5 years ago
Around 19:50 you start coding it to draw the current version of the line. That's quite a nice way to do it, by making it guess at two distinct points. I spent a while doing it algebraically from the weights alone. I never thought of using the predict/guess functionality here!
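For anyone curious about the algebraic route mentioned here: solving w0*x + w1*y + w2*b = 0 for y gives the same line the guess trick draws. A Python sketch (the weight layout is assumed to match the video's (w0, w1, w2), with bias input b = 1):

```python
def guess_y(weights, x, bias=1):
    # Solve w0*x + w1*y + w2*bias = 0 for y
    w0, w1, w2 = weights
    return -(w0 * x + w2 * bias) / w1

# With weights (1, -2, 6) the boundary is y = 0.5*x + 3
print(guess_y((1, -2, 6), 0))  # 3.0
print(guess_y((1, -2, 6), 2))  # 4.0
```

Drawing a segment through (x1, guess_y(w, x1)) and (x2, guess_y(w, x2)) for two distinct x values reproduces the boundary without calling the perceptron, as long as w1 is nonzero (a vertical line has w1 = 0).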
@lehw916 3 years ago
This is the hugest whiteboard I've seen in my life!
@halomary4693 4 years ago
AWESOME LESSON - THANK you so much for all the painstaking effort to make the videos.
@lucaxtal 6 years ago
Loving your channel!! Great job!!! Processing is really cool for prototyping.
@TheCodingTrain 6 years ago
Thank you!
@ac2italy 6 years ago
linear regression: you explained gradient without mentioning it! great
@coolakin 7 years ago
you're such a delicately beautiful whiteboard scribe. love it
@ronaldluo475 2 years ago
Watching this today; this information is timeless.
6 years ago
Watching this video I did not understand much, but reading it in the book is clearer. It usually happens the other way around, I understand videos better than books, but in this case it is easier to read it written than to watch it in video.
@carsonholloway 5 years ago
21:24 - Can somebody explain to me why it's equal to zero?
@MrGenbu 5 years ago
In the perceptron drawing you can see the inputs get multiplied by weights and summed together; then you compare that to a threshold ("activation function"), which makes it an inequality: wx + wy + wb > 0. When you draw it you can just set it equal to zero; it doesn't matter.
@isaacmuscat5082 4 years ago
Sort of late, but I had trouble with this too. guessY() is supposed to return the y position of the classifier (the line of the perceptron). And since the range of the activation function is between -1 and 1, the absolute center, the divider between labeling a point green or red, is where the activation function (sign, in this case) outputs 0. Therefore the perceptron's decision boundary (the line of the perceptron) is the line along which the perceptron's prediction is 0 - the specific value at which a point is neither green nor red (although we label the point green if the activation function outputs a value >= 0). Hope that helps anyone coming here late.
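Numerically, the point of the explanations above is that the raw weighted sum is positive on one side of the line, negative on the other, and exactly zero on it; only the sign activation turns that into a +1/-1 label. A small Python illustration (weights chosen by hand, not trained):

```python
def sign(n):
    # The activation function from the video
    return 1 if n >= 0 else -1

def feedforward(weights, inputs):
    total = sum(w * i for w, i in zip(weights, inputs))
    return sign(total)

w = (1, -2, 6)  # boundary: x - 2y + 6 = 0, i.e. y = 0.5*x + 3
print(feedforward(w, (0, 0, 1)))   # below the line -> 1
print(feedforward(w, (0, 10, 1)))  # above the line -> -1
print(1 * 0 - 2 * 3 + 6 * 1)       # a point on the line: the raw sum is 0
```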
@loubion 6 years ago
Thank you so much, ML is finally understandable for me, even though it's not explained in my native language. Really, infinite thanks.
@RahulSharma-oc2qd 3 years ago
At 15:49, we could get 1 as output too if we chose a negative threshold; in that case zero would be greater than the threshold and it would fire an output of 1. Am I missing something here?
@zlotnleo 7 years ago
Since you do training in draw(), it overtrains on the input data, and any unseen data is unlikely to be classified correctly in the general case. In this case it works because the line's equation has the same form as the calculation in the perceptron. Also, splitting the dataset would allow you to estimate the accuracy and hence analyse whether any changes you make are statistically significant. On an unrelated note, might introducing higher powers of the inputs into the equation produce useful results? It's clear it would improve classification of points to either side of a parabola, but what would be the best way to generalise it to work with an arbitrary curve?
@loic.bertrand 4 years ago
There's a dead link in the description for "Source Code from my first Perceptron Coding Challenge:" ^^
@raonioliveira8758 5 years ago
I am probably a bit late for this, and correct me if I am wrong, but it didn't work because of c, not the bias. It worked anyway because the way to solve it is the same. When you have a line like ax + by + c, you have to account for the c when you train the perceptron (adding a bias worked as if you were adding a c). I hope I was able to explain it.
@lorenzopazzification 7 years ago
Can you make a function that changes the learning rate over time on its own, without any user input (sliders and so on)?
@mrrubixcubeman 7 years ago
Shouldn't an input of (0,0) be output as 1 because of the activation function? I thought that after summing everything you checked whether it was above or below 0 and then gave it a value of 1 or -1.
@julianabhari7760 7 years ago
Why does the formula that the neuron is trying to learn have to be equal to zero? The formula you wrote down was "w0(x) + w1(y) + w2(b) = 0". My question is: why is it equal to 0?
@blasttrash 7 years ago
I think it doesn't matter whether it's equal to zero or some other number; hope someone can correct me if I'm wrong.

ax + by + c = 0 can also be represented as ax + by + c = d, as you suggested. But if you take d to the LHS it becomes ax + by + (c - d) = 0. One could argue that (c - d) is itself just another constant, so we could call it k, and the equation becomes ax + by + k = 0, which has the same form as ax + by + c = 0. The value of the constant (or the bias, which we usually set to 1) doesn't really matter, as it is only there to handle that (0,0) issue he explained in the last video.

Let's take an example. Say the desired equation is x + y + 1 = 0, and say that for our algorithm we fed inputs as (0,0,2) instead of (0,0,1), meaning we changed the bias to 2 instead of 1 (because we are crazy :P). The learning starts and we end up with something like 2x + 2y + 2 = 0 (assuming the learning gives us the exact line, implying there is enough data that we don't end up with some other line that ALSO classifies our data). And 2x + 2y + 2 = 0 is the same line as x + y + 1 = 0, meaning the bias can be any number other than zero (why? because of the last video). So the bias value does not affect whether we get the final line. The bias does affect the other weights, however: with a 0.5 bias in the previous example we could end up with 0.5x + 0.5y + 0.5 = 0 or 0.25x + 0.25y + 0.25 = 0, which are all the same line as x + y + 1 = 0.

So what I am trying to say is that the bias can be anything other than 0, so equating ax + by + c = 0 is pretty much the same as ax + by + c = d (for any arbitrary d). Hope I am right and hope I helped. :D :P
@xianfenghor6635 7 years ago
I also keep thinking about this question. Can anyone kindly explain it?
@zendoclone1 7 years ago
The reason is "because math". With the equation "w0(x) + w1(y) + w2(b) = 0" we can rearrange to "w0(x) + w2(b) = -w1(y)", which then becomes "y = -w0(x)/w1 - w2(b)/w1".
@TheONLYFranzl 7 years ago
The function x*w0 + y*w1 + bias has an output which is either >= 0 or < 0. Set 1 contains all the points leading to an output >= 0; set 2 contains all the points leading to an output < 0.
@gunjanbasak8431 6 years ago
"w0(x) + w1(y) + w2(b) = 0" -> This is the equation of a line. You can write it as "ax + by + c = 0" or "y = mx + c". The actual equation for the straight line in this example is "w0(x1) + w1(x2) + w2(b) = 0". Here 'y' is the output of the neural network; 'x1', 'x2' and 'b' are the inputs of the neural network, and 'w0', 'w1' and 'w2' are the weights for the inputs. You may be confused by the 'y' notation, because he used 'y' to denote different things in different diagrams: in the equation of the straight line he used it for the Y coordinate, and in the perceptron he used it for the output of the perceptron. Hopefully that makes sense.
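The scaling argument in this thread can be verified directly: multiplying all three weights by the same positive constant leaves every classification, and hence the line, unchanged. A quick Python check:

```python
def classify(w, x, y, bias=1.0):
    return 1 if w[0] * x + w[1] * y + w[2] * bias >= 0 else -1

w = (1.0, 1.0, 1.0)                    # the line x + y + 1 = 0
w_scaled = tuple(2.5 * c for c in w)   # 2.5x + 2.5y + 2.5 = 0, same line
points = [(-2, 0.5), (0, 0), (-3, 1), (0.5, -2)]
print(all(classify(w, x, y) == classify(w_scaled, x, y) for x, y in points))  # True
```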
@ZIT116rus 6 years ago
Can't figure something out: why should the formula (w0*x + w1*y + w2*b) equal zero?
@PaulGoux 4 years ago
Not sure if you are going to read this, but the simple perceptron repo is missing.
@cameronnichols9905 7 years ago
I was trying to think about a way to do machine learning with tic-tac-toe. Maybe you could do something on this? I was thinking of having different weights for every possible placement of the X or O, depending on what is currently on the board.
@williamsokol0 4 years ago
Hmm, is it possible to make the learning rate different per weight? It seems like the bias naturally grows much more slowly than the others.
@TonyUnderscore 5 years ago
I would like to ask some questions which you didn't cover in your video. This program is meant to work with randomly generated inputs and "learn" from them, because you also give it the correct answer for each input. This process, however, is repeated every time, and because of that the machine has to "learn" everything from scratch every time. Is it possible to train it in a way that saves its data, so if you input numerous specific values it will already know which is right and which is wrong? Basically, I want to know if there is a way for the neural network to actually teach itself and then keep the "knowledge" it has obtained, instead of making more accurate guesses over and over again until you restart it. If anyone replies, keep in mind that I am extremely new to this, so try explaining everything as much as possible.
@DannyGriff97 5 years ago
Isn't this basically the same concept as a discriminant function? Similar to saving the "weights" as a discriminant function.
@lil_zcrazyg1917 5 years ago
@@DannyGriff97 Oh my! I'm great at discrimination, do you think I could be of use here?
@DannyGriff97 5 years ago
Lil_ZcrazyG not that kind of discrimination here ;)
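On the question of keeping the learned "knowledge": since a trained perceptron is fully described by its weights, persisting them is enough. A hypothetical sketch in Python (the file name and weight layout are assumptions, not from the video; a Processing sketch could do the same with saveJSONArray()/loadJSONArray()):

```python
import json

def save_weights(weights, path="weights.json"):
    # Persist trained weights so a later run can skip retraining
    with open(path, "w") as fh:
        json.dump(list(weights), fh)

def load_weights(path="weights.json"):
    with open(path) as fh:
        return json.load(fh)

save_weights([0.4, -1.2, 0.7])
print(load_weights())  # [0.4, -1.2, 0.7]
```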
@pow3rstrik3 7 years ago
If you are going to refactor, please change the x_ and y_ to x and y and just use this.x = x and this.y = y. (referring to the construction of Point)
@TheCodingTrain 7 years ago
Thanks for this feedback!
@magneticking4339 4 years ago
20:20 What if the dividing line is vertical?
@Mezklador 7 years ago
Hey Mr. Shiffman! Do you think - at the end of this video - that the gap between the 2 lines represents the error value between the training set and the formula?
@NathanK97 7 years ago
No, the perceptron just found a function that satisfied the condition... with more points closer to the line it would be a lot more accurate.
@Mezklador 7 years ago
Yeah, thank you, but I understood that: as the second line gets closer to the "primary" line, the perceptron gets more accurate. Right. But at the end, could the space between those two lines - as it appears at the end of this video - represent the error margin between the perceptron and the dataset? I'm asking because machine learning also has concepts of accuracy, confidence and error rate, used to fine-tune algorithms...
@snackbob100 4 years ago
QUESTION: say you have a data set of 10 points, with point1 = [x, y]. For point 1 the error is calculated and the weights are updated. Does the algorithm then take the previously updated weights for point 2 and update them again, with this process repeating for every point in the data set? If so, surely the order of the data points matters for the final result? For example, the weights are adjusted for point 1, then adjusted again for point 2. Couldn't the adjustment for point 2 make the adjustment for point 1 redundant, by nudging the weights out of favour for point 1 and into favour of point 2? E.g. point 1 is misclassified, the weights are adjusted until point 1 is correct; point 2 takes the updated weights and is misclassified, so the weights are updated until point 2 is correct, and now point 1 is incorrect again.
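A quick experiment supporting the question above: with per-point (online) updates, the order of the training points does change the weights you end up with, although on separable data either order still converges to some separating line given enough passes. A Python sketch with a hand-picked toy data set:

```python
def train(points, lr=0.1):
    # One online pass: each point updates the weights the previous point left
    w = [0.0, 0.0, 0.0]
    for x, y, target in points:
        guess = 1 if w[0] * x + w[1] * y + w[2] >= 0 else -1
        err = target - guess
        w[0] += err * x * lr
        w[1] += err * y * lr
        w[2] += err * 1 * lr  # bias input is 1
    return w

data = [(0.0, 1.0, 1), (1.0, 0.0, -1), (0.5, 0.9, 1)]
print(train(data))
print(train(list(reversed(data))))  # a different weight vector
```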
@nbgarrett88 5 years ago
I freaking love the Rogue NASA shirt... #Resist
@epicmonckey25001 7 years ago
Hey Dan, I had a thought about your line function: will it still work if you input the formula for a parabola? Keep up the good work, -Alex
@MoDMusse 7 years ago
Nope, doesn't work, but I don't know why
@TheCodingTrain 7 years ago
Will discuss more next stream!
@orchisamadas2222 7 years ago
The update equations for the weights will change if your function is a parabola. Taking the derivative with respect to m will now give you x^2, so maybe changing the update to error*(input^2) will work.
@ramseshendriks2445 6 years ago
well, a line is a line and not a graph
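Following the suggestion above: one way to let a single perceptron handle a parabola is to feed x*x in as an extra input, so the boundary is still linear in the features even though it is curved in (x, y). A Python sketch with hand-picked (not trained) weights encoding the boundary y = x*x:

```python
def sign(n):
    return 1 if n >= 0 else -1

def feedforward(weights, features):
    return sign(sum(w * f for w, f in zip(weights, features)))

# Features are (x, x*x, y, bias); weights (0, -1, 1, 0) encode y - x*x = 0
w = (0.0, -1.0, 1.0, 0.0)

def classify(x, y):
    return feedforward(w, (x, x * x, y, 1.0))

print(classify(0.0, 1.0))  # above the parabola -> 1
print(classify(2.0, 1.0))  # below it (1 < 4)   -> -1
```

Training then proceeds exactly as before, with the x*x feature updated like any other input.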
@XKCDism 7 years ago
Are you going to cover genetic algorithms combined with neural networks?
@TheCodingTrain 7 years ago
Yup!
@XKCDism 7 years ago
Awesome
@marcusbluestone2822 4 years ago
Why does w0x + w1y + w2b = 0? It's not working in my code.
@dominiksmeda7203 3 years ago
In my case I had to multiply the learning rate for the bias by 100 to make it work quickly. Does someone know why?
@torny6650 7 years ago
The Coding Train, could you do a basic example of unsupervised learning?
@sky96line 7 years ago
Best video in the series... kudos.
@FredoCorleone 1 year ago
How does he arrive at the conclusion that the sum w0•x + w1•y + w2•b must be zero?
@MrGenbu 5 years ago
Why the mapping between -1 and 1, and then multiplying by width and height again? I didn't get why he didn't generate them as in the last video.
@filipanjou2296 7 years ago
You didn't have to scale down the m value of the line function. Dividing 3 by 10 doesn't "scale it down" but totally changes the slope of the function. (Also, thanks for another great video!)
@TheCodingTrain 7 years ago
Thanks for this important clarification!
@algeria7527 7 years ago
Really, good job, well done, keep up doing the good stuff.
@hfe1833 5 years ago
I hope you will make another book for this
@kamilbolka 7 years ago
I have a question: how do you get displayDensity() in Processing so all my shapes stay the same size when I change the window resolution?
@agfd5659 7 years ago
Why don't you take a look at the Processing reference page: processing.org/reference/
@gufi7000 7 years ago
Dear Senpai Dan/Shiffman/Daniel/TheCrazyCoderFromP5/TheCodingTrain, I really like your videos! I attend the HTL-Braunau (Higher Technical School - Braunau) to learn coding. You are one reason why I want to learn the fascinating craft of coding. Your videos are very funny but informative... You do your things with love, and this is why I like your style! And one day I want to visit wherever you are and meet you to talk about coding things and your crazy but good ideas. I hope you will read this one day and say: "Wow... I changed someone's life." Kind regards, David F. P.S.: Sorry for my bad English (I'm a 15-year-old Austrian boy)
@Kino-Imsureq 7 years ago
;) u did gud
@S4N0I1 7 years ago
gufi7000 Hey David, greetings from Simbach 😀
@gufi7000 7 years ago
S4N0I1 Hi 🙃
@julian.2031 7 years ago
Maybe you could code a "Revelation 12 Sign" searcher? Would be nice.
@realcygnus 7 years ago
superb content... as per usual
@pradeeshbm5558 5 years ago
Can you please make a video explaining the Newton-Raphson method of optimization?
@TheCodingTrain 5 years ago
Please suggest here! github.com/CodingTrain/Rainbow-Topics/issues
@snackbob100 4 years ago
Also, is this an example of gradient descent?
@annac887 7 years ago
This model can be used for data result proximity prediction by using more complex mathematics to create algorithms that have very low incorrect information feedback. Thanks for the video.
@calebprenger3928 6 years ago
Love your videos. Better than funfunfunction. That's saying a lot.
@PrasadMadhale 6 years ago
I tried out this perceptron example in JavaScript using p5.js and it worked properly. But I was not able to visualize the line which shows the algorithm's current guess. If anyone has completed this tutorial in p5.js, would you be willing to share the code?
@TheCodingTrain 6 years ago
Take a look here: github.com/shiffman/The-Nature-of-Code-Examples-p5.js/tree/master/chp10_nn
@PrasadMadhale 6 years ago
That helped. Thanks a lot!
@FredoCorleone 1 year ago
Also, the rise-over-run analogy doesn't quite make sense, because he ends up with x•w0/w1, and that's run over rise...
@PaladinPure 7 years ago
I have a question: do you do any ActionScript tutorials?
@coffeecatrailway 7 years ago
float x, y; Point(float x, float y) { this.x = x; this.y = y; }
6 years ago
x_ y_ was so ugly to me that I started looking for a comment like this...
@macsenwyn5004 4 years ago
float f(X) says unexpected token x
@blackfox848 6 years ago
Imagine me taking one whole day to convert this into the Java programming language :) I even learned the Processing language while doing it (WOW! I am proud of myself)
@jackball9081 5 years ago
YOU ARE JUST WONDERFUL
@zunairahmed9925 7 years ago
Which programming language do you use? And any suggestions for learning it?
@marufhasan9365 7 years ago
He is using a language called Processing, which is built on Java. I haven't learned this language yet, so I can't give you any advice, but if you only want to learn Processing for this series, I don't think it is necessary. If you know Java you should be able to follow this tutorial, and learning Java would be the more practical choice in that case, if you don't know it already. But if you find Processing cool, then go right ahead and satisfy your curiosity.
@ИлијаГрбић 6 years ago
Love your videos, you are awesome!!
@zaynbaig3157 7 years ago
I am making a video game; should I use p5.js or Processing? P.S. You are awesome, man!
@zaynbaig3157 7 years ago
Fulgentius Willy Thanks! I will take that into consideration.
@asharkhan6714 6 years ago
Hello, I'm in 9th grade and I'm having some problems learning calculus. Can you recommend some resources where I can learn calculus easily?
@TheCodingTrain 6 years ago
Hello! Love hearing from high school viewers! I would recommend 3Blue1Brown's calculus series and also maybe Khan Academy videos?
@asharkhan6714 6 years ago
The Coding Train Thank you, I checked out 3Blue1Brown's Essence of Calculus series and it's amazing.
@geoffwagner4935 1 year ago
this must be how a robot knows when he's really crossed the line now
@Chevifier 2 years ago
That moment when the AI guesses the line correctly but the line you drew is wrong. (Can't figure out where I wrote something wrong) 😂
@Chevifier 2 years ago
Fixed: in Point I was checking x > lineY instead of y > lineY, lol
@grainfrizz 7 years ago
24:36 CAPTCHA of Daniel Shiffman
@Kino-Imsureq 7 years ago
btw why not use 1 instead of bias?
@calebprenger3928 6 years ago
What really should have been done in this lesson is that the training data should differ from the data used for guessing.
@TheCodingTrain 6 years ago
Great point!
@calebprenger3928 6 years ago
I think I may have meant to comment on the first video. Sorry :(
@calebprenger3928 5 years ago
I think your perceptron code link is broken. :(
@sonnymarinho 7 years ago
Guy... Thanks for your video! You're awesome! =]
@monkeysaregreat 7 years ago
I coded a version of this in Python using matplotlib (github.com/ynfle/perceptron#perceptron). Can you take a look? It seems unable to get close to the actual line, and it seems to have a consistent change in weight.
@monkeysaregreat 7 years ago
It works when y = x, but not for other numbers
@monkeysaregreat 7 years ago
It was just a bug regarding the mapping of the points
@casanpora 4 years ago
You don't know how much I appreciate this, thanks!!!
@jeffvenancius 1 year ago
It's interesting how it looks like that mutation algorithm
@kamilbolka 7 years ago
Great video!!! again...
@charbelsarkis3567 7 years ago
Can the line be a curve?
@MattRose30000 6 years ago
Charbel Sarkis, a single perceptron can only solve linear separation, so no. Dan explains this in the next video. Try changing f(x) from 2*x + 1 to x*x + 1 and you will see that it doesn't find a solution.
@TheFireBrozTFB 7 years ago
Make y = radical(x)
@marionnebuhr4598 6 years ago
Why isn't university like this?
@theColorfulRainbow 7 years ago
does anyone have the source code for this... I tried doing it on my own -> messed up -> tried fixing -> gave up -> cried -> and now begging for the source code
@shaunaksen6076 6 years ago
Here you go: github.com/ShaunakSen/Data-Science-Updated/tree/master/Math%20of%20Intelligence/The%20Coding%20Train/Simple%20Perceptron/CC_SimplePerceptron2
@joraforever9899 7 years ago
I don't think a line is a good representation of an equation. What if the equation contained the square of x or the root of x? The line would represent only the end points of the equation.
@MadSandman 7 years ago
eQuation
@prateek6502-y4p 5 years ago
Can you make videos of such coding in Python?
@lil_schub 7 years ago
It would be really cool if you could do this series in Java :D
@lorca3367 7 years ago
cure 44, Processing is built on Java and the code is basically Java
@lil_schub 7 years ago
no, Java and JavaScript are 2 totally different languages
@lorca3367 7 years ago
I'm confused, is this Java?
@lil_schub 7 years ago
no, it's JavaScript
@lorca3367 7 years ago
nah, I'm pretty sure it's Java
@cassandradawn780 4 years ago
press 4 if you're on a computer (not in the comments, just press 4)
@patrickhendron6002 10 months ago
Perceptr-AI-n 🙂
@hjjol9361 7 years ago
You again? Why do I watch your videos every day??? I don't know.
@ConstantineTvalashvili 7 years ago
25:02 \m/
@pedrovelazquez138 5 years ago
So this is the line trying to learn... boy, it's really not doing a very good job. 😂😂😂😂😂😂
@renanemilio1943 7 years ago
Geometry Dash coding challenge!!! please
@Nixomia 7 years ago
Brick Breaker game coding challenge
@howzeman 6 years ago
best minute kzbin.info/www/bejne/enjbepZ6n7Wtl8Um45s
@xzencombo3400 7 years ago
Will you make something creative and stop this machine learning xD?
@Tiara48z 7 years ago
xZen Combo, will you go do that on your channel?