_"Life is just one big refactoring"_ ~Daniel Shiffman, 2017
@rickmonarch45524 жыл бұрын
x'D Yep
@anteconfig53917 жыл бұрын
Now I truly understand the need for the bias. Thank You.
@josedomingocajinaramirez5086 6 years ago
Thanks man! I'm a physical engineering student in México, and I'm learning a lot from your videos! You're great! Thanks a lot!
@numero7mojeangering 6 years ago
The math of the map function is:

function map(value, minA, maxA, minB, maxB) {
  return (1 - ((value - minA) / (maxA - minA))) * minB
       + ((value - minA) / (maxA - minA)) * maxB;
}
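The formula in the comment is a plain linear interpolation: normalize the value into 0..1, then blend between the output bounds. A runnable sketch (plain JavaScript, mirroring the behavior of p5's map(); the variable name t is mine):

```javascript
// Re-map a value from range [minA, maxA] to [minB, maxB]:
// first normalize to t in 0..1, then linearly interpolate.
function map(value, minA, maxA, minB, maxB) {
  const t = (value - minA) / (maxA - minA); // position within the input range
  return (1 - t) * minB + t * maxB;         // blend between minB and maxB
}

console.log(map(5, 0, 10, -1, 1));    // 0 — the midpoint maps to the midpoint
console.log(map(0, 0, 10, 100, 200)); // 100
console.log(map(10, 0, 10, 100, 200)); // 200
```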
@sanchitverma28925 жыл бұрын
no one cares
@somedudeskilivinghislife37395 жыл бұрын
I care.
@sanchitverma28925 жыл бұрын
@@somedudeskilivinghislife3739 oof
@somedudeskilivinghislife37395 жыл бұрын
@@sanchitverma2892 but no lie, Numero7 Mojeagering is a nerd.
@xrayer44125 жыл бұрын
thank you for taking your time
@hugomocho87456 жыл бұрын
I wish I could have a teacher just like you. Just thank you so much, learning never seemed so fun :)
@MrSleightofhand 2 years ago
I know this is an older video, and I think you had something like this on the whiteboard at one point, but I'm not sure the relationship between the weights/inputs and lines was fully explained. So if anyone is confused, hopefully this helps.

Take the equation for a line you probably learned in school, y = mx + b, and rearrange it into this form: 0 = mx - y + b. Here m is the slope of the line, which we can write as m = rise/run, so: 0 = (rise/run)x - y + b. If we multiply everything by run we get 0 = run(rise/run)x - y(run) + b(run), which simplifies to 0 = x(rise) - y(run) + b(run).

But rise, -run and b(run) are all just arbitrary numbers, so call them p, q and r. Then the general equation for a line is 0 = px + qy + r(1). Obviously the multiplication r(1) could just be r, but it shows how everything is related: the inputs are x, y and 1, with the coefficients p, q and r being the weights. So a simple perceptron models a line because it's essentially a function which computes the points on a line. More specifically, the points (x, y) where the right-hand side equals zero are on the line; points where the value is negative are on one side of the line, and points where the value is positive are on the other.

(Apologies if I'm being overly pedantic here. I think you did a great job, on this video and in general on all your videos, explaining potentially confusing topics in an easy to understand way. This just struck me as one spot where there might be confusion, and I love this kind of thing so I can't help myself.)
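The px + qy + r idea in the comment above can be checked directly: evaluate the left-hand side and look at its sign. A small sketch (plain JavaScript; the coefficient values below are arbitrary examples of mine):

```javascript
// A line in general form: p*x + q*y + r = 0.
// The sign of the left-hand side tells you which side a point is on:
// 0 = on the line, +1 = one side, -1 = the other.
function side(p, q, r, x, y) {
  return Math.sign(p * x + q * y + r);
}

// The line y = x, written as 1*x - 1*y + 0 = 0:
console.log(side(1, -1, 0, 2, 2)); //  0 — on the line
console.log(side(1, -1, 0, 5, 1)); //  1 — one side
console.log(side(1, -1, 0, 1, 5)); // -1 — the other side
```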
@jonathanmartincivriancamac99503 жыл бұрын
after so many tries, thanks to you and 3blue1brown, now I have done my first perceptron, thank you! :D
@robinranabhat3125 7 years ago
I only know basic Python, yet I understood your videos. YOU ARE THE REAL MAN
@MrRobbyvent3 жыл бұрын
it's very enlightening - it's all about abstraction and you can train it to do anything!
@tanmayagarwal8513 4 years ago
Thank You SOOO much!! I made a perceptron of the same kind, with an accuracy score of 1.0. OMG!! I can't believe it!! I made a perceptron! Thank you sooooo much!!
@aleidalimacias9841 2 years ago
Hello! I'm from Mexico. Your videos are great; I just subscribed and I'm amazed how you make it easy to learn all these concepts. You're doing a really good job and you are helping a lot of people!
@TheCodingTrain2 жыл бұрын
Thank you!
@FederationStarShip 5 years ago
Around 19:50 you start coding it to draw the current version of the line. That's quite a nice way to do it, making it guess at two distinct points. I spent a while doing it algebraically from the weights alone; I never thought of using the predict/guess functionality here!
@lehw9163 жыл бұрын
This is the hugest whiteboard I've seen in my life!
@halomary46934 жыл бұрын
AWESOME LESSON - THANK you so much for all the painstaking effort to make the videos.
@lucaxtal6 жыл бұрын
Loving your channel!! Great job!!! Processing is really cool in prototyping.
@TheCodingTrain6 жыл бұрын
Thank you!
@ac2italy 6 years ago
Linear regression: you explained the gradient without mentioning it! Great
@coolakin7 жыл бұрын
you're such a delicately beautiful whiteboard scribe. love it
@ronaldluo4752 жыл бұрын
watching this today this information is timeless
6 years ago
Watching the video I don't understand much, but reading it in the book it's clearer. It usually happens the other way around (I understand videos better than books), but in this case it's easier to read it written than to watch it in video.
@carsonholloway5 жыл бұрын
21:24 - Can somebody explain to me why it's equal to zero?
@MrGenbu 5 years ago
In the perceptron drawing you can see the inputs get multiplied by the weights and summed together; then you compare that sum to a threshold (the activation function), which makes it an inequality: w0x + w1y + w2b > 0. When you draw the boundary line you can just set it equal to zero; it doesn't matter.
@isaacmuscat5082 4 years ago
Sort of late, but I had trouble with this too. guessY() is supposed to return the y position of the classifier (the line of the perceptron). Since the range of the activation function is between -1 and 1, the absolute center, the divider between labeling a point green or red, is where the activation function (sign in this case) outputs 0. Therefore, the perceptron's decision boundary is the line along which the perceptron's prediction is 0: the specific value at which a point is neither green nor red (although we label the point as green if the activation function outputs a value >= 0). Hope that helps anyone coming here late.
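The boundary described above can be computed by solving w0·x + w1·y + w2·b = 0 for y. A sketch (plain JavaScript; this is my own reconstruction of a guessY-style helper, not the video's exact code):

```javascript
// Decision boundary of a 2-input perceptron with bias input b (usually 1):
// w0*x + w1*y + w2*b = 0  =>  y = -(w0*x + w2*b) / w1
function guessY(weights, x, bias = 1) {
  const [w0, w1, w2] = weights;
  return -(w0 * x + w2 * bias) / w1;
}

// With weights [1, -1, 0] the boundary is y = x:
console.log(guessY([1, -1, 0], 3)); // 3
// With weights [2, 1, -4] the boundary is y = -2x + 4:
console.log(guessY([2, 1, -4], 1)); // 2
```

Drawing the line through guessY(w, 0) and guessY(w, width), as in the video, avoids doing this algebra for every frame.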
@loubion6 жыл бұрын
Thank you so much, ML is finally understandable for me, even if it's not explained in my native language. Really, infinite thanks
@RahulSharma-oc2qd 3 years ago
At 15:49, we could get 1 as output too if we chose a negative threshold value; in that case zero would be greater than the threshold and it would fire an output of 1. Am I missing something here?
@zlotnleo 7 years ago
Since you do training in draw(), it overtrains on the input data, and any unseen data is unlikely to be classified correctly in the general case. In this case it works because the line's equation has the same form as the calculation in the perceptron. Also, splitting the dataset would allow you to estimate the accuracy and hence analyse whether any changes you make are statistically significant. On an unrelated note, might introducing higher powers of the inputs into the equation produce useful results? It's clear it would improve classification of points to either side of a parabola, but what would be the best way to generalise it to work with an arbitrary curve?
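A minimal sketch of the suggested dataset split (plain JavaScript; the 80/20 ratio and the crude shuffle are my own choices):

```javascript
// Split a dataset into training and test portions so accuracy can be
// estimated on points the perceptron never saw during training.
function trainTestSplit(data, trainFraction = 0.8) {
  const shuffled = [...data].sort(() => Math.random() - 0.5); // crude shuffle
  const cut = Math.floor(shuffled.length * trainFraction);
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}

const points = Array.from({ length: 10 }, (_, i) => ({ x: i, y: i }));
const { train, test } = trainTestSplit(points);
console.log(train.length, test.length); // 8 2
```

Only the train portion would be fed to the training loop in draw(); accuracy is then the fraction of test points classified correctly.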
@loic.bertrand4 жыл бұрын
There's a dead link in the description for "Source Code from my first Perceptron Coding Challenge:" ^^
@raonioliveira8758 5 years ago
I am probably a bit late to this, and correct me if I am wrong, but it didn't work because of c, not the bias. It worked anyway because the way to solve it is the same: when you have a line like ax + by + c, you have to account for the c when you train the perceptron (adding a bias worked as if you were adding a c). I hope I was able to explain it.
@lorenzopazzification 7 years ago
Could you make a function that changes the learning rate over time on its own, without using any user input (sliders and so on)?
@mrrubixcubeman 7 years ago
Shouldn't an input of (0,0) get output as 1 because of the activation function? I thought that after summing everything you checked whether it was above or below 0 and then gave it a value of 1 or -1.
@julianabhari77607 жыл бұрын
Why does the formula that the neuron is trying to learn have to be equal to zero? The formula you wrote down was "w0(x) + w1(y) + w2(b) = 0" My question is why is it equal to 0?
@blasttrash 7 years ago
I think it doesn't matter whether it's equal to zero or some other number (hope someone can correct me if I'm wrong). ax + by + c = 0 can also be represented as ax + by + c = d, as you suggested. But if you move d to the LHS it becomes ax + by + (c - d) = 0, and one could argue that (c - d) is itself just another constant; call it k, and the equation becomes ax + by + k = 0, which has the same form as ax + by + c = 0.

The value of the constant (the bias, which we usually set to 1) doesn't really matter: it's only there to handle the (0,0) case he explained in the last video. Let's take an example. Say the desired equation is x + y + 1 = 0, and say that for our algorithm we fed inputs as (0, 0, 2) instead of (0, 0, 1), meaning we changed the bias to 2 instead of 1 (because we're crazy :P). The learning starts and we end up with something like 2x + 2y + 2 = 0 (assuming the training gives us the exact line, implying there is enough data that we don't end up with some other line that ALSO classifies our points). And 2x + 2y + 2 = 0 is the same line as x + y + 1 = 0.

So the bias can be any number other than zero (why? because of the last video), and its value won't affect whether we get the final line. It does affect the other weights, however: with a 0.5 bias in the previous example we could end up with 0.5x + 0.5y + 0.5 = 0 or 0.25x + 0.25y + 0.25 = 0, which are all the same line as x + y + 1 = 0. What I'm trying to say is that the bias can be anything other than 0, so equating ax + by + c = 0 is pretty much the same as ax + by + c = d for any arbitrary d. Hope I'm right and hope I helped. :D :P
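The scaling argument in the comment above can be verified numerically: weight vectors that are scalar multiples of each other describe the same line. A quick check of my own (plain JavaScript):

```javascript
// Convert general-form weights [a, b, c] (a*x + b*y + c = 0)
// into slope/intercept form y = m*x + k.
function slopeIntercept([a, b, c]) {
  return { m: -a / b, k: -c / b };
}

const line1 = slopeIntercept([1, 1, 1]); // x + y + 1 = 0
const line2 = slopeIntercept([2, 2, 2]); // 2x + 2y + 2 = 0 — same line
console.log(line1, line2); // both are { m: -1, k: -1 }
```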
@xianfenghor6635 7 years ago
I keep thinking about this question too. Can anyone kindly explain it?
@zendoclone17 жыл бұрын
The reason is "because math". With the equation "w0(x)+w1(y)+w2(b)=0" we can then make this "w0(x)+w2(b)=-w1(y)" which then becomes "y = -w0(x)/w1-w2(b)/w1"
@TheONLYFranzl 7 years ago
The function x*w0 + y*w1 + bias has an output which is either >= 0 or < 0. Set 1 contains all the points leading to an output >= 0; set 2 contains all the points leading to an output < 0.
@gunjanbasak8431 6 years ago
"w0(x) + w1(y) + w2(b) = 0" -> This is the equation of a line. You can write it this way -> "ax + by + c = 0" or "y = mx + c". The actual equation for the straight line in this example is "w0(x1) + w1(x2) + w2(b) = 0". Here 'y' is the output of the neural network; 'x1', 'x2' and 'b' are the inputs, and 'w0', 'w1' and 'w2' are the weights for the inputs. You may be confused by the 'y' notation, because he used 'y' to denote different things in different diagrams: in the equation of the straight line he used it for the Y-coordinate, and in the perceptron he used it for the output of the perceptron. Hopefully that makes sense.
@ZIT116rus6 жыл бұрын
Can't figure out something. Why does (w0*x + w1*y + w2*b) formula should equal to zero?
@PaulGoux 4 years ago
Not sure if you are going to read this, but the simple perceptron repo is missing.
@cameronnichols99057 жыл бұрын
I was trying to think about a way to have machine learning with tic-tac-toe. Maybe you could do something on this? I was thinking having different weights for every possible placement of the X or O, depending on what is currently on the board.
@williamsokol0 4 years ago
Hmm, is it possible to make the learning rate different per weight? It seems like the bias naturally grows much slower than the others.
@TonyUnderscore 5 years ago
I would like to ask some questions which you didn't cover in your video. This program you made is meant to work with randomly generated inputs and "learn" from them, because you also give it the correct answer for each input. This process, however, is repeated every time, and because of that the machine has to "learn" everything from scratch every time. Is it possible to train it in a way that it saves its data, so if you decide to input numerous specific values it will already know which is right and which is wrong? Basically, I want to know if there is a way for the neural network to actually teach itself and then keep the "knowledge" it has obtained, instead of making more accurate guesses over and over again until you restart it. If anyone replies, keep in mind that I am extremely new to this, so try explaining everything as much as possible.
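To the question above: the perceptron's learned "knowledge" is just its weight array, so persisting it is plain serialization. A sketch (plain JavaScript; the JSON shape and function names are my own — in p5.js you could hand the string to storeItem() or use saveJSON()):

```javascript
// Everything the perceptron has learned lives in its weights,
// so saving/restoring a model is just serializing that array.
function saveWeights(weights) {
  return JSON.stringify({ weights }); // write this to a file / localStorage
}

function loadWeights(json) {
  return JSON.parse(json).weights;
}

const trained = [0.31, -0.87, 0.02]; // hypothetical weights after training
const restored = loadWeights(saveWeights(trained));
console.log(restored); // [0.31, -0.87, 0.02]
```

A restored perceptron classifies immediately, with no retraining needed.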
@DannyGriff97 5 years ago
Isn't this basically the same concept as a discriminant function? Similar to saving the "weights" as a discriminant function.
@lil_zcrazyg19175 жыл бұрын
@@DannyGriff97Oh my! I'm great at discrimination, do you think I could be of use here?
@DannyGriff975 жыл бұрын
Lil_ZcrazyG not that kind of discrimination here ;)
@pow3rstrik3 7 years ago
If you are going to refactor, please change the x_ and y_ to x and y and just use this.x = x and this.y = y (referring to the constructor of Point).
@TheCodingTrain7 жыл бұрын
Thanks for this feedback!
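The refactor suggested above, sketched as a JavaScript class (the Processing/Java version works the same way, since this.x disambiguates the field from the constructor parameter):

```javascript
// Constructor parameters may share names with fields:
// `this.x` refers to the field, plain `x` to the parameter,
// so no trailing-underscore names are needed.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

const p = new Point(3, 4);
console.log(p.x, p.y); // 3 4
```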
@magneticking43394 жыл бұрын
20:20 What if the dividing line is vertical?
@Mezklador7 жыл бұрын
Hey Mr. Shiffman! Do you think - at the end of this video - that the gap between the 2 lines represents the error value between the training set and the formula?
@NathanK977 жыл бұрын
no the perceptron just found a function that satisfied the condition.... with more points closer to the line it would be a lot more accurate
@Mezklador 7 years ago
Yeah, thank you, but I had understood that: as the second line gets closer to the "primary" line, the perceptron gets more accurate. Right. But at the end, the gap between those two lines, as it appears at the end of this video, could represent the error margin between the perceptron and the dataset, couldn't it? I'm asking because in machine learning there are also concepts like accuracy, confidence and error rate used to fine-tune algorithms...
@snackbob100 4 years ago
QUESTION: say you have a data set of 10 points, point1 = [x, y], and so on. For point 1 the error is calculated and the weights are updated. Does the algorithm then take the previously updated weights for point 2 and update them again, with this process repeating for all points in the data set? If so, surely the order of the data points matters to the final result? For example, the weights are adjusted for point 1, then adjusted for point 2; the adjustment for point 1 could now be undone, as point 2 has nudged the weights out of favour of point 1 and into the favour of point 2. E.g.:

point 1 = incorrect classification
weights adjusted due to error in point 1
point 1 = correct classification
point 2 takes the updated weights; point 2 is incorrect
weights updated; point 2 is correct, point 1 is incorrect again
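The order-dependence described above is real for online updates. A tiny demonstration (plain JavaScript; the update rule follows the video's weight += learning rate × error × input, but the two points, the learning rate and the zero starting weights are my own arbitrary choices):

```javascript
// One pass of online perceptron training: each point updates the
// weights left behind by the previous point, so order matters.
const lr = 0.1;
const activate = (sum) => (sum >= 0 ? 1 : -1);

function trainOnce(points) {
  const w = [0, 0, 0]; // weights for [x, y, bias]
  for (const { x, y, target } of points) {
    const inputs = [x, y, 1];
    const guess = activate(w[0] * x + w[1] * y + w[2]);
    const err = target - guess;
    for (let i = 0; i < 3; i++) w[i] += lr * err * inputs[i];
  }
  return w;
}

const A = { x: 1, y: 2, target: 1 };
const B = { x: 2, y: -1, target: -1 };
console.log(trainOnce([A, B])); // [-0.4, 0.2, -0.2]
console.log(trainOnce([B, A])); // different weights for the same data
```

In practice this is why training loops typically shuffle the data and make many passes: the repeated small nudges converge to *a* separating line, even though the exact weights depend on the order seen.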
@nbgarrett885 жыл бұрын
I freaking love the Rogue NASA shirt... #Resist
@epicmonckey250017 жыл бұрын
Hey Dan, I had a thought about your line function, will it still work if you input the formula for a parabola? Keep up the good work, -Alex
@MoDMusse7 жыл бұрын
Nope, doesn't work, but don't know why
@TheCodingTrain7 жыл бұрын
Will discuss more next stream!
@orchisamadas22227 жыл бұрын
The update equations for the weights will change if your function is a parabola. Taking the derivative with respect to m will now give you x^2, so maybe changing the update to error*(input^2) will work.
@ramseshendriks24456 жыл бұрын
well a line is a line and not a graph
@XKCDism7 жыл бұрын
Are you going to cover genetic algorithms combined with neural networks?
@TheCodingTrain7 жыл бұрын
Yup!
@XKCDism7 жыл бұрын
Awesome
@marcusbluestone28224 жыл бұрын
Why does w0x + w1y + w2b = 0? It's not working for my code
@dominiksmeda7203 3 years ago
In my case I had to multiply the learning rate for the bias by 100 to make it work quickly. Does anyone know why?
@torny66507 жыл бұрын
the coding train, could you do some basic example of unsupervised learning?
@sky96line7 жыл бұрын
best video in series.. kudos.
@FredoCorleone 1 year ago
How does he arrive at the constraint that the sum w0·x + w1·y + w2·b must be zero?
@MrGenbu 5 years ago
Why the mapping between -1 and 1 and then multiplying by the width and height again? I didn't get why he didn't generate them as in the last video.
@filipanjou22967 жыл бұрын
You didn't have to scale down the m value of the line function. Dividing 3 by 10 doesn't "scale it down" but totally changes the slope of the function. (Also, thanks for another great video!)
@TheCodingTrain7 жыл бұрын
Thanks for this important clarification!
@algeria7527 7 years ago
Really good job, well done, keep up the good stuff.
@hfe18335 жыл бұрын
I hope you will make another book for this
@kamilbolka 7 years ago
I have a question: how do you use displayDensity in Processing so all my shapes stay the same size when I change the window resolution?
@agfd56597 жыл бұрын
Why don't you take a look at the Processing reference page: processing.org/reference/
@gufi7000 7 years ago
Dear Senpai Dan/Shiffman/Daniel/TheCrazyCoderFromP5/TheCodingTrain, I really like your videos! I attend the HTL-Braunau (Higher Technical School, Braunau) to learn coding. You are one reason why I want to learn the fascinating art of coding. Your videos are very funny but informative... You do your things with love and this is why I like your style! And one day I want to visit wherever you are and meet you to talk about coding things and your crazy but good ideas. I hope you will read this one day and say: "Wow... I changed someone's life." Lg. David F. Ps.: Sorry for my bad English (I'm a 15-year-old Austrian boy)
@Kino-Imsureq7 жыл бұрын
;) u did gud
@S4N0I1 7 years ago
gufi7000 Hey David, greetings from Simbach 😀
@gufi70007 жыл бұрын
S4N0I1 Moin 🙃
@julian.20317 жыл бұрын
Maybe you could code a "Revelation 12 Sign" searcher? Would be nice.
@realcygnus7 жыл бұрын
superb content.....as per usual
@pradeeshbm55585 жыл бұрын
Can you please make a video to explain Newton raphson method of optimization...
This model can be used for data result proximity prediction by using more complex mathematics to create algorithms that have very low incorrect information feedback. Thanks for the video.
@calebprenger3928 6 years ago
Love your videos. Better than funfunfunction. That's saying a lot.
@PrasadMadhale 6 years ago
I tried out this perceptron example in JavaScript using p5.js and it worked properly. But I was not able to visualize the line which shows the algorithm's current guess. If anyone has completed this tutorial in p5.js, would you be willing to share the code?
@TheCodingTrain6 жыл бұрын
Take a look here: github.com/shiffman/The-Nature-of-Code-Examples-p5.js/tree/master/chp10_nn
@PrasadMadhale6 жыл бұрын
That helped. Thanks a lot!
@FredoCorleone 1 year ago
Also, the rise-over-run analogy doesn't quite make sense, because he ends up with x·w0/w1, and that's run over rise...
@PaladinPure7 жыл бұрын
I have a question, do you do any ActionScript tutorials?
x_ y_ was so ugly to me that i started looking for comment like this...
@macsenwyn50044 жыл бұрын
float f(X) says unexpected token x
@blackfox848 6 years ago
Imagine me taking one whole day to convert this into the Java programming language :) I even learned the Processing language while doing it (WOW! I am proud of myself)
@jackball9081 5 years ago
YOU ARE JUST WONDERFUL
@zunairahmed99257 жыл бұрын
which programming language do you use.? and suggestions to learn it
@marufhasan9365 7 years ago
He is using a language called Processing, which is built on Java. I haven't learned this language yet so I can't give you any advice, but if you only want to learn Processing for this series, I think it is not necessary. If you know Java you should be able to follow this tutorial; learning Java would be the more practical choice in that case, if you don't know it already. But if you find Processing cool then go right ahead and satisfy your curiosity.
@ИлијаГрбић6 жыл бұрын
Love your videos, you are awesome!!
@zaynbaig3157 7 years ago
I am making a video game; should I use p5.js or Processing? P.S. you are awesome man!
@zaynbaig31577 жыл бұрын
Fulgentius Willy Thanks! I will take that into consideration.
@asharkhan67146 жыл бұрын
Hello, I'm in 9th grade and I'm having some problems in learning calculus. So, can you recommend me some resources where I can learn calculus easily?
@TheCodingTrain6 жыл бұрын
Hello! Love hearing from high school viewers! I would recommend 3Blue1Brown's calculus series and also maybe Khan academy videos?
@asharkhan67146 жыл бұрын
The Coding Train Thank you, I checked out 3blue1brown's essence of calculus series and it's amazing.
@geoffwagner4935 Жыл бұрын
this must be how a robot knows when he's really crossed the line now
@Chevifier2 жыл бұрын
That moment when the AI guesses the line correctly but the line you make is wrong,(Cant figure out where I wrote something wrong)😂
@Chevifier2 жыл бұрын
fixed on the Point i was checking if x > lineY instead of y > lineY lol
@grainfrizz7 жыл бұрын
24:36 CAPTCHA of Daniel Shiffman
@Kino-Imsureq7 жыл бұрын
btw why not use 1 instead of bias?
@calebprenger39286 жыл бұрын
What really should have been done on this lesson is the training data should differ from the data for guessing.
@TheCodingTrain6 жыл бұрын
Great point!
@calebprenger39286 жыл бұрын
I think i may have meant to comment on the first video. Sorry :(
@calebprenger39285 жыл бұрын
I think your perceptron code link is broken. :(
@sonnymarinho7 жыл бұрын
Guy... Thanks for your video! You're awsome! =]
@monkeysaregreat 7 years ago
I coded a version of this in Python using matplotlib (github.com/ynfle/perceptron#perceptron). Can you take a look? It seems unable to get close to the actual line, and there seems to be a consistent change in weight.
@monkeysaregreat7 жыл бұрын
It works when y = x, but not other numbers
@monkeysaregreat7 жыл бұрын
It was just a bug regarding the mapping of the points
@casanpora4 жыл бұрын
You don't know how much I appreciate this, thanks!!!
@jeffvenancius Жыл бұрын
It's interesting how it looks like that mutation algorithhm
@kamilbolka7 жыл бұрын
Great Video!!! again...
@charbelsarkis35677 жыл бұрын
Can the line be a curve
@MattRose30000 6 years ago
Charbel Sarkis a single perceptron can only solve linear separation, so no. Dan explains this in the next video. Try changing the f(x) from 2*x + 1 to x*x + 1 and you will see that it doesn't find a solution.
@TheFireBrozTFB7 жыл бұрын
Make y = radical(x)
@marionnebuhr45986 жыл бұрын
Why isn't university like this?
@theColorfulRainbow7 жыл бұрын
does anyone have the source code for this...I tried doing it on my own -> messed up -> tried fixing -> gave up -> cried -> and now begging for the source code
@shaunaksen60766 жыл бұрын
Here you go: github.com/ShaunakSen/Data-Science-Updated/tree/master/Math%20of%20Intelligence/The%20Coding%20Train/Simple%20Perceptron/CC_SimplePerceptron2
@joraforever9899 7 years ago
I don't think a line is a good representation of every equation: what if the equation contained the square of x or the root of x? The line would represent only the end points of the equation.
@MadSandman7 жыл бұрын
eQuation
@prateek6502-y4p5 жыл бұрын
Can u make videos of such coding in python!!
@lil_schub7 жыл бұрын
It would be really cool if u could do this series with java :D
@lorca33677 жыл бұрын
cure 44 processing is built on java and the code is basically java
@lil_schub7 жыл бұрын
no, java and javascript are 2 totally different languages
@lorca33677 жыл бұрын
im confused this is java?
@lil_schub7 жыл бұрын
no, its javascript
@lorca33677 жыл бұрын
nah im pretty sure its java
@cassandradawn7804 жыл бұрын
press 4 if you're on computer (not in comments, just press 4)
@patrickhendron600210 ай бұрын
Perceptr-AI-n 🙂
@hjjol93617 жыл бұрын
You again ? Why did i look your video each day ??? i don't know.
@ConstantineTvalashvili7 жыл бұрын
25:02 \m/
@pedrovelazquez1385 жыл бұрын
So this is the line trying to learn... boy, its really not doing a very good job. 😂😂😂😂😂😂
@renanemilio19437 жыл бұрын
geometry dash coding challenge!!! plis
@Nixomia7 жыл бұрын
Brick Breaker Game Coding Challenge
@howzeman6 жыл бұрын
best minute kzbin.info/www/bejne/enjbepZ6n7Wtl8Um45s
@xzencombo34007 жыл бұрын
Will you make something creative and stop this machine learning xD?