Create a Simple Neural Network in Python from Scratch

775,878 views

Polycode

Comments: 713
@JonasBostoen 6 years ago
In the next video we’re going to be making a blockchain in JavaScript, so subscribe if you’re interested in that stuff!
@SoumilShah 6 years ago
Great video, made everything so easy
@CrypticConsole 5 years ago
Dow stupid schools blocked pip and zip archives so I can't install numpy
@itstatanka 5 years ago
Which compiler did you use?
@siddhant5697 4 years ago
In which software are you coding??
@frankynakamoto2308 4 years ago
Polycode, can the neurons and inputs be placed together, like neurons with a lot of built-in data?? Also, I need a very powerful neural network for several different purposes (speech, Face ID and math problem solving). Do you have something open source that you made that you can share with me??
@johnc3403 5 years ago
"stay with me, it's gonna be ok"... dude, that's such a lovely sentiment. You were born to teach I think, with that ability to keep pupils onboard. Very good video my man, thank you so much..
@mohamedsuhailirfankhazi6628 4 years ago
My friend, your explanation in 15 minutes gave more clarity to me than hours of crash course tutorials online. So simple and well explained. Awesome stuff my man!
@morphman86 5 years ago
After watching hyper-advanced tensorflow/keras stock market prediction tutorials for a while, being completely lost, I stumbled on this. I finally, after weeks of trying to learn NN and decades of practical programming experience, understand it. The iterative backpedaling was what confused me with all of those other videos, but taken down to its most simple form, like in this video, I can now see that it's merely looking at what it got, what it was trying to get and make adjustments to the appropriate synapses based on that, then trying again. It's not the maths that confused me, it's how the machine actually learned. And that was perfectly demonstrated in this video. Thank you!
@RandageJr 5 years ago
Do you know where I can find these tutorials? It would be very helpful for me, thanks!
@jacobokomo1880 4 years ago
Kindly feel free to share with us who the teacher was who took you through the previous tutorials. However, this teacher is doing well. Credits 💪
@Swetagovi 4 years ago
B
@morphman86 4 years ago
@Isaiah _ Neural Network
@KennTollens 4 years ago
I agree too. So many videos complicate and dance around simple mechanics. Knowing the flow of the engine and the simple concept of what is happening, the other videos might make more sense now that I can put it into context.
@djjjo6130 4 years ago
“Stay with me, it’s gonna be okay” that makes me feel like I’m actually learning something and not just being told something
@MC_MrOreo 3 years ago
(I know I’m late but) Literally came to the comment section about this 😂
@hfe1833 5 years ago
What the?... This is it, finally I found a good tutorial
@Pancake3000 4 years ago
same lol Ive finally can actually flippin understand thank much +1 sub i can english.
@scottpatterson9136 4 years ago
I agree
@koksem 4 years ago
ye someone finally explains what it is XD
@mariomuysensual 3 years ago
same!
@arifmeighan3162 3 years ago
This tutorial is a perfect blend of talking/programming and slides. It's also quick and to the point 8)
@mattisaderp8929 5 years ago
"stay with me it's gonna be okay"
@wirly- 4 years ago
TypeError: '
@henil0604 4 years ago
@@wirly- Loll
@chinmayhattewar4456 4 years ago
@@wirly- hahaha
@Awesomer5696 5 years ago
What a fantastic way of explaining it. Whilst this is obviously not immediately useful, it's a sort of toy approach that gives you a building block to understand the greater scope.
@hdluktv3593 4 years ago
I watched a lot of videos about machine learning because I wanted to understand how it works. None of those videos explained as well as yours how a neuron and the weight adjustment actually work. Good work, now I finally understand it.
@brehontechologies 5 years ago
Finally, a clear, straightforward tutorial to code along with. GREAT JOB!
@nocopyrightgameplaystockvi231 3 years ago
Line 16: synaptic_weights = 2 * np.random.random((3,1)) - 1. This line makes an array of 3x1, i.e. a matrix of size 3x1. I did not understand this line until I tried it separately; that makes the random initialization easy to grasp. But as I learned in Soft Computing in my BTech, you can also directly initialize the weights as 1, and they will then get adjusted during training. You can replace the line with: synaptic_weights = np.array([[1,1,1]]).T. THANKS TO YOU for making this short and easy tutorial!
@Retriiiii 9 months ago
Hey, can you tell me why we are multiplying by 2 and subtracting 1?
@nocopyrightgameplaystockvi231 9 months ago
@@Retriiiii where??
@Retriiiii 9 months ago
@@nocopyrightgameplaystockvi231 The "2 *" and the "- 1" in: 2 * np.random.random((3,1)) - 1
@calmo15 5 years ago
Amazing video, too few sources cover the absolute basics. However, can you please crank your volume up!
@shimuk8 6 years ago
I joined my university 2 months late and absolutely had no idea how to learn the neural network project topic I had missed, and then I saw your video!!! Thanks a lot dude!!! For saving my semester HAHAHA
@JonasBostoen 6 years ago
meaaaww hahaha nice, share it with any of your buddies if you think they need it ;-)
@shimuk8 6 years ago
@@JonasBostoen Oh yes, already did that... right now you have the blessings of many helpless students LOL
@joesminis 5 years ago
At the 10 minute mark and I just wanted to say that your explanations are clicking left and right with me, thank you!!!!
@robertdraxel7175 5 years ago
Most useful video on the internet for a total beginner, for anyone new to AI. Thanks.
@ankitds1369 5 years ago
In the output after training, you can use this to round off the decimals: print(np.round(outputs, 1))
@abdechafineji8782 5 years ago
The best explanation you'll get of creating a neural network from scratch.
@JonasBostoen 6 years ago
Coding starts at 2:30
@ChillGuyYoutube 4 years ago
Polycode ping your comment so others will see it!
@du42bz 4 years ago
@@ChillGuyYoutube maybe his firewall blocks ICMP packets
@rr.studios 3 years ago
@@du42bz I read that as "pimp packets"
@paulschmidt8742 5 years ago
Bro, it was much easier than I thought. Thx for explaining.
@stevesajeev6477 3 years ago
Wow... the perfect tutorial. I have been searching the internet for a tutorial on how to make neural networks from scratch, and now I've got it. This is so cool... very detailed explanation...
@EricCanton 5 years ago
Just a note on sigmoid_derivative, for myself as much as anyone else. Since you're feeding the output of sigmoid into sigmoid_derivative, he's using the fact that sigmoid satisfies the differential equation y'(x) = y * (1 - y), so we can compute the derivative sigmoid'(x) by plugging sigmoid(x) into [y -> y(1-y)]. That's very clever!
@victoryfirst06 a year ago
But you should run the outputs through the sigmoid derivative, right? And the outputs are already sigmoided by default, so shouldn't you end up applying the sigmoid twice?
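A small sketch of the point in this thread: the sigmoid_derivative helper expects a value that has already been passed through sigmoid, because d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)); the sigmoid is not applied twice. Assuming NumPy:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # s is assumed to be sigmoid(x) already
    return s * (1 - s)

x = 0.5
s = sigmoid(x)
analytical = np.exp(-x) / (1 + np.exp(-x)) ** 2  # exact derivative at x
print(sigmoid_derivative(s), analytical)  # both print ~0.2350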
@jeffwads 4 years ago
It helps to have someone who actually knows how to break a "problem" down to its bare essentials. Excellent work.
@GabrielCarvv 4 years ago
Excellent picture
@povmaster235 3 years ago
At last... the video that doesn't just explain stuff, but actually tells you what to do too!
@Oleg-kk6xv 5 years ago
Thank you very much. I constantly see these videos about the theory of machine learning and AI, but I had never found an in-depth, start-from-scratch tutorial with no libraries that explains everything along the way. Thank you!
@MsRAJDIP 5 years ago
So far the best, simplest and most practical tutorial I've found. You cleared all my doubts, and a little background in Python helped me a lot.
@rahulaga a year ago
This is by far the best explanation. I guess by keeping the complexity level of the chosen example pretty low, you landed the message perfectly, thanks!!
@notyourtypicalanime7475 3 years ago
This is what I'm looking for: how to train your datasets by adjusting weights. Thank you so much!
@sreedeepsreedeep2260 5 years ago
Best tutorial on neural networks I have seen till now... thanks buddy 😘
@k.chriscaldwell4141 5 years ago
Superb! Using the seeded weights so that you and the viewer get the same results was a brilliant touch. Helps the viewer know if he miscoded or not. Thanks.
@samayvarjangbhay8987 5 years ago
finally a properly structured tutorial
@0siiris 5 years ago
Nice profile pic 😂
@novi0 a year ago
2 minutes in and I already have a better understanding than 2 semesters' worth of lectures
@ciencialmente9969 4 years ago
1:39 "so we need a little meth"
@Loading-tr7yv 4 years ago
I think we all do
@astrainvictum9638 4 years ago
Adderall is good for that
@deekshithtirumala6474 3 years ago
It's math LoL 😛
@coleboothman1158 5 years ago
Hey dude, just saw this video from your post on /r/programming. This video is awesome! You're great at explaining everything. Neural nets can sometimes be confusing, but this makes a lot of sense to me. Thanks so much!!
@botancitil92 2 years ago
I have been looking for a toy example of neural networks, and thanks to your video I got to see one. Your video is very concise. Thank you. Also, thank you for sharing your Python code.
@akmaleache4735 6 years ago
I watched a lot of ANN videos on YouTube, and all of them were missing something that I wasn't getting. But thanks to you I got what I need, especially the explanation of how it works. Thank you again
@JonasBostoen 6 years ago
Akmal Eache thanks man
@timothec.8216 5 years ago
Thanks a lot. This is much more comprehensible than everything else I have watched and read
@mwont 5 years ago
Just a note: sigmoid_derivative is based on the exact analytical formula for the sigmoid derivative.
@sonic597s 4 years ago
thanks so much for this, I was really confused during that bit!
@pluronic123 4 years ago
@@sonic597s don't get it. He still uses x(1-x), which has nothing to do with sigmoid; it is just an approximation to the shape of the curve (signs are opposite)
@sonic597s 4 years ago
@@pluronic123 a derivative finds the slope of the line at some given point. The sigmoid derivative being the formula x(1-x) (where x is the sigmoid output) means that if you plug in the sigmoid of some value z as x, you get the slope of the sigmoid function at that value z
@pluronic123 4 years ago
@@sonic597s thanks precious internet dude
@REVscape95 6 years ago
Waiting for the next video, this type of explanation really helps
@JonasBostoen 6 years ago
I've uploaded it!
@KomputasiStatistik 4 years ago
The best hands-on neural network tutorial
@harlongbitimung4108 5 years ago
This video has taught me more than anything else about ANNs.
@ogregolabo 5 years ago
Thanks for a great video! Possible code to find the output for [1,0,0]:
p_in = np.array([1,0,0])
p_out = sigmoid(np.dot(p_in, synaptic_weights))
print("Predicted Output After Training:")
print(np.round(p_out))
=> Predicted Output After Training: [1.]
@chessprogramming591 4 years ago
Man, this was so to the point! Thanks for your efforts. Best NN basics tutorial I've found so far! Very, very useful!
@StreetArtist360 3 years ago
Simple, clear and straight to the point. Great job!!!
@aaronisinjapan 5 years ago
Wow, I’ve been looking for a tutorial just like this for a long time! Subscribed! Please keep making videos!!
@drakemeyers8746 5 years ago
So I tweaked the training outputs to 1,1,1,0 with iterations in range(100,000), and the computer gave me a perfect answer of 1 for the third output. The other outputs were close to the true answers, but I didn't think the computer could give a 100% true answer. I guess I'm confused that it didn't take that many training loops to give that answer. Btw great video, finally got me to get the computer out and start!
@Democracy_Manifest a year ago
This video deserves an award
@deanresin4804 5 years ago
This was such a great tutorial. Very clear, concise and well paced.
@fiveoneecho 5 years ago
Great tutorial, but I might have used a different approximation for d-sigmoid. I'm not sure where you got x(1-x) from as an approximation; it does not share a derivative with d-sigmoid and the vertex is off in space. I'm not sure if it is a standard to use and I'm just misunderstanding (I'm watching this tutorial to learn, after all), but I did a quick Taylor polynomial approximation and got the function: d-sigmoid ~= (2 - x^2) / 8. This won't work very well for things not centered at x = 0. This is about the same in terms of typing effort and computer processing, but a little more accurate. It is also based around x = 0 so it won't be biased towards one outcome (unless you built a weight into your function, in which case it makes a lot of sense). You can continue on to the 4th derivative in the series and add a third term which doesn't factor as nicely but is extremely accurate (+/- 0.001) on the domain -1
@BeSharpInCSharp 4 years ago
Lots of people can code; only a few can teach. Well done
@volador2828 4 years ago
Nice work! Finally found someone that can teach in a way I can understand. I subscribed and look forward to watching all your videos!
@karim741 5 years ago
Thanks for the video. I tried to follow this, but I see the solution can be read another way in binary logic: the first column is multiplied by the sum of the two other columns, so it isn't only the first column that decides the output, the others do as well. If we take the table at 0:20:
Example 1: 0x(0+1)=0
Example 2: 1x(1+1)=1
Example 3: 1x(0+1)=1
Example 4: 0x(1+1)=0
New situation: 1x(0+0)=0
@computerguy7451 3 months ago
Before, I slightly understood how neural networks work; now I understand how they work slightly better than before.
@aizej9896 4 years ago
thx for the tutorial, gave the neural network my own training data and it worked great!
@industrialdonut7681 4 years ago
15 minute video... takes me 2 hours to get through XD
@blubaylon a month ago
This is such a good tutorial!!! I finally understand how these things are actually coded!
@critterpower 5 years ago
Great tutorial, better than the usual "Just use this library...."
@landaravi 4 years ago
This is the tutorial I was actually searching for to understand neural networks... Thanks a lot...
@progmaster15 4 years ago
Dude, this video was really helpful! Thank you for explaining the basics of neural networks! :D
@quasistarsupernova 2 years ago
One of the best coding videos!
@ShradhanandVerma 4 months ago
THANKS FOR A VERY SIMPLE WAY TO EXPLAIN... FINALLY UNDERSTOOD.
@warrenkuah4314 3 years ago
Incredible! I think this is the first video that has helped me understand the formulas behind a neural network! However, I was wondering how you would implement biases in the actual code and in the backpropagation steps and formula?
@title601a 5 years ago
NICE!!!!! Finally, I can understand what a NN and backpropagation are. Simple and easy to understand. Thanks a lot to Polycode :)
@SureshSingh-en5uj 4 years ago
FINALLY!!.... I have been looking for a tutorial like this that teaches from scratch... Very good of you to do so... Keep it up bro... Make more videos like this... BTW I am new to your channel, just subscribed
@travisjol a year ago
Finally a video I can understand! Thank you
@Pancake3000 4 years ago
This is the thing that finally helped me understand! Never stop making these great vids!
@MCLooyverse 4 years ago
If Φ(x) = 1 / (1 + e^(-x)), then Φ'(x) = e^(-x) / (1 + e^(-x))^2, not x(1 - x). I'm curious about your Atom setup. Are the text overview on the side and the code suggestions hidden in Atom somewhere, or are they plugins?
@gamescript6449 2 years ago
huh
@reddinghiphop1 4 years ago
This video is 100% gold, thank you!
@alidakhil3554 4 years ago
That is the best empirical lesson on basic NNs
@madanvishal1 5 years ago
Excellent explanation, making things crisp and clear
@elephant1989811 5 years ago
What an excellent explanation of a complex subject! Please keep up the videos.
@traeht 2 years ago
Thank you for a very useful insight into what is behind the neural network. At 10:00: (the derivative of the sigmoid function) = (sigmoid function) * (1 - sigmoid function), not x(1-x).
@MrFrostsonic 5 years ago
In line 16, why have you multiplied the random weights by 2 and then subtracted 1? Great video... very helpful... Thank you very much.
@JonasBostoen 5 years ago
np.random.random returns floating point values between 0 and 1, but since we need values between -1 and 1, this is the way to do it.
@nurhaida1983 5 years ago
@@JonasBostoen thank you for this clarification. I was lost at this line but luckily stumbled onto this comment. Thank you very much! Cheers!
@BiCool03 4 years ago
@@JonasBostoen I'm very late to the party, but since we need a random number between -1 and 1, wouldn't it be better to add two random numbers and then subtract 1, or does it matter?
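A quick sketch of the difference, assuming NumPy: 2 * u - 1 keeps the samples uniformly spread over [-1, 1), while adding two uniform samples and subtracting 1 gives a triangle-shaped distribution that clusters around 0, so the two are not equivalent even though both cover the same range.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

uniform = 2 * rng.random(n) - 1                   # as in the video
triangular = rng.random(n) + rng.random(n) - 1    # sum of two uniforms, clusters near 0

# both are zero-mean, but the spreads differ
print(uniform.std())     # about 0.577
print(triangular.std())  # about 0.408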
@dridihamza7157 4 years ago
This is one of the best yet simplest explanations. Keep it up
@jefersonferri 2 years ago
You did a great job, you should make more videos. Maybe explaining how to make a more complex neural network.
@thegoonist 5 years ago
0:38 the rule could also be that the first and third inputs have to be 1, and not just the first one.
@vikrantrangnekar4678 4 years ago
Precisely, I thought the same too
@TheLolfaceftwOfficial 4 years ago
I have no idea what I’m doing.
@yasirfaizahmed2003 4 years ago
Very underrated channel..
@nukzzz5652 4 years ago
There is something I'm not understanding: when it's time to change the weights, you're supposed to multiply the input by the adjustment and add it to the weights, right? Doesn't that mean that if the input is 0 then the weights won't change at all? I noticed this when I tried different inputs and outputs. Your example works fine, but when I tried {0,0,0},{0,1,0},{0,1,1},{0,0,1} as inputs and {0,0,0,0} for outputs it was a mess, and no matter how many tests I did it couldn't figure out the correct answer
@sonic597s 4 years ago
It does; this is a mistake in the code and can be fixed if you add a learning rate variable to multiply the adjustments by, rather than using the training inputs.
@sonic597s 4 years ago
@@havoc3135 instead of dot-producting the (transposed) training inputs with the adjustments, multiply the adjustments by some scalar, so you can scale your adjustments manually. Hope this helps
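A minimal sketch of the learning-rate idea from this thread, keeping the gradient term from the video (which includes the transposed inputs) but scaling the update by a separate factor. The training set here is assumed to be the one used in the video ([0,0,1]->0, [1,1,1]->1, [1,0,1]->1, [0,1,1]->0), and the variant shown is one possibility, not the exact code from the video:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    return s * (1 - s)  # expects s = sigmoid(x)

training_inputs = np.array([[0, 0, 1],
                            [1, 1, 1],
                            [1, 0, 1],
                            [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1
learning_rate = 0.5  # scales every weight update

for _ in range(20000):
    outputs = sigmoid(np.dot(training_inputs, synaptic_weights))
    error = training_outputs - outputs
    adjustments = np.dot(training_inputs.T, error * sigmoid_derivative(outputs))
    synaptic_weights += learning_rate * adjustments

print(outputs)  # values close to [0, 1, 1, 0] after training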
@shohelmojumder2329 4 years ago
Such a great way to teach beginners
@utkarshankit 5 years ago
First time I understood backpropagation, thanks to your video.
@marcusaureliusregulus2833 4 years ago
Output = array[1[1]].value Lol just kidding. This was a great video and I understood a ton
@chandlerlabs2478 2 years ago
Completely new to this and you made it very easy to understand. Thank you and good job!
@diljithpk1615 2 years ago
Nice presentation. Made it feel very simple
@nooraalameri6938 4 years ago
Excellent explanation!!!!!! Thank you very much
@MikhailBortsov 3 years ago
Thanks for thinking about the equality of random weight for us.
@xddddddize 4 years ago
For this simple problem backpropagation is not needed. The gradient formula can be computed analytically and would reduce the training iterations a lot. (I achieved high confidence with 500 iterations only)
@tommygun296 4 years ago
Very clean! VERY NICE! 🙏😍 Great video! 😊😊😊 thank you
@trianglesupreme 5 years ago
At 0:40, the output depends on both the first and last inputs, not only the first. If I label the inputs a, b, c from left to right, then according to the 4-row truth table, the output is O = abc + ab'c = ac(b + b') = ac. So the NN output for input 100 should be 0.
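A quick check of the ambiguity raised in this comment, assuming the training set used in the video ([0,0,1]->0, [1,1,1]->1, [1,0,1]->1, [0,1,1]->0). Both "output = first input" and "output = first AND third input" fit all four training rows; they only disagree on the new input [1,0,0]:

training_inputs = [(0, 0, 1), (1, 1, 1), (1, 0, 1), (0, 1, 1)]
training_outputs = [0, 1, 1, 0]

rule_first = lambda a, b, c: a       # output = first input
rule_ac = lambda a, b, c: a & c      # output = first AND third input

# both rules reproduce every training row (the third column is always 1)
for (a, b, c), y in zip(training_inputs, training_outputs):
    assert rule_first(a, b, c) == y and rule_ac(a, b, c) == y

# they only disagree on the new situation [1, 0, 0]
print(rule_first(1, 0, 0), rule_ac(1, 0, 0))  # prints: 1 0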
@lifeisstr4nge 2 years ago
50 seconds in - already more clear than most """""explainers"""""
@KonradGebura 4 years ago
Thanks, this was so helpful. It really cleared up a lot of my questions about the topics other videos said "let's not talk about that yet"... Thanks again, these videos are super helpful, keep up the amazing work
@SuryaPrakashVIT 4 years ago
Wonderful video, this will definitely turn my project upside down. Thank You so much!!! :)
@niyamagaming8187 4 years ago
Thanks, now I actually get how it works.
@prasanjitrath281 5 years ago
Your video is a life saver, thanks! Hope you make more such videos!
@Rekefa 5 years ago
What a great video! Keep up the good work, thanks for sharing your knowledge
@ujjwalchetan4907 3 years ago
Good explanation. Valuable content.
@Adam-ze3pr 3 years ago
Hi, thank you, this is very easy to grasp for a newbie like me. Simple and clear. Keep going 👍
@imdadood5705 3 years ago
Dude!!! This is enlightening! Thanks
@gabrilrh 5 years ago
I need more, that's awesome
@OMAAKAAKORJOHN 7 months ago
You are so wonderful, I understood thanks to your basic and easy-to-learn method, thanks
Next video: Create a Simple Neural Network in Python from Scratch - Part 2 (11:43)