Part 3 will be on backpropagation. I had originally planned to include it here, but the more I wanted to dig into a proper walk-through for what it's really doing, the more deserving it became of its own video. Stay tuned!
@HeyItsSahilSoni7 жыл бұрын
Can you provide a link to the training set? I'm quite new and I'm trying to learn this "Hello World" of NNs.
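The set in question is MNIST, which later comments in this thread also name; most ML libraries can fetch it directly. If you grab the raw files instead, they use the simple big-endian IDX format. A minimal sketch of reading an image-file header (the function name and the synthetic bytes are just for illustration):

```python
import struct

def parse_idx_image_header(data: bytes):
    """Parse the 16-byte header of an MNIST IDX image file.

    Layout (big-endian 32-bit ints): magic (2051 for images),
    number of images, rows, columns. Raw pixel bytes follow.
    """
    magic, n_images, n_rows, n_cols = struct.unpack(">IIII", data[:16])
    if magic != 2051:
        raise ValueError(f"not an IDX image file (magic={magic})")
    return n_images, n_rows, n_cols

# Synthetic header for a file that would hold 60000 28x28 images:
header = struct.pack(">IIII", 2051, 60000, 28, 28)
print(parse_idx_image_header(header))  # (60000, 28, 28)
```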
@pinguin10097 жыл бұрын
Did you consider doing a part about phase-functioned neural networks? That would be awesome!
@nyroysa7 жыл бұрын
As the part progresses, we're getting closer to seeing that "lena.jpg" picture
@akhileshgangwar3947 жыл бұрын
You are doing a very good job; there's a lot of hard work behind this video. I salute your hard work, thanks!
@mynameisZhenyaArt_7 жыл бұрын
So have you decided to do more of these videos? There is a line about CNNs and LSTMs in the video series...
@snookerbg7 жыл бұрын
One of youtube's highest quality content channels! Chapeau
@Dom-nn1kg7 жыл бұрын
Kosio Varbenov +
@yoavtamir77076 жыл бұрын
True!
@JD-jl4yy6 жыл бұрын
'chapeau'
@achillesarmstrong96396 жыл бұрын
agree
@tubbysza6 жыл бұрын
Why did you write "chapeau"? That's "hat" in French. What does a hat do in that sentence????? Be honest tho
@DubstepCherry4 жыл бұрын
I'm an IT student, and we have an assignment on exactly this topic. We even have to use the MNIST data set. I have to say, this is absolutely lifesaving and I cannot thank you enough, Grant. What you do here is something that only a handful of people on this planet can do: explain and visualize rather complicated topics beautifully and simply. So from me and A LOT of students all around the globe, thank you so so much
@vsiegel3 жыл бұрын
Yes, it is just extremely good, in an objective way. He is brilliant at it, and spends a lot of time on each video. If there is an explanation of something by 3blue1brown, you will not find anything explaining it nearly as well.
@johndough5102 жыл бұрын
@@vsiegel bro you guys are so much smarter than i am im jealous
@vsiegel2 жыл бұрын
@@johndough510 If you are thinking about how smart you are, you are probably smarter than you think. No worries.
@johndough5102 жыл бұрын
@@vsiegel thanks for being so cool about it man, hope you have a good one
@alfredoaguilar20762 жыл бұрын
@@johndough510 I'm kill
@plekkchand4 жыл бұрын
Unlike most teachers of subjects like this, this gentleman seems to be genuinely concerned that his audience understands him, and he makes a concerted and highly successful effort to convey the ideas in a cogent, digestible and stimulating form.
@hamidbluri31352 жыл бұрын
TOTALLY agreed
@redflipper992 Жыл бұрын
concerted with whom? I don't think you understand how to use that word.
@werwinn Жыл бұрын
he is a true professor!
@macchiato_1881 Жыл бұрын
@@redflipper992 I don't think you understand what concerted means. Stop trying to act smart and think you're better than everyone here. Be humble. You are irrelevant in the big picture.
@JeffCaplan313 Жыл бұрын
@@redflipper992 I read that as "concerned"
@melkerper5 жыл бұрын
Disappointed you did not animate a 13000-dimensional graph. Would make things easier
@havewissmart96025 жыл бұрын
No.... No it would not....
@mrwalter10495 жыл бұрын
A 2-dimensional projection of a 13000-dimensional graph would probably look like a pile of garbage.
@cchulinn5 жыл бұрын
If 3Blue1Brown cannot animate a 13000-dimensional graph, then no one can.
@tehbonehead4 жыл бұрын
@@mrwalter1049 You're not thinking fourth-dimensionally!!
@mrwalter10494 жыл бұрын
@@tehbonehead No-one is. We're trapped in three dimensions. That's why you could never imagine what a 4-dimensional cube looks like. Making a 4-dimensional projection of a 13000-dimensional object isn't significantly better than 3 dimensions. If you meant to be humorous I hope someone gets a chuckle, because I didn't. Then your effort won't be in vain. Have a nice day 🙂
@Shrooblord7 жыл бұрын
I'm only 12 minutes into this video right now, but I just wanted to say how much I appreciate the time and spacing you give to explaining a concept. You add pauses, you repeat things with slightly different wording, and you give examples and zoom in and out, linking to relevant thought processes that might help trigger an "a-ha" moment in the viewer. Many of these "hooks" actually make me understand concepts I've had trouble grasping in Maths, all because of your videos and the way you choose to explain things. So thanks! You're helping me a lot to become a smarter person. :)
@Dom-nn1kg7 жыл бұрын
Shrooblord +
@luke75036 жыл бұрын
yes.
@Appscaptain6 жыл бұрын
Totally agree!
@atulct5 жыл бұрын
Absolutely agree
@CaerelsJan5 жыл бұрын
Couldn't agree more
@colonelmustard7078 Жыл бұрын
Not only are the videos themselves on this channel great, but the lists of supporting materials are amazing too! They drive me down a breathtaking rabbit hole every time! Thank you!
@bikkikumarsha6 жыл бұрын
You are changing the world, shaping humanity. I wish you and your team a happy and peaceful life. This is a noble profession; god bless you guys.
@jasonzhang65349 ай бұрын
My professor explained this in 3 lectures over about 6-7 hours. 3B1B explained it in 30 minutes, and it is much clearer. I can now visualize and understand the what/why/how behind the basic deep learning algorithms. Really appreciate it!!!
@bradleyhill54937 ай бұрын
Same!
@Nyhilo7 жыл бұрын
After watching your first video, I ended up drawing a "mock" neural network up on paper that would work on a 3x3 grid (after all, what else are you supposed to do during a boring lecture class?). It was supposed to recognize boxes, x's, sevens, simple shapes, and I defined the 7 or so neurons that I thought it might need by hand. I did all the weighted sums and sigmoid functions on paper with a calculator in hand. It took maybe an hour and a half to get everything straight, but once I did, it worked. It guessed with fairly good accuracy that the little seven I "inputted" was a little seven. All that excitement because of your video.

Later that evening and the next one, I tried to program the same function taking PNGs as inputs and definitions of the neurons, and honestly it was only a little more rewarding. But now that I see what the hidden neurons *actually* look like, I only want to learn so much more. I expected the patterns to be messy, but I was really surprised to see that it really does almost look like just noise.

Thank you for making these videos. I find myself suddenly motivated to go back to calculus class tomorrow and continue our lesson on gradients. There's just so much out there to learn, and it's educators like you that are making it easier for curious individuals like me to get there.
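The by-hand procedure described above (weighted sums, then a sigmoid, layer by layer) maps directly to a few lines of code. This is a toy sketch with arbitrary random weights, not the commenter's hand-picked ones:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1], dtype=float)  # a tiny "7" on a 3x3 grid

W1 = rng.normal(size=(7, 9))   # 7 hidden neurons, 9 input pixels
b1 = rng.normal(size=7)
W2 = rng.normal(size=(4, 7))   # 4 output classes: box, x, seven, other
b2 = rng.normal(size=4)

hidden = sigmoid(W1 @ x + b1)  # weighted sums + sigmoid, as done on paper
output = sigmoid(W2 @ hidden + b2)
print(output)                  # 4 activations, each between 0 and 1
```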
@3blue1brown7 жыл бұрын
That's so cool, thanks for sharing! I didn't expect anyone to actually go an play with it by hand, but simplifying down to a 3x3 grid seems really smart. Stay curious!
@Dom-nn1kg7 жыл бұрын
Nyhilo +
@Dom-nn1kg7 жыл бұрын
3Blue1Brown +
@jayeshsawant67346 жыл бұрын
Were you able to do all of that by watching this video series alone? Could you please add the other resources you referred to? Thanks!
@michaelhesterberg7026 жыл бұрын
.....go anD play... AND AN+D
@kraneclaims4 жыл бұрын
I just sat through a 3-day ML accelerator class, and your series did a far better job of explaining it with four twenty-minute videos. Well done mate. Really appreciate it. Thank you
@NoobJang Жыл бұрын
This youtuber is the best in maths and engineering in general. I have never been so astounded by how easy learning machine learning can be, without having to take in a bunch of complex topics that don't add to the discussion. Most courses try to make you understand various complex topics, and by the time you've finished, you'll have mostly forgotten about machine learning. Why not just explain the catch for each concept, then let us learn it in depth afterwards? Channels like this, explaining the concepts with both ease of learning and depth, are the best.
@dsmogor6 ай бұрын
I think what sets this material apart from the competition is the author's intuition for the focal points where the audience might lose the plot. He then takes a patient, systematic turn to reiterate what has been learned so far, reinforcing the basics to decrease the cognitive leap needed to grasp the next step. In my experience this ability is pretty unique.
@seC00kiel0rd7 жыл бұрын
My math career is over. Once I learned about gradient descent, it was all downhill from there.
@rlf41607 жыл бұрын
I had a similar fate, except mine went negatively uphill.
@yepyep2666 жыл бұрын
just remember there are people in an even lower minimum than you are.
@jomen1126 жыл бұрын
Yea, but making random choices makes you eventually reach the bottom.
@thetinfoiltricorn77976 жыл бұрын
It's all planar vectors from here.
@nateschultz89735 жыл бұрын
You just need to take a few steps back and turn your life around.
@hikaruyoroi7 жыл бұрын
I love you so much. I'm taking multivariate calculus and I'm doing some neural network work right now, and none of my teachers have the passion nor the capability to teach as well as you. You help me keep my passion for learning alive
@ssa31016 жыл бұрын
Incompetence galore.
@JockyJazz3 жыл бұрын
3:38 you missed the chance of using the meme *"AI: I've found an output, but at what cost?"*
@hangilkim2455 жыл бұрын
"But we can do better! Growth mindset!" at 5:18 .... a wholesome intellectual i love to see it
@souvikroy75704 жыл бұрын
Hands down, I have never seen anyone explain mathematics so beautifully the way he does. Kudos!
@welcome2bangkok-d1x Жыл бұрын
i have no words to describe how thankful i am. thank you so much for such great content.
@Skydmig7 жыл бұрын
That end comment with Lisha Li really points out how important it is to put a lot of effort into gathering and creating good and structured data sets. I know it's cliché to state "garbage in, garbage out", but these findings put very precise context and weight to this particular issue.
@atlas74257 жыл бұрын
Haha, "weight".....get it?
@theespatier44566 жыл бұрын
StiffWood True. This also becomes ethically important in medical applications of AI, where poor input can create racist AI and the like.
@cody._.--._.--.5 жыл бұрын
I wish someone had introduced this to me at a young age back in the 90s. I had no idea neural networks have existed for so long
@lopezb4 жыл бұрын
Now it's easier to explain. He couldn't have made a video like this back then, both because YouTube didn't exist and because all the relevant stuff would be in technical papers...
@Djorgal4 жыл бұрын
@@lopezb Also, it was a really niche field that didn't show that much promise.
@damienivan89464 жыл бұрын
Also, from my understanding, modern neural networks are very different from the ones in the 90s
@bubblelyte4014 жыл бұрын
It's a college graduate course.
@georgalem33104 жыл бұрын
In the 90s NN fell into disfavor.
@imad_uddin3 жыл бұрын
Can't believe you explained this so easily. I thought it would take me ages to wrap my head around what neural networks basically are. This is a truly amazing explanation!
@obsidianblade42286 жыл бұрын
Did anybody else feel bad for the network after he called the output utter trash?😢
@sarahmchugh41695 жыл бұрын
I know, especially with those sad computer eyes. Tragic
@MarioRodriguez-or9fn4 жыл бұрын
Yes, especially when he called it a bad computer :(
@DavidLee89814 жыл бұрын
we are all utter trash for future robots
@theshermantanker70434 жыл бұрын
Bruh there's literally Reinforcement Learning where the Network is tortured by the researchers when it gets a wrong answer and the torture continues until it gets the right answer lol
@David5005ful4 жыл бұрын
Lmao.
@darshita12703 жыл бұрын
Math courses in my college are basically trash compared to your videos; finally I understand how math is applied in computer science. Thank you so much for teaching in such an illustrative way.
@matteo786123 күн бұрын
I'm a grad student in physics and I wanted to thank you. It is insane to find such good videos on such advanced subjects!
@andrasiani7 жыл бұрын
how can anyone dislike these videos? very detailed, accurate explanations and cool animations. Keep up the good work!!
@syedabdulsalam46595 жыл бұрын
stupids are everywhere.
@pseudo_goose4 жыл бұрын
Some thoughts on the results:

1. 14:01 The weights for the first layer _seem_ to be meaningless patterns when viewed individually, but combined they do encode some kind of sophisticated pattern detection. That particular pattern detection isn't uniquely specified or constrained by this particular set of weights on the first layer; rather, there are infinitely many ways the same scheme can be encoded in the weights of this single layer. These other solutions can be thought of as the set of matrices that are row-equivalent to the 16x784 matrix whose rows are the weights of the neurons on this layer. You can rewrite each row as a linear combination of the current rows, while possibly still preserving the behavior of the whole NN by performing a related transformation to the weights of the next layer. In this way, you can rewrite the patterns of the first layer to try to find an arrangement that tells you something about the reasoning. Row reduction in particular might produce interesting results!

2. 15:10 I think I understand why your NN produces a confident result: it was never trained to understand what a number _doesn't_ look like. All of the training data, from what I can tell, is numbers associated with 100%-confident outputs. You'd want to train it on illegible handwriting, noise, whatever you expect to feed it later, with a result vector that can be interpreted as 0% confidence, by having small equal weights, having all weights zero, or maybe an additional neuron that the NN uses to report "no number".
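Point 1 can be checked numerically in a simplified setting. With the sigmoid between layers dropped (the nonlinearity is exactly what makes the equivalence only "possibly" hold), any invertible rewriting A of the first layer's rows is undone by a compensating transformation on the next layer, leaving the network's function unchanged. A sketch with random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 784))   # first-layer weights (16 neurons, 784 pixels)
W2 = rng.normal(size=(10, 16))    # second-layer weights
x = rng.normal(size=784)          # an arbitrary input image

A = rng.normal(size=(16, 16))     # an (almost surely) invertible mix of the 16 rows
W1_new = A @ W1                   # rewrite each row as a combination of rows
W2_new = W2 @ np.linalg.inv(A)    # compensating transformation on the next layer

y_old = W2 @ (W1 @ x)
y_new = W2_new @ (W1_new @ x)
assert np.allclose(y_old, y_new)  # identical outputs for the linear network
```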
@watchm4ker2 жыл бұрын
2 is a painfully easy mistake to make, because avoiding it requires the human assembling the training data to think outside the box.
@josboersema1352 Жыл бұрын
Quite hard to read your comment, but it seems we have the same idea: the neural network _is_ detecting smaller elements like "edges and loops" (as the video's author puts it), assuming those pictures at 14:01 are of the actual results. The next layer then starts combining these elements, and if you stare at it long enough you can almost start guessing what it might be doing, like: row 1 column 2 = strong + row 3 column 1 = strong + row 3 column 4 = strong + row 2 column 4 = weak + row 1 column 4 = weak might be heading toward an 8, and depending on some other combination of pattern strengths it might be a 6 or 9 if certain patterns signal an absence of input in the upper-right or lower-left. This is almost certainly not accurate as an example, but it seems to be the theme of how it works. So the statement at 16:05 ("... picking up on edges and ... not at all what the network is doing") seems wrong.

P.S. If the first part above is true, then the neural network might be capable of drawing a 5 (15:39). You just have to extract that answer in the way it is stored, which is a bit more involved than following the network's normal operation. If you look at which combination of first-layer patterns, in which strengths, leads to a number (5, for example), you could superimpose those patterns onto each other, and that would be what this neural network thinks that number looks like. It shouldn't be too hard to write a function over the already-trained network to draw this out.
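The superposition idea in the P.S. has a closed form in the same linear simplification (sigmoids ignored, random stand-in weights here): the network collapses to the single matrix product W2 @ W1, and row 5 of that product is exactly the superimposed pixel pattern whose overlap with the input drives the "5" output:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(16, 784))   # first layer (stand-in for trained weights)
W2 = rng.normal(size=(10, 16))    # second layer
x = rng.normal(size=784)          # an arbitrary input image

template_5 = (W2 @ W1)[5]         # superposition of first-layer patterns,
                                  # weighted by how strongly each feeds "5"
assert np.isclose(template_5 @ x, (W2 @ (W1 @ x))[5])
# Reshaped to 28x28, template_5 is the linear net's "idea" of a 5.
```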
@DavidMauas-j6t14 күн бұрын
you know something? your channel is the best math teaching I have ever laid eyes upon. it is brilliant. beyond amazing. everything is meticulously choreographed to perfection. I WISH I could have learned math from someone like you when I was younger, and I am so happy I get to have the occasional brush with your videos. they are sublime.
@Shubhi021 Жыл бұрын
This video is truly a work of art. The animations are mesmerizing. Thank you for all your work, Grant!
@sweepy845 жыл бұрын
You sir deserve a Nobel, or an Oscar... what an incredibly effective method of teaching. Thank you so very much!!! "NO! BAD COMPUTER!" made me crack up! lol
@shakhaoathossain50323 жыл бұрын
A Ballon d'Or too
@sohambhattacharjee9512 жыл бұрын
@@shakhaoathossain5032 XD good one.
@Phoenix-nh9kt2 жыл бұрын
@@shakhaoathossain5032 add a grammy in there too hahaha
@ChemEDan2 жыл бұрын
The fact it was recorded digitally meant he said that to a real computer.😭 AND SO DID YOU!!! 😠
@mashab91294 ай бұрын
After looking through many Udemy, O'Reilly and other YouTube videos, I finally found this one: beginner-friendly but at a profound enough level, explained in a comprehensible way that doesn't lose you in the middle by jumping from ABC to a hard concept. This channel is a gem. Thank you!
@GAment_116 жыл бұрын
When I watch your videos, all I want to do is keep going. Thanks for motivating me, as well as others, with your amazing content. I really appreciate it.
@bishalthapaliya40694 жыл бұрын
Probably, even a 5 year old would master deep learning when taught in this way. What a video man ! Awesomeeeeeeeee
@Heisenberg3552 жыл бұрын
This man is a living legend. I really sincerely believe he's one of the best "explainers" for many complex mathematical topics. I found your channel because of linear algebra, and now I'm relieved whenever I search for a topic and see one of your videos. You truly are the master of your league
@superj1e2z67 жыл бұрын
Watching 3b1b
Step 3b. Drop Everything
Step 1b. Watch religiously.
@jonasvanderschaaf7 жыл бұрын
oh the accuracy of this comment
@spiderforrest78167 жыл бұрын
My god I relate
@Petch857 жыл бұрын
For me it is: step 3b: make sure you are ready, you need to be 100% focused; step 1b: watch it critically, and be sure not to strengthen your misbeliefs. If it seems simple and obvious, I am probably misunderstanding it.
@fossilfighters1017 жыл бұрын
+
@Cosine_Wave7 жыл бұрын
counting level: Parker
@tonraqkorr2306 жыл бұрын
We need AI to recognise what the doctors write
@frankchen42293 жыл бұрын
whoever designs the algorithm and engineers the software deserves a nobel peace prize
@flyinglack3 жыл бұрын
@@frankchen4229 LOL
@johnbarbuto53873 жыл бұрын
Who writes any more??? That horse left the barn a long time ago. Besides, we are no longer doctors. Courtesy of insurance companies we are "providers". (The same strategy of devalued identities has long been used by invading armies to anonymize those being conquered, an apropos metaphor.)
@MrWite13 жыл бұрын
@@johnbarbuto5387 why so mad
@centerfield63392 жыл бұрын
@@johnbarbuto5387 not courtesy of insurance companies; courtesy of the fact that healthcare needs to be paid for. State systems are also payer systems.
@gersonribeirogoulart9895 Жыл бұрын
When something is amazing, it looks like your work. Even your background voice is totally understandable, legit and direct.
@musthavechannel52626 жыл бұрын
"I'm more of a multiple choice guy" LOL
@Tri_3st7 жыл бұрын
Hi 3B1B, as a technical physics student who has been interested in this topic for quite a while, and has also enjoyed your content for quite a while, I really want to thank you, not only for going into this topic in particular, but also for educating a relatively large audience with your informative videos and increasing interest in the mathematical sciences for a lot of people, including me, which is pretty important in my opinion! Keep it up!!
@S8EdgyVA2 ай бұрын
I just adore the idea of making a function whose inputs are the parameters of another function, and whose output tells you how close that function's outputs came to the desired ones. It sounds complex, but it's actually both simple AND beautiful once you understand the concept
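That chain of functions can be made concrete with a one-parameter toy model (invented data, not from the video): the model is x -> w*x, the cost maps the parameter w to a single error score over the training set, and gradient descent walks w downhill:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0])     # toy training inputs
ys = np.array([2.0, 4.0, 6.0])     # desired outputs (true rule: y = 2x)

def cost(w):
    """Average squared error of the model x -> w*x over the training set."""
    return np.mean((w * xs - ys) ** 2)

def cost_gradient(w):
    """Derivative of the cost with respect to the parameter w."""
    return np.mean(2 * (w * xs - ys) * xs)

w = 0.0                             # start with a bad parameter
for _ in range(200):
    w -= 0.05 * cost_gradient(w)    # one gradient descent step

print(round(w, 3))                  # prints 2.0, where the cost is minimal
```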
@olesyabondar48264 жыл бұрын
The graphics of this video are absolutely stunning! Thank you for your work ♡
@Iextrimator7 жыл бұрын
Absolutely love your videos! I'm trying to show this video to my friends who don't know English so well, and I decided to make subtitles. Hope you approve them; I really want to spread the word about your work.
@VincentKun2 жыл бұрын
I saw this video when I knew nothing and it gave me lots of intuitions. I'm rewatching it after studying a lot more, and I'm still learning a lot. You're a great teacher
@amagicpotato55117 жыл бұрын
Hi 3b1b i love your vids and they are one of the reasons why I know so much of how the universe works. Your channel inspires me to know more and you show the beauty of all of it. Please dont ever stop making these videos, you are making so many lives greater.
@3blue1brown7 жыл бұрын
Thanks so much Amagic, I'll do my best.
@charliedexter32027 жыл бұрын
Hello....I would like to learn how to make these animations....I teach math, particularly statistics at the graduate level, and I find the way you make numbers illustrate the idea actually helps understand the flow of parameters in question in a much better manner . Do give me certain leads so that I can pick this up. I shouldn't have problems programming once I know which platform to work with
@neerajtiwari53657 жыл бұрын
@Amagic potato, For the world of me, I cannot possibly figure out how watching 3blue1brown's videos helped you attain that enlightenment about How The Universe Works...
@flumsenumse7 жыл бұрын
+charlie dexter He has all of the code for his animations (written in python) here: github.com/3b1b/manim
@SPYTHandle5 жыл бұрын
How confident I feel in my current knowledge of neural networks: 15:41 - *"Uh...I'm really more of a multiple choice kinda guy."*
@natchu963 жыл бұрын
The neural networks themselves generally feel the same, so at least we won't be alone in that sentiment. Assuming thinking rocks and metal count as good companionship at any rate.
@mtrifiro8 ай бұрын
I am so happy I discovered this today. I ignored all (well, most) of the math, and I still came away with a pretty solid understanding of how it works. Your explanations are ridiculously clear; you have a gift.
@skintaker19494 жыл бұрын
So uhhhh, did you just say that this was the "Hello World!" of neural networking.....
@GabrielCarvv4 жыл бұрын
@Winston Mcgee "p r e t t y m u c h i t"
@rudigerbrightheart73044 жыл бұрын
Well, the data set is the hello world, because it is the first image set that people take to test or learn about an algorithm.
@junkailiao4 жыл бұрын
Because you don't need all that knowledge to build a network that can read digits. It's easy with Keras; even my grandmother can do it
@domizianostingi95044 жыл бұрын
Yes it is, because the dataset is veerry clean and CNN through Keras is very easy to implement, though you need to have huge background in math and code-writing (I'm a statistician so I have a little bit of both) :)
@namlehai27374 жыл бұрын
Use a package. People already did the hard stuff, you just have to call their function / use their models
@jabug_11445 жыл бұрын
Once I graduate and start working, I’m gonna send you the money I owe you for watching all these videos. I’m doing BSEE for control systems so hopefully it works out.
@johannes523Ай бұрын
This is, like, literally the most important video on the internet.
@jamesluc0077 жыл бұрын
You explained in less than 4 minutes something that took me several days to understand from other sources. You are awesome!
@bytenommer7 жыл бұрын
Could you please just drop everything else you are doing and do these videos full time for the rest of your life.
@iLoveTurtlesHaha6 жыл бұрын
But his videos are a result of his other interests. XD
@raycharlestothebs5 жыл бұрын
@@iLoveTurtlesHaha Just like you saying 'XD' is......
@thibauldnuyten28915 жыл бұрын
Sadly people still need to work to fricking live.
@sgracem28633 жыл бұрын
@@thibauldnuyten2891 Wouldn't he be rich off these videos though? I mean they all have millions of views and he almost has 4m subs
@AarshWankarIITGN7 ай бұрын
Thanks a lot. At 2:45 in the morning, sitting peacefully in the hostel of my institute, you actually cleared a lot of things up in the first two videos. This is the first time I understood to some extent what gradient descent, weights and cost functions are all about. Looking forward to continuing this journey of learning on your awesome channel 😃
@genoir-itsmusicart91696 жыл бұрын
This is mindblowingly interesting and extremely well explained. Thank you!
@blunderbus26956 жыл бұрын
"It's actually just calculus." "Even worse!" i'm dead
@fitokay5 жыл бұрын
Actually, AI just lie to people of the world
@fitokay5 жыл бұрын
so far
@the.abhiram.r3 жыл бұрын
calculus is the easiest form of math
@ahmedezat13533 жыл бұрын
@@the.abhiram.r I wish you are joking
@imtanuki410610 ай бұрын
Possibly one of the best mini-courses on ML anywhere. Clearly explained concepts, beautiful post-production. kudos
@nourddinesofiir35257 жыл бұрын
Thanks for the part 2, I was waiting for it impatiently.
@danielamurphy8560 Жыл бұрын
I'm doing my Masters in applied Econ right now and we briefly went over Neural Networks in my advanced econometrics class. Some of the terminology was a bit different and I felt like I could understand it decently in office hours with my professor, but this was still a great resource to solidify my understanding of the concept. (Also we looked at the MNIST dataset in class too) :D
@eshanhembrom66339 ай бұрын
As someone who asks why for every statement, I appreciate the way you explain the logic behind everything.
@jacoblund82897 жыл бұрын
I love how he has people like Desmos and Markus Persson supporting him on patreon
@cgmiguel5 жыл бұрын
Your videos, with such wonderful LaTeX animations, are at as high a level as award-winning BBC documentaries. Very impressive, to say the least.
@soliduscode2 жыл бұрын
I agree. I need to learn more about these LaTeX animations
@rob6512 жыл бұрын
I've watched many videos and done some reading on how neural networks work (learn), but I couldn't find a satisfactory explanation until I watched this video. Your examples, analogies, visuals... were just perfect. Thank you so much.
@ajnelson14317 жыл бұрын
3:40 "NO! Bad computer!"
@stydras33807 жыл бұрын
AJ Nelson Bad boy!
@NF307 жыл бұрын
I felt so sorry for the computer...
@johnchessant30127 жыл бұрын
"To say that more mathematically..."
@Shockszzbyyous7 жыл бұрын
i heard eric cartman say it.
@Dom-nn1kg7 жыл бұрын
+
@siddheshmisale39044 жыл бұрын
Would just take a moment here to appreciate the sheer brilliance of Grant on this series. I would not have reached a decent level of NN w/o these explanations and so would so many other people. Single best series on NN / Math out there in general.
@super2662 жыл бұрын
You are the best science teacher I have ever seen. If anyone upstairs is serious about our education system, they should use your videos as a baseline for how to teach properly: you never use a term that wasn't clearly defined prior, you use analogies perfectly, and you tie new technical info back to the original concept, making sense of how the new info fits into the larger picture. If my high-school and college teachers were like you I would have done infinitely better at school.
@shaylempert99946 жыл бұрын
Pause and ponder?! Every 10 seconds I stop for a minute of thinking! And on all of your videos! This time there was one point where I thought for like half an hour.
@minerawesome287 жыл бұрын
I was looking forward to this video all week.
@navidutube5 ай бұрын
This is simply the best channel on YouTube
@shwetamayekar18635 жыл бұрын
Love the eye/ pi animations! :) Gets me smiling amidst all the complexities of Neural Networks 😲
@rubyjohn6 жыл бұрын
BEST VISUAL THERAPY IN MY LIFE
@WiredWizardsRealm-et5pp4 ай бұрын
Man, it feels so good to learn everything in one shot now: neural networks, gradient descent, backpropagation. I used to get frustrated by a lot of challenging concepts because I did not know the maths and the AI terms, but now, after learning it for a year, it feels worth it. Thanks to the 3Blue guy; whatever course he touches is worth all other lectures combined, I can't say it enough. It's just the pure core concepts with animation. Top quality.
@nigeljohnson98207 жыл бұрын
Humans have a habit of seeing images in random data, such as clouds, craters on the Moon or Mars or hearing voices in random radio static. Is this similar to identifying a 5 in a field of random data?
@5up3rp3rs0n6 жыл бұрын
Well, for humans, you see things from a shape or outline that looks like a particular object, kind of like the "see a digit by the loops and lines it has" ideal for this system. So it's not at all the same as picking a number from static and being very confident about it.
@seditt51466 жыл бұрын
But is that what our brain is doing? Is it looking at static, or are our neurons going... ok, straight line... then round edge... another round edge..... hmmm, that looks like the other 5s I've seen... then triggering memory banks to look for other 5s and compare again.
@jomen1126 жыл бұрын
No. As explained in the video, the network is punished (more) for providing multiple answers than for single-output wrong answers. That means a multiple answer does not exist as an option for the trained network, i.e. the set of output patterns it has been trained to respond with does not contain multiple choices. That is to say, the alternative answers "I don't know" or "maybe this or that" do not exist for the network.

Regarding clouds or craters, this is not "random data": the shapes we recognize are real and can be agreed upon to exist. This is not the case with noise, i.e. random data. By definition random data contains no pattern, and that is why noise carries no meaning to our brains. Regarding hearing voices in random static, I would suspect you only hear voices if there is a pattern (signal) of some form which the brain picks up on and tries to make sense of. How prone you are to hear actual voices might depend on how your brain has been trained, i.e. biased, to detect voices (for instance, if you believe one can communicate with ghosts, you might be more prone to hear voices where others hear none). Because in the end, detecting meaning, i.e. labeling stuff, is all about being biased towards a certain interpretation of reality.

So to conclude, the "reality" for the neural network in the video is biased, or limited, towards a single-neuron output, and anything it "perceives" will get a response as such. Human brains, however, are a little more complex and biased differently, i.e. wired up in unique ways, which makes for the diversity in beliefs and reasoning among people.
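The "single-neuron output" point is easy to demonstrate: the read-out is just the brightest of ten output neurons, so even pure static gets labeled as some digit. The weights below are untrained random stand-ins, purely to show the mechanics of the forced answer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(16, 784)), rng.normal(size=16)
W2, b2 = rng.normal(size=(10, 16)), rng.normal(size=10)

noise = rng.random(784)                  # random static, not a digit
out = sigmoid(W2 @ sigmoid(W1 @ noise + b1) + b2)

# The output format forces an answer: whichever of the 10 neurons is
# brightest is read as "the" digit. There is no 11th "not a digit" option.
answer = int(np.argmax(out))
assert 0 <= answer <= 9
```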
@parthshrivastava63256 жыл бұрын
That falls under the imagination bracket; it's more like changing the values of the pixels instead of the weights or biases to get a desired output.
@ottrovgeisha21505 жыл бұрын
@@seditt5146 Not likely. Nobody knows. The brain is truly bizarre, and the connections between cells are actually wired differently. Now ask yourself: how does a brain know it exists, how are feelings developed, etc.? The brain is still a mystery.
@hakimr79865 жыл бұрын
15:19 seems interesting: it's just like how you have to train your own (biological) NN to draw a human face, even though you have seen millions of them
@TrendyContent5 жыл бұрын
Hakim R thats a very good analogy
@sohampatil65393 жыл бұрын
This might be relevant: look up generative adversarial networks
@anjanit2006 Жыл бұрын
Surprised you pulled this off really well. I am 26 years old and working at Google in a 1.3 crore IT job. I am about to be a millionaire, all because of you. Like, seriously, you are the most helpful person in my life.
@Treegrower7 жыл бұрын
@ 3:39 Wow... I didn't realize 3B1B likes to bully neural networks. That was ruthless.
@Brian.0015 жыл бұрын
Yes, it's a jungle in there.
@oskarjung67385 жыл бұрын
@- RedBlazerFlame - ' Oversimplified' reference
@theshermantanker70434 жыл бұрын
There's a training method called Reinforcement Learning where you literally torture the Network when it gets the wrong output lol
@theflaminglionhotlionfox21403 жыл бұрын
Me in part 1: Ah I think I'm starting to understand this whole thing. Me in part 2: Nevermind...
@saicharansigiri29643 жыл бұрын
exactly
@osmanyasar96023 жыл бұрын
Once you learn more math it will be meaningful. I guess if you don't understand this video then something is missing in your calculus and/or linear algebra.
@FivosTheophylactou4 ай бұрын
Rewatch it 3 times. I did
@karthikrajeshwaran19979 ай бұрын
this is outstanding. deserves a nobel prize for the clarity of explanation.
@Maffoo7 жыл бұрын
This series is fantastic and just the right level of being complex but understandable. Thanks!
@geregeorge15896 жыл бұрын
At the 16-minute mark, I got sucker punched. After having gone through this and the previous video on machine learning, just loving how an art student like myself is enjoying math such as this, and feeling like I'm making some progress..... you tell me that this is all stuff that was figured out in the 80s, and I'm like...... Oh Come On! Lol!
@apuapustaja20475 жыл бұрын
Honestly, the 80s is actually very recent compared to other stuff. In math undergrad I was learning concepts from the 1800s lmao
@SimberLayek5 жыл бұрын
@@apuapustaja2047 yup! Math is older than all of us... it's our discoveries that are "new"~
@DiegoGonzalez-vn3qx5 жыл бұрын
Honestly, don’t feel discouraged. General Relativity was formulated almost a century ago, but that doesn’t mean it is easier to understand.
@SimberLayek5 жыл бұрын
@Dark Aether some definitely could say that~
@DiegoGonzalez-vn3qx5 жыл бұрын
@Dark Aether What do you even mean by that? Right now, we are living in a moment in which scientific knowledge is being acquired at the fastest rate we have ever seen. The number of active scientists right now, as you might expect, is the largest in history. Now, if you are talking about "raw" intelligence... well, I'm pretty sure evolving into creatures with noticeably higher intelligence is going to take a long, long, long time.
@somag681010 ай бұрын
"Our growth mindset is reflected when we always ask whether we can do better!" You are always awesome. Thanks for all the informative videos that impart a lot of fundamental knowledge to people like me.
@HaouasLeDocteur7 жыл бұрын
WOO BEEN WAITING FOR THIS
@vivekd2967 жыл бұрын
I found your video on Jacobians on Khan Academy. At first I was like, "I don't know this new person, he's not Sal," and then I read the comments and found out it was you!! It was a pleasant surprise.
@sashimanu4 жыл бұрын
10:10 biological neurons are continuous-valued as well: their firing frequency varies.
@MOHANKUMARAPGPBatch4 жыл бұрын
still, the frequency cannot be a decimal, right? So it's still a discrete input where calculus cannot be applied.....
@nullbeyondo3 жыл бұрын
@@MOHANKUMARAPGPBatch No. Calculus can always be applied, and your objection about frequency doesn't hold, since the signal can easily be represented in many other ways, like timing, or by transforming it. And anyway, that's not how a biological machine works. The "decimals" in math serve no real purpose in reality, because everything in our universe is quantized.
@MOHANKUMARAPGPBatch3 жыл бұрын
@@nullbeyondo Still, the time representation will not be continuous, since irrational values will not be included in the domain. I think you should read more about it. A lot more.
@MikhailFederov7 жыл бұрын
I wish I had these video when I was first learning. Damn you Tom Mitchell and your formal explanations.
@nahuelgareis89272 ай бұрын
I'm finishing my degree in Software Engineering; this is one of the last courses I'm taking, and it's crazy to think that throughout all these years I've always found myself back on this channel for explanations. Honestly, thank you so much. You may never see this nor care, but I'll give you a shoutout in my graduation speech. Without you I would have never passed calculus, statistics, linear algebra, computer graphics or discrete mathematics
@ThinkTwiceLtu7 жыл бұрын
great explanation, thank you:)
@UltraRik7 жыл бұрын
Did you honestly understand any of this? Did this video honestly help you comprehend something?
@chibrax547 жыл бұрын
+Patrik Banek It did help me! But I was already familiar with these concepts. If you don't understand, watch the video again and look for different sources of explanation; it will help you :)
@UltraRik7 жыл бұрын
Okay thanks for the advice
@chibrax547 жыл бұрын
+Patrik Banek You're welcome :) If you specifically don't get how the gradient can help reduce the error, you should learn what the point of a derivative is in a single-variable function, and then dig into multivariable calculus and optimization!
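For anyone who wants the single-variable picture first, here is a minimal sketch on a toy cost C(w) = (w - 2)^2 (chosen purely for illustration): the derivative's sign tells you which way to step, and its size roughly how far.

```python
# Gradient descent on a toy one-variable cost C(w) = (w - 2)**2.
def C(w):
    return (w - 2) ** 2

def dC_dw(w):
    return 2 * (w - 2)          # derivative of the cost

w = -1.0                        # arbitrary starting point
for _ in range(50):
    w -= 0.1 * dC_dw(w)         # step downhill, proportional to the slope

print(round(w, 3))              # 2.0, the minimum of C
```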
@emberdrops38926 жыл бұрын
3:41 Oh that poor little network... Say something good to it so it's happy again!
@changyuan54043 ай бұрын
Professional, research-related basic concepts and academic material. Really clearly explained.
@VikasYadav13696 жыл бұрын
Draw a 5 for me. "I'm more of a multiple choice guy"
@AnshulGuptaAG7 жыл бұрын
That XKCD comic is how a lot of people consider neural networks to work :P Great video again, 3B1b! Edit: Waiting eagerly for your ConvNets and LSTMs :D
@somedude41226 жыл бұрын
And it certainly isn't wrong btw
@joshuaash342 жыл бұрын
I love this whole series, but your voice is so calming that when I put it on to listen to while I was at work, I almost fell asleep
@PV100085 жыл бұрын
This is the best educational channel on YouTube by a long mile.
@YaLTeRz7 жыл бұрын
Pretty sure at 11:03 the weights should either start at w1 or end at w13,001.
@3blue1brown7 жыл бұрын
Gah! Good catch.
@thesecondislander7 жыл бұрын
This just goes to show that off-by-one errors really do happen to the best ;)
@nikoerforderlich71087 жыл бұрын
+thesecondislander Well, next to cache invalidation and naming things it's one of the two big problems in computer science :P
@columbus8myhw7 жыл бұрын
It gained weight.
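For anyone counting along: the 784-16-16-10 network from the video has 13,002 tunable parameters, which is why an index running w0 ... w13,001 (or w1 ... w13,002, but not both endpoints) is the consistent choice. A quick check:

```python
# Parameter count for a 784 -> 16 -> 16 -> 10 fully connected network:
# one weight per connection, one bias per non-input neuron.
layers = [784, 16, 16, 10]
weights = sum(a * b for a, b in zip(layers, layers[1:]))  # 12544 + 256 + 160
biases = sum(layers[1:])                                  # 16 + 16 + 10
print(weights, biases, weights + biases)  # 12960 42 13002
```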
@williamwilliams10007 жыл бұрын
So my little pony, what's the appeal?
@-beee- Жыл бұрын
Keep coming back to this series and sharing it with so many people. This whole channel is truly a gift. Thank you so much for making these!
@Jessar167 жыл бұрын
Question: I have synesthesia, so numbers have colors by association. I have a mathematical language that can represent massive numbers in compact form, similar to the squishification formula you showed. For instance: 784 would be represented by a square green pixel, a square purple pixel, and a square orange pixel (3s and 7s are Green, 3s are round and 7s are square, Fours are an Orange square pixel, and 8 is represented by a Purple square pixel). It's super hard to explain the language and math, so I'm sorry if I failed to communicate it properly. If you google "ChromaRythmatics" you will find a lot of my formulas and the basis for how it works. It can also display time on a base12 clock in color using ChromaRythmatics, within a colored lemniscate with 2/3 loops, 2 for AM and 3 for PM. Once again, sorry for my failures and weakness with the English language; if anyone can understand my English, I can answer questions
@shans24087 жыл бұрын
Sounds interesting. I'd like to know more, please.
@Jessar167 жыл бұрын
Thank you! Well, again, I'll apologize for my lacking English just in case I make errors. Zero is an Orange round pixel, One is an Orange line, 4 is an Orange square pixel. Two is a Purple round pixel, 8 is Purple and square. 3 is round and Green, 7 is Green and square. 5 is a Yellow square. 6 is a Blue square. 9 is a Red square. Using these new representations for the numbers 0-9, the handwriting errors which are the initial requirement for so many calculations should be nearly abolished - if not cut to 1/3980th of the calculations, though my math could definitely be wrong. Also, computations should speed up exponentially in comparison to just black and white, since it's reading so few pixels for each number. I don't know how to program, but I think ChromaRhythmatics could be applied to a neural network to minimize errors and learning times. It could also be used in clocks and watches to represent time, within a colored lemniscate (infinity symbol). For example: "It is 3:29. In the first loop of the lemniscate, color a small circle Green. In the second loop put a Purple circle above a Red square. If it is 3:29PM, extend another loop for the lemniscate to separate the Purple circle (2) and the Red square (9)." This can also be used to teach math, having the AM lemniscate represent 3:29 (ratios) or the PM lemniscate represent 3/29 (fractions). I typically use base12 math, but then it's even harder to explain to people: dec, el, doz, gro, mo, tri-mo, etc., being black and representing large groups of numbers rather than single digits. Example: 9,000,000,000,000 being represented by a black tri-mo on a Red square, so when it is read by a program those few pixels represent 9 trillion. 9 trillion in handwriting being read by a neural network has a lot of room for errors, whereas I feel ChromaRhythmatics provides solid new number symbols to work and play with.
Sorry for rambling and being all over the place, I'm not great at explaining myself in English but it's a fun and quick way to do a lot of math. Or maybe just a potential tool for someone else much smarter than me to utilize
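A toy sketch of the digit code described above, transcribed as a lookup table (using the "8 is Purple and square" version, which matches the 784 example; this is just one possible rendering of the idea, and the function name is my own):

```python
# Digit -> (color, shape) table transcribed from the description above.
DIGIT_CODE = {
    0: ("orange", "round"),  1: ("orange", "line"),  4: ("orange", "square"),
    2: ("purple", "round"),  8: ("purple", "square"),
    3: ("green", "round"),   7: ("green", "square"),
    5: ("yellow", "square"), 6: ("blue", "square"),  9: ("red", "square"),
}

def encode(n):
    """Render a non-negative integer as one colored glyph per digit."""
    return [DIGIT_CODE[int(d)] for d in str(n)]

print(encode(784))
# [('green', 'square'), ('purple', 'square'), ('orange', 'square')]
```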
@abcdxx10595 жыл бұрын
@@Jessar16 But you will run out of colors. It seems like you're thinking differently, which could be good or not. Mind explaining more?
@suryanshvarshney111Ай бұрын
"No, bad computer! What you gave me is utter trash" - me whenever I'm programming a model
@terence0till8 ай бұрын
This is so, so much better information visualization than any of my teachers ever had! Plus your calm voice and humour. I just like it!
@acorn10147 жыл бұрын
13:33, I agree with the network on this one. That is a 4. No question.
@xera51967 жыл бұрын
A Corn, it looks like a 7 to me
@manioqqqq Жыл бұрын
@@xera5196 You mean the immediate one at the timestamp. Mr Corn means the one 2 after it.
@renner123217 жыл бұрын
First of all, I love your videos (and podcast)! :) As always, you have great animations that really support the understanding! Secondly, I think your side note on biological neurons is not entirely correct. It is true, that if a neuron "fires", i.e. emits an action potential (AP), the amplitude of the electrical signal will always be the same (so basically no amplitude = 0, high amplitude = 1). However, by changing the frequency or firing rate of emitting these action potentials (basically the rate of emitting 1's and 0's), different stimulus intensities can be encoded (more frequent APs correspond to a higher intensity stimulus, less frequent APs correspond to a lower intensity stimulus). I would therefore argue that the "activation" of a biological neuron is also continuous and not necessarily binary. Of course this has nothing to do with the content of the actual video, which is a really good (and intuitive) explanation on Gradient Descent (way more intuitive than when I learned that in university :D).
@3blue1brown7 жыл бұрын
Well, I'm certainly no expert in this matter, so I'll defer to your judgement. But I always viewed the stimulus as analogous to the weighted sum for ANNs, not the activation. That the stimulation might vary continuously in biological networks, but the actual activation of the relevant neuron has basically two states. I suppose if you consider a high frequency firing to be similar to a higher intensity firing, in that sense it could have a more continuous activation, but that does feel a bit different in character. Either way, thanks for the input!
@Kowzorz7 жыл бұрын
I wonder if that strobey kinda firing is useful in networks that loop upon themselves, perhaps to help regulate path traversal rate since I understand that biological neurons learn every firing.
@NilesBlackX7 жыл бұрын
3Blue1Brown it's kind of like with an Arduino, where you can emulate an analog signal by switching a digital signal at high frequencies. Conversely, biological neurons also fire in an additional dimension - time - which has a non-trivial impact on the function of the network... Which after trying to compress into a YouTube comment, I realize is beyond the scope of the medium. But yeah, extended firing can even bring other neurons over their activation threshold even if a shorter pulse wouldn't, which means it has a direct effect on the function of the network.
@Widixmilez7 жыл бұрын
I'd go with renner's side of the discussion. In general, what's considered an "all or nothing" event is the onset of an action potential, but the activity of the neuron itself can be much more accurately described as proportional to its firing rate (the number of action potentials it fires in a given period of time, generally expressed per second) than as simply active or inactive. What's more, the firing frequency that the axon terminal develops directly determines how much neurotransmitter will be released, which in turn will determine the firing rate of the post-synaptic neuron. There's a lot more to this, but I'd like to keep it short; it can get messy real quick
@jacobsternig35806 жыл бұрын
In honest truth, Yago Pereyra, I would appreciate learning a little more about the complexity of it. Nice to see both the artificial and biological side, and I felt you were communicating your ideas very clearly.
@joshuakahky68913 жыл бұрын
*I had always just assumed that machine learning involved an initial guess, and then a variety of random nudges that produce either a better or worse result. Nudges that produced better results would stick around, and those that produced worse would be tossed out. I figured, like evolution, the right sets of knobs and dials would just arise from the randomness after enough guesses and minor changes. But having a systematic approach to improving your neural network as quickly as possible seems so obvious and so much better than a random change in all the variables that it seems foolish I ever thought of "randomness leading to rightness" as the correct way things were done. You do such an amazing job at conveying these complex (complex to a non-CS person, at least) ideas in an approachable and understandable way that I think you've done more for the average person's enjoyment of math than you could ever fully grasp. Thank you so much for caring as much as you do.*
@Nono-de3zi3 жыл бұрын
There is a lot of randomness, though. First, even for gradient descent, the key is to generate random starting points. But of course, the most important aspect is that gradient descent is a poor way to reliably find globally optimal values. 3B1B uses it because it is easy to explain, and fairly computationally cheap. See it as "first generation optimisation". However, to avoid local minima, you need to bring randomness back (stochastic methods). And your guess was actually totally correct: a large number of these methods imitate evolution, such as Genetic Algorithms, Evolutionary Programming, etc. And then you have really cool stuff like particle swarms, etc.
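A toy sketch of that point (the double-well function and all constants here are made up): plain gradient descent settles into whichever valley it starts in, and restarting from several random points, keeping the best result, is the cheapest stochastic fix.

```python
import random

# f is a made-up double-well function: a shallow valley on the right,
# a deeper (global) minimum near x ~ -1.06 on the left.
def f(x):
    return x**4 - 2 * x**2 + 0.5 * x

def grad(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # numeric derivative

def descend(x, steps=200, lr=0.01):
    for _ in range(steps):
        x -= lr * grad(x)                    # plain gradient descent
    return x

random.seed(0)
starts = [random.uniform(-2, 2) for _ in range(10)]  # random restarts
best = min((descend(x0) for x0 in starts), key=f)    # keep the deepest result
print(round(best, 2))  # the deeper valley, near x = -1.06
```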
@personthehuman12774 жыл бұрын
7:55 I knew you were familiar! You were one of my favorite teachers at Khan Academy. I think they should include your videos in the math and physics sections.
@OrigamiCreeper5 жыл бұрын
I am so happy that Vsauce recommended this channel, because it is amazing!