Gradient descent, how neural networks learn | DL2

7,203,081 views

3Blue1Brown



Comments: 3,200
@3blue1brown · 7 years ago
Part 3 will be on backpropagation. I had originally planned to include it here, but the more I wanted to dig into a proper walk-through for what it's really doing, the more deserving it became of its own video. Stay tuned!
@HeyItsSahilSoni · 7 years ago
Can you provide a link to the training set? I'm quite new and I'm trying to learn this "Hello World" of NNs.
@pinguin1009 · 7 years ago
Did you consider doing a part about phase-functioned neural networks? Would be awesome!
@nyroysa · 7 years ago
As the series progresses, we're getting closer to seeing that "lena.jpg" picture
@akhileshgangwar394 · 7 years ago
You are doing a very good job; there's a lot of hard work behind this video. I salute your hard work, thanks!
@mynameisZhenyaArt_ · 7 years ago
So have you decided to do more of these videos? There's a whole line of CNNs and LSTMs for a video series...
@snookerbg · 7 years ago
One of YouTube's highest-quality content channels! Chapeau!
@Dom-nn1kg · 7 years ago
Kosio Varbenov +
@yoavtamir7707 · 6 years ago
True!
@JD-jl4yy · 6 years ago
'chapeau'
@achillesarmstrong9639 · 6 years ago
agree
@tubbysza · 6 years ago
Why did you write "chapeau"? It means hat in French. What does a hat have to do with that sentence????? Be honest though
@DubstepCherry · 4 years ago
I'm an IT student, and we have an assignment on exactly this topic. We even have to use the MNIST data set. I have to say, this is absolutely lifesaving and I cannot thank you enough, Grant. What you do here is something that only a handful of people on this planet can do: explain and visualize rather complicated topics beautifully and simply. So from me and A LOT of students all around the globe, thank you so so much
@vsiegel · 3 years ago
Yes, it is just extremely good, in an objective way. He is brilliant at it, and spends a lot of time on each video. If there is an explanation of something by 3blue1brown, you will not find anything explaining it nearly as well.
@johndough510 · 2 years ago
@@vsiegel bro, you guys are so much smarter than I am, I'm jealous
@vsiegel · 2 years ago
@@johndough510 If you are thinking about how smart you are, you are probably smarter than you think. No worries.
@johndough510 · 2 years ago
@@vsiegel thanks for being so cool about it man, hope you have a good one
@alfredoaguilar2076 · 2 years ago
@@johndough510 I'm kill
@plekkchand · 4 years ago
Unlike most teachers of subjects like this, this gentleman seems to be genuinely concerned that his audience understands him, and he makes a concerted and highly successful effort to convey the ideas in a cogent, digestible and stimulating form.
@hamidbluri3135 · 2 years ago
TOTALLY agreed
@redflipper992 · 1 year ago
concerted with whom? I don't think you understand how to use that word.
@werwinn · 1 year ago
he is a true professor!
@macchiato_1881 · 1 year ago
@@redflipper992 I don't think you understand what concerted means. Stop trying to act smart and think you're better than everyone here. Be humble. You are irrelevant in the big picture.
@JeffCaplan313 · 1 year ago
@@redflipper992 I read that as "concerned"
@melkerper · 5 years ago
Disappointed you did not animate a 13,000-dimensional graph. Would make things easier
@havewissmart9602 · 5 years ago
No.... No it would not....
@mrwalter1049 · 5 years ago
A 2-dimensional projection of a 13000-dimensional graph would probably look like a pile of garbage.
@cchulinn · 5 years ago
If 3Blue1Brown cannot animate a 13,000-dimensional graph, then no one can.
@tehbonehead · 4 years ago
@@mrwalter1049 You're not thinking fourth-dimensionally!!
@mrwalter1049 · 4 years ago
@@tehbonehead No-one is. We're trapped in three dimensions. That's why you could never imagine what a 4-dimensional cube looks like. Making a 4-dimensional projection of a 13000-dimensional object isn't significantly better than 3 dimensions. If you meant to be humorous I hope someone gets a chuckle, because I didn't. Then your effort won't be in vain. Have a nice day 🙂
@Shrooblord · 7 years ago
I'm only 12 minutes into this video right now, but I just wanted to say how much I appreciate the time and spacing you give to explaining a concept. You add pauses, you repeat things with slightly different wording, and you give examples and zoom in and out, linking to relevant thought processes that might help trigger an "a-ha" moment in the viewer. Many of these "hooks" actually make me understand concepts I've had trouble grasping in Maths, all because of your videos and the way you choose to explain things. So thanks! You're helping me a lot to become a smarter person. :)
@Dom-nn1kg · 7 years ago
Shrooblord +
@luke7503 · 6 years ago
yes.
@Appscaptain · 6 years ago
Totally agree!
@atulct · 5 years ago
Absolutely agree
@CaerelsJan · 5 years ago
Couldn't agree more
@colonelmustard7078 · 1 year ago
Not only are the videos themselves great on this channel, but the lists of supporting materials are amazing too! They drive me down a breathtaking rabbit hole every time! Thank you!
@bikkikumarsha · 6 years ago
You are changing the world, shaping humanity. I wish you and your team a happy and peaceful life. This is a noble profession, God bless you guys.
@jasonzhang6534 · 9 months ago
My professor explained this in 3 lectures over about 6-7 hours. 3B1B explained it in 30 minutes, and it is much clearer. I can now visualize and understand the what/why/how behind the basic deep learning algorithms. Really appreciate it!!!
@bradleyhill5493 · 7 months ago
Same!
@Nyhilo · 7 years ago
After watching your first video, I ended up drawing a "mock" neural network on paper that would work on a 3x3 grid (after all, what else are you supposed to do during a boring lecture?). It was supposed to recognize boxes, x's, sevens, simple shapes, and I defined the 7 or so neurons that I thought it might need by hand. I did all the weighted sums and sigmoid functions on paper with a calculator in hand. It took maybe an hour and a half to get everything straight, but once I did, it worked. It guessed with fairly good accuracy that the little seven I "inputted" was a little seven. All that excitement because of your video. Later that evening and the next, I tried to program the same function taking PNGs as inputs along with the neuron definitions, and honestly it was only a little more rewarding. But now that I see what the hidden neurons *actually* look like, I only want to learn so much more. I expected the patterns to be messy, but I was really surprised to see that it really does almost look like just noise. Thank you for making these videos. I find myself suddenly motivated to go back to calculus class tomorrow and continue our lesson on gradients. There's just so much out there to learn, and it's educators like you that are making it easier for curious individuals like me to get there.
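The hand computation described above (weighted sum over a 3x3 grid, then a sigmoid) can be sketched in a few lines. The weights, bias, and the "right-column detector" below are made-up illustrative values, not anything from the video or the commenter's notes:

```python
import math

def sigmoid(z):
    # squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(pixels, weights, bias):
    # one neuron: weighted sum over the 9 pixels of a 3x3 grid,
    # plus a bias, passed through the sigmoid
    z = sum(p * w for p, w in zip(pixels, weights)) + bias
    return sigmoid(z)

# Hypothetical hand-chosen "right column is lit" detector:
weights = [0, 0, 1,
           0, 0, 1,
           0, 0, 1]
bias = -2.0

seven = [1, 1, 1,
         0, 0, 1,
         0, 0, 1]   # a crude 3x3 "7": top row plus right column
blank = [0] * 9

print(neuron(seven, weights, bias))   # high activation
print(neuron(blank, weights, bias))   # low activation
```

With these numbers the "7" drives the neuron to about 0.73 while a blank grid sits near 0.12, which is the kind of contrast the hand calculation produces.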
@3blue1brown · 7 years ago
That's so cool, thanks for sharing! I didn't expect anyone to actually go an play with it by hand, but simplifying down to a 3x3 grid seems really smart. Stay curious!
@Dom-nn1kg · 7 years ago
Nyhilo +
@Dom-nn1kg · 7 years ago
3Blue1Brown +
@jayeshsawant6734 · 6 years ago
Were you able to do all of that by watching this video series alone? Can you please add the other resources you referred to? Thanks!
@michaelhesterberg702 · 6 years ago
.....go anD play... AND AN+D
@kraneclaims · 4 years ago
I just sat through a 3-day ML accelerator class, and your series did a far better job of explaining it with 4 twenty-minute videos. Well done mate. Really appreciate it. Thank you
@NoobJang · 1 year ago
This YouTuber is the best at maths and engineering in general. I have never been so astounded by how easy learning machine learning can be, without having to take in a bunch of complex topics that don't add to the discussion. Most courses try to make you understand various complex topics, and by the time you've finished, you'll have forgotten most of the machine learning. Why not just explain the catch for each concept, then let us learn it in depth afterwards? Channels like this, which explain the concepts with both ease of learning and depth, are the best.
@dsmogor · 6 months ago
I think what sets this material apart from the competition is the author's intuition for the focal points where the audience might lose the plot. He then takes a patient and systematic turn to reiterate what has been learned so far, reinforcing the basics to decrease the cognitive leap needed to grasp the next step. This ability is, in my experience, pretty unique.
@seC00kiel0rd · 7 years ago
My math career is over. Once I learned about gradient descent, it was all downhill from there.
@rlf4160 · 7 years ago
I had a similar fate, except mine went negatively uphill.
@yepyep266 · 6 years ago
just remember there are people in an even lower minimum than you are.
@jomen112 · 6 years ago
Yea, but making random choices makes you eventually reach the bottom.
@thetinfoiltricorn7797 · 6 years ago
It's all planar vectors from here.
@nateschultz8973 · 5 years ago
You just need to take a few steps back and turn your life around.
@hikaruyoroi · 7 years ago
I love you so much. I'm taking multivariate calculus and I'm doing some neural network work right now, and none of my teachers have the passion or the capability to teach as well as you. You help me keep my passion for learning alive
@ssa3101 · 6 years ago
Incompetence galore.
@JockyJazz · 3 years ago
3:38 you missed the chance to use the meme *"AI: I've found an output, but at what cost?"*
@hangilkim245 · 5 years ago
"But we can do better! Growth mindset!" at 5:18... a wholesome intellectual, I love to see it
@souvikroy7570 · 4 years ago
Hands down, I have never seen anyone explain mathematics as beautifully as he does. Kudos!
@welcome2bangkok-d1x · 1 year ago
I have no words to describe how thankful I am. Thank you so much for such great content.
@Skydmig · 7 years ago
That end comment with Lisha Li really points out how important it is to put a lot of effort into gathering and creating good and structured data sets. I know it's cliché to state "garbage in, garbage out", but these findings put very precise context and weight to this particular issue.
@atlas7425 · 7 years ago
Haha, "weight"... get it?
@theespatier4456 · 6 years ago
StiffWood True. This also becomes ethically important in medical applications of AI, where poor input can create racist AI and the like.
@cody._.--._.--. · 5 years ago
I wish someone had introduced this to me at a young age back in the 90s. I had no idea neural networks have existed for so long
@lopezb · 4 years ago
Now it's easier to explain. He couldn't have made a video like this back then, both because YouTube didn't exist and because all the relevant stuff would be in technical papers...
@Djorgal · 4 years ago
@@lopezb Also, it was a really niche field that didn't show that much promise.
@damienivan8946 · 4 years ago
Also, from my understanding, modern neural networks are very different from the ones in the 90s
@bubblelyte401 · 4 years ago
It's a college graduate course.
@georgalem3310 · 4 years ago
In the 90s, NNs fell into disfavor.
@imad_uddin · 3 years ago
Can't believe you explained this so easily. I thought it would take me ages to wrap my head around what neural networks basically are. This is a truly amazing explanation!
@obsidianblade4228 · 6 years ago
Did anybody else feel bad for the network after he called the output utter trash?😢
@sarahmchugh4169 · 5 years ago
I know, especially with those sad computer eyes. Tragic
@MarioRodriguez-or9fn · 4 years ago
Yes, especially when he called it a bad computer :(
@DavidLee8981 · 4 years ago
we are all utter trash for future robots
@theshermantanker7043 · 4 years ago
Bruh there's literally Reinforcement Learning where the Network is tortured by the researchers when it gets a wrong answer and the torture continues until it gets the right answer lol
@David5005ful · 4 years ago
Lmao.
@darshita1270 · 3 years ago
Math courses in my college are basically trash compared to your videos. Finally, I now understand how math is applied in computer science. Thank you so much for teaching in such an illustrative way.
@matteo7861 · 23 days ago
I'm a grad student in physics and I wanted to thank you. It is insane to find such good videos on such advanced subjects!
@andrasiani · 7 years ago
How can anyone dislike these videos? Very detailed, accurate explanations and cool animations. Keep up the good work!!
@syedabdulsalam4659 · 5 years ago
stupid people are everywhere.
@pseudo_goose · 4 years ago
Some thoughts on the results:

1. 14:01 The weights for the first layer _seem_ to be meaningless patterns when viewed individually, but combined, they do encode some kind of sophisticated pattern detection. That particular pattern detection isn't uniquely specified or constrained by this particular set of weights on the first layer; rather, there are infinitely many ways that the pattern detection scheme can be encoded in the weights of this single layer. These infinite other solutions can be thought of as the set of matrices that are row-equivalent to the 16x700ish matrix where each row is the set of weights for each of the neurons on this layer. You can rewrite each of the rows as a linear combination of the set of current rows, while possibly still preserving the behavior of the whole NN by performing a related transformation to the weights of the next layer. In this way, you can rewrite the patterns of the first layer to try and find an arrangement that tells you something about the reasoning. Row reduction in particular might produce interesting results!

2. 15:10 I think I understand the reason why your NN produces a confident result - it's because it was never trained to understand what a number _doesn't_ look like. All of the training data, from what I can tell, is numbers associated with 100% confident outputs. You'd want to train it on illegible handwriting, noise, whatever you expect to feed it later, with a result vector that can be interpreted as 0% confidence, by having small equal weights, having all weights to zero, or maybe an additional neuron that the NN uses to report "no number".
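Point 1 can be checked directly in the purely linear case (the sigmoid between layers complicates the exact argument): left-multiplying the first weight matrix by an invertible M and right-multiplying the next layer's weights by M⁻¹ leaves the composite map unchanged. A toy sketch, with made-up 2x2 matrices standing in for the real layers:

```python
def matmul(a, b):
    # multiply two matrices given as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# Toy two-layer *linear* network: output = W2 @ W1 @ x (activations ignored).
W1 = [[1.0, 2.0], [3.0, 4.0]]
W2 = [[0.5, -1.0], [2.0, 1.5]]

# An invertible change of basis and its inverse (det M = 2*1 - 1*1 = 1).
M = [[2.0, 1.0], [1.0, 1.0]]
M_inv = [[1.0, -1.0], [-1.0, 2.0]]

# Rewrite the first layer's rows, and compensate in the second layer.
W1_new = matmul(M, W1)      # different "patterns" in layer 1
W2_new = matmul(W2, M_inv)  # adjusted read-out weights

print(matmul(W2, W1))          # original composite map
print(matmul(W2_new, W1_new))  # identical composite map
```

So in the linear setting the first layer's weight images really are only determined up to such a change of basis, which is why staring at them individually can be misleading.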
@watchm4ker · 2 years ago
2 is a painfully easy mistake to make, because it requires the human assembling the training data to think outside the box.
@josboersema1352 · 1 year ago
Quite hard to read your comment, but it seems that we have the same idea: the neural network _is_ detecting smaller elements like "edges and loops" (as the video author puts it), assuming those pictures at 14:01 are of the actual results. The next layer then starts combining these elements, and it seems that if you stare at it long enough you can almost start guessing what it might be doing: say, row 1 column 2 = strong + row 3 column 1 = strong + row 3 column 4 = strong + row 2 column 4 = weak + row 1 column 4 = weak, and you might be going toward an 8 on those counts; depending on some other combination of pattern strengths it might be a 6 or 9 if some of those patterns matched with the input indicate an absence of signal upper-right or lower-left. This is almost certainly not accurate as an example, but it seems to be the theme of how it works. 16:05 _"... picking up on edges and ... not at all what the network is doing."_ This statement in the video seems wrong. P.S. If the first part above is true, then the neural network might be capable of drawing a 5 (15:39). You just have to extract that answer in the way that it is stored in there, which is a bit more involved than following the network's normal operation, for which it is built. If you look at what combination of patterns from the first layer's output, in what strengths, leads to a number (5, for example), then you could superimpose those patterns onto each other, and that would be what this neural network thinks that number looks like. It shouldn't be too hard to write such a function for the already-trained network, to draw this out.
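The "superimpose the patterns" idea amounts to a crude linear back-projection: weight each first-layer pattern by how strongly it feeds the chosen output, and sum. Everything below (the tiny 4-pixel "images", the weights) is made up for illustration; a real attempt would need the actual trained weights, and this ignores the nonlinearities, so it gives only a rough visualization:

```python
# Hypothetical trained weights: 3 hidden units over a 4-pixel "image",
# plus the weights from those hidden units to the output neuron for "5".
hidden_patterns = [
    [0.9, 0.1, 0.0, 0.2],   # hidden unit 0's weight image
    [0.0, 0.8, 0.7, 0.1],   # hidden unit 1's weight image
    [0.3, 0.0, 0.1, 0.9],   # hidden unit 2's weight image
]
weights_to_five = [1.5, -0.5, 0.8]  # pull of each hidden unit on "5"

def back_project(patterns, out_weights):
    # superimpose the first-layer patterns, each scaled by its
    # pull on the chosen output neuron
    n_pixels = len(patterns[0])
    return [sum(w * p[i] for w, p in zip(out_weights, patterns))
            for i in range(n_pixels)]

template_for_five = back_project(hidden_patterns, weights_to_five)
print(template_for_five)  # one value per pixel: the crude "5" template
```

For the real 784-pixel network you would reshape the result back into a 28x28 image; as the comment predicts, such linear templates tend to look noisy rather than like a clean digit.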
@DavidMauas-j6t · 14 days ago
You know something? Your channel is the best math teaching I have ever laid eyes upon. It is brilliant, beyond amazing. Everything is meticulously choreographed to perfection. I WISH I could have learned math from someone like you when I was younger, and I am so happy I get to have the occasional brush with your videos. They are sublime.
@Shubhi021 · 1 year ago
This video is truly a work of art. The animations are mesmerizing. Thank you for all your work, Grant!
@sweepy84 · 5 years ago
You sir deserve a Nobel, or an Oscar... what an incredibly effective method of teaching. Thank you so very much!!! "NO! BAD COMPUTER!" made me crack up! lol
@shakhaoathossain5032 · 3 years ago
A Ballon d'Or too
@sohambhattacharjee951 · 2 years ago
@@shakhaoathossain5032 XD good one.
@Phoenix-nh9kt · 2 years ago
@@shakhaoathossain5032 add a Grammy in there too hahaha
@ChemEDan · 2 years ago
The fact it was recorded digitally means he said that to a real computer. 😭 AND SO DID YOU!!! 😠
@mashab9129 · 4 months ago
After looking through many Udemy, O'Reilly and other YouTube videos, I finally found this one: beginner-friendly but at a profound enough level, explained in a comprehensible way that does not lose you in the middle by jumping from ABC to a hard concept. This channel is a gem. Thank you!
@GAment_11 · 6 years ago
When I watch your videos, all I want to do is keep going. Thanks for motivating me, as well as others, with your amazing content. I really appreciate it.
@bishalthapaliya4069 · 4 years ago
Probably even a 5-year-old would master deep learning when taught this way. What a video, man! Awesomeeeeeeeee
@Heisenberg355 · 2 years ago
This man is a living legend. I sincerely believe he's one of the best "explainers" of many complex mathematical topics. I found your channel because of linear algebra, and now I'm relieved whenever I search for a topic and see one of your videos. You truly are in a league of your own
@superj1e2z6 · 7 years ago
Watching 3b1b Step 3b. Drop Everything Step 1b. Watch religiously.
@jonasvanderschaaf · 7 years ago
oh the accuracy of this comment
@spiderforrest7816 · 7 years ago
My god I relate
@Petch85 · 7 years ago
For me it is: step 3b: make sure you are ready, you need to be 100% focused; step 1b: watch it critically, and be sure not to strengthen your misbeliefs. If it seems simple and obvious, I am probably misunderstanding it.
@fossilfighters101 · 7 years ago
+
@Cosine_Wave · 7 years ago
counting level: Parker
@tonraqkorr230 · 6 years ago
We need AI to recognise what doctors write
@frankchen4229 · 3 years ago
whoever designs the algorithm and engineers the software deserves a Nobel Peace Prize
@flyinglack · 3 years ago
@@frankchen4229 LOL
@johnbarbuto5387 · 3 years ago
Who writes any more??? That horse left the barn a long time ago. Besides, we are no longer doctors; courtesy of insurance companies, we are "providers". (The same strategy of devalued identities has long been used by invading armies to anonymize those being conquered, an apropos metaphor.)
@MrWite1 · 3 years ago
@@johnbarbuto5387 why so mad
@centerfield6339 · 2 years ago
@@johnbarbuto5387 not courtesy of insurance companies; courtesy of the fact that healthcare needs to be paid for. State systems are also payer systems.
@gersonribeirogoulart9895 · 1 year ago
When something is amazing, it looks like your work. Even your background voice is totally understandable, legit and direct
@musthavechannel5262 · 6 years ago
"I'm more of a multiple choice guy" LOL
@Tri_3st · 7 years ago
Hi 3B1B. As a technical physics student who has been interested in this topic for quite a while, and has also enjoyed your content for quite a while, I really want to thank you, not only for going into this topic in particular, but also for educating a relatively large audience with your informative videos and sparking interest in the mathematical sciences for a lot of people, including me, which is pretty important in my opinion! Keep it up!!
@S8EdgyVA · 2 months ago
I just adore the idea of making a function whose input is the parameters of another function, and whose output tells you how close that function's output was to a certain desired output. It sounds complex, but it's actually both simple AND beautiful once you understand the concept
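That "function of a function's parameters" is exactly the cost function that gradient descent minimizes. A minimal sketch, where the toy model y = w*x + b and the data are made up for illustration:

```python
# Toy data generated by a hypothetical target function y = 2x + 1.
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]

def cost(w, b):
    # the "function of parameters": mean squared error of the
    # model w*x + b against the data
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def grad(w, b):
    # partial derivatives of the cost with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    return dw, db

# Gradient descent: repeatedly nudge (w, b) downhill on the cost surface.
w, b = 0.0, 0.0
learning_rate = 0.05
for _ in range(2000):
    dw, db = grad(w, b)
    w, b = w - learning_rate * dw, b - learning_rate * db

print(round(w, 3), round(b, 3))  # should land near w = 2, b = 1
```

The network in the video is the same idea, just with about 13,000 parameters instead of two, so the "downhill" direction has to be computed by backpropagation rather than by hand.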
@olesyabondar4826 · 4 years ago
The graphics of this video are absolutely stunning! Thank you for your work ♡
@Iextrimator · 7 years ago
Absolutely love your videos! I'm trying to show this video to my friends who don't know English so well, and I decided to make subtitles. Hope you approve them; I really want to spread the word about your work.
@VincentKun · 2 years ago
I saw this video when I knew nothing, and I had lots of intuitions; I'm rewatching after studying a lot more, and I'm still learning a lot. You're a great teacher
@amagicpotato5511 · 7 years ago
Hi 3b1b, I love your vids and they are one of the reasons why I know so much about how the universe works. Your channel inspires me to know more, and you show the beauty of all of it. Please don't ever stop making these videos; you are making so many lives greater.
@3blue1brown · 7 years ago
Thanks so much Amagic, I'll do my best.
@charliedexter3202 · 7 years ago
Hello... I would like to learn how to make these animations. I teach math, particularly statistics at the graduate level, and I find the way you make numbers illustrate the idea actually helps one understand the flow of the parameters in question much better. Do give me some leads so that I can pick this up; I shouldn't have problems programming once I know which platform to work with.
@neerajtiwari5365 · 7 years ago
@Amagic potato, for the life of me, I cannot possibly figure out how watching 3blue1brown's videos helped you attain that enlightenment about How The Universe Works...
@flumsenumse · 7 years ago
+charlie dexter He has all of the code for his animations (written in Python) here: github.com/3b1b/manim
@SPYTHandle · 5 years ago
How confident I feel in my current knowledge of neural networks: 15:41 - *"Uh...I'm really more of a multiple choice kinda guy."*
@natchu96 · 3 years ago
The neural networks themselves generally feel the same, so at least we won't be alone in that sentiment. Assuming thinking rocks and metal count as good companionship, at any rate.
@mtrifiro · 8 months ago
I am so happy I discovered this today. I ignored all (well, most) of the math, and I still came away with a pretty solid understanding of how it works. Your explanations are ridiculously clear; you have a gift.
@skintaker1949 · 4 years ago
So uhhhh, did you just say that this was the "Hello World!" of neural networking.....
@GabrielCarvv · 4 years ago
​@Winston Mcgee "p r e t t y m u c h i t"
@rudigerbrightheart7304 · 4 years ago
Well, the dataset is the hello world, because it is the first image set that people take to test or learn about an algorithm.
@junkailiao · 4 years ago
Because you don't need all that knowledge to build a network that can read digits. It's so easy with Keras that even my grandmother could do it
@domizianostingi9504 · 4 years ago
Yes it is, because the dataset is very clean and a CNN through Keras is very easy to implement, though you need a big background in math and code-writing (I'm a statistician so I have a little bit of both) :)
@namlehai2737 · 4 years ago
Use a package. People already did the hard stuff; you just have to call their functions / use their models
@jabug_1144 · 5 years ago
Once I graduate and start working, I'm gonna send you the money I owe you for watching all these videos. I'm doing a BSEE for control systems, so hopefully it works out.
@johannes523 · 1 month ago
This is, like, literally the most important video on the internet.
@jamesluc007 · 7 years ago
You explained in less than 4 minutes something that took me several days to understand from other sources. You are awesome!
@bytenommer · 7 years ago
Could you please just drop everything else you are doing and do these videos full time for the rest of your life.
@iLoveTurtlesHaha · 6 years ago
But his videos are a result of his other interests. XD
@raycharlestothebs · 5 years ago
@@iLoveTurtlesHaha Just like you saying 'XD' is......
@thibauldnuyten2891 · 5 years ago
Sadly people still need to work to fricking live.
@sgracem2863 · 3 years ago
@@thibauldnuyten2891 Wouldn't he be rich off these videos though? I mean they all have millions of views and he almost has 4m subs
@AarshWankarIITGN · 7 months ago
Thanks a lot. At 2:45 in the morning, sitting peacefully in the hostel of my institute, you actually cleared a lot of things up in the first two videos. This is the first time I understood to some extent what gradient descents and weights and cost functions were all about. Looking forward to continuing this journey of learning on your awesome channel 😃
@genoir-itsmusicart9169 · 6 years ago
This is mindblowingly interesting and extremely well explained. Thank you!
@blunderbus2695 · 6 years ago
"It's actually just calculus." "Even worse!" i'm dead
@fitokay · 5 years ago
Actually, AI just lies to the people of the world
@fitokay · 5 years ago
so far
@the.abhiram.r · 3 years ago
calculus is the easiest form of math
@ahmedezat1353 · 3 years ago
@@the.abhiram.r I hope you're joking
@imtanuki4106 · 10 months ago
Possibly one of the best mini-courses on ML anywhere. Clearly explained concepts, beautiful post-production. Kudos!
@nourddinesofiir3525 · 7 years ago
Thanks for part 2; I was waiting for it impatiently.
@danielamurphy8560 · 1 year ago
I'm doing my Masters in applied Econ right now and we briefly went over Neural Networks in my advanced econometrics class. Some of the terminology was a bit different and I felt like I could understand it decently in office hours with my professor, but this was still a great resource to solidify my understanding of the concept. (Also we looked at the MNIST dataset in class too) :D
@eshanhembrom6633 · 9 months ago
As someone who asks why for every statement, I appreciate the way you explain the logic behind everything.
@jacoblund8289 · 7 years ago
I love how he has people like Desmos and Markus Persson supporting him on Patreon
@cgmiguel · 5 years ago
Your videos, with such wonderful LaTeX animations, are on the level of award-winning BBC documentaries. Very impressive, to say the least.
@soliduscode · 2 years ago
I agree. I need to learn more about these LaTeX animations
@rob651 · 2 years ago
I've watched many videos and done some reading on how neural networks work (learn), but I couldn't find a satisfactory explanation until I watched this video. Your examples, analogies, visuals... were just perfect. Thank you so much.
@ajnelson1431 · 7 years ago
3:40 "NO! Bad computer!"
@stydras3380 · 7 years ago
AJ Nelson Bad boy!
@NF30 · 7 years ago
I felt so sorry for the computer...
@johnchessant3012 · 7 years ago
"To say that more mathematically..."
@Shockszzbyyous · 7 years ago
I heard Eric Cartman say it.
@Dom-nn1kg · 7 years ago
+
@siddheshmisale3904 · 4 years ago
I'd just like to take a moment here to appreciate the sheer brilliance of Grant in this series. I would not have reached a decent level of NN understanding w/o these explanations, and neither would so many other people. The single best series on NNs / math out there in general.
@super266 · 2 years ago
You are the best science teacher I have ever seen. If anyone upstairs is serious about our education system, they should use your videos as a baseline for how to teach properly; you never use a term that wasn't clearly defined prior, you use analogies perfectly, and you tie new technical info back to the original concept, making sense of how the new info fits into the larger picture. If my high-school and college teachers had been like you, I would have done infinitely better at school.
@shaylempert9994 · 6 years ago
Pause and ponder?! Every 10 seconds I stop for a minute of thinking! And on all of your videos! This time, there was a point where I thought for like half an hour.
@minerawesome28 · 7 years ago
I was looking forward to this video all week.
@navidutube · 5 months ago
This is simply the best channel on YouTube
@shwetamayekar1863 · 5 years ago
Love the eye/pi animations! :) Gets me smiling amidst all the complexities of neural networks 😲
@rubyjohn · 6 years ago
BEST VISUAL THERAPY IN MY LIFE
@WiredWizardsRealm-et5pp · 4 months ago
Man, it feels so good to learn everything in one shot now: neural networks, gradient descent, backpropagation. I used to get frustrated with a lot of challenging concepts because I didn't know the maths and the AI terms, but now, after learning it for a year, it feels worth it. Thanks to the 3Blue guy; whatever course he touches is worth more than all lectures combined. It's just pure core concepts with animation. Top quality.
@nigeljohnson9820 · 7 years ago
Humans have a habit of seeing images in random data, such as clouds or craters on the Moon or Mars, or hearing voices in random radio static. Is this similar to identifying a 5 in a field of random data?
@5up3rp3rs0n · 6 years ago
Well, for humans, you see things from a shape or outline that looks like a particular object, kind of like the "see a digit by the loops and lines it has" idea for this system. So it's not at all the same as picking a number and being very confident about it from static.
@seditt5146 · 6 years ago
But is that what our brain is doing? Is it looking at static, or are our neurons going... ok, straight line... then round edge... another round edge... hmmm, that looks like the other 5s I've seen... then triggering memory banks to look for other 5s and comparing again.
@jomen112 · 6 years ago
No. As explained in the video, the network has been punished more for providing multiple answers than for single-output wrong answers. That means a multiple answer does not exist as an option for the trained network, i.e. the set of output patterns it has been trained to respond with does not contain multiple choices. That is to say, the alternative answers "I don't know" or "maybe this or that" do not exist for the network. Regarding clouds or craters, this is not "random data"; the shapes we recognize are real and can be agreed to exist. This is not the case with noise, i.e. random data. By definition, random data contains no pattern, and that is why noise carries no meaning to our brains. Regarding hearing voices in random static, I would suspect you only hear voices if there is a pattern (signal) of some form which the brain picks up on and tries to make sense of. How prone you are to hear an actual voice might depend on how your brain has been trained, i.e. biased, to detect voices (for instance, if you believe one can communicate with ghosts, you might be more prone to hear voices where others hear none). Because in the end, detecting meaning, i.e. labeling stuff, is all about being biased towards a certain interpretation of reality. So to conclude, the "reality" for the neural network in the video is biased, or limited, towards single-neuron outputs, and anything it "perceives" will get a response as such. However, human brains are a little bit more complex and biased differently, i.e. wired up in unique ways, which makes for the diversity in beliefs and reasoning among people.
@parthshrivastava6325
@parthshrivastava6325 6 years ago
That falls under the imagination bracket; it's more like changing the value of the pixels instead of the weights or biases to get a desired output.
@ottrovgeisha2150
@ottrovgeisha2150 5 years ago
@@seditt5146 Not likely. Nobody knows. The brain is truly bizarre, and the connections between cells are actually wired differently. Now ask yourself: how does a brain know it exists, how are feelings developed, etc.? The brain is still a mystery.
@hakimr7986
@hakimr7986 5 years ago
15:19 seems interesting, just like you have to train your own (biological) NN to draw a human face, although you saw millions of them
@TrendyContent
@TrendyContent 5 years ago
Hakim R that's a very good analogy
@sohampatil6539
@sohampatil6539 3 years ago
This might be relevant: look up generative adversarial networks
@anjanit2006
@anjanit2006 1 year ago
Surprised u pulled this off real well. I am 26 years old and working at Google in a 1.3 crore IT job. I am about to be a millionaire, all because of u. Like seriously, u are the most helpful person in my life.
@Treegrower
@Treegrower 7 years ago
@ 3:39 Wow... I didn't realize 3B1B likes to bully neural networks. That was ruthless.
@Brian.001
@Brian.001 5 years ago
Yes, it's a jungle in there.
@oskarjung6738
@oskarjung6738 5 years ago
@- RedBlazerFlame - 'Oversimplified' reference
@theshermantanker7043
@theshermantanker7043 4 years ago
There's a training method called Reinforcement Learning where you literally torture the network when it gets the wrong output lol
@theflaminglionhotlionfox2140
@theflaminglionhotlionfox2140 3 years ago
Me in part 1: Ah I think I'm starting to understand this whole thing. Me in part 2: Nevermind...
@saicharansigiri2964
@saicharansigiri2964 3 years ago
exactly
@osmanyasar9602
@osmanyasar9602 3 years ago
Once you learn more math it will be meaningful. I guess if you don't understand this video then something is missing in your calculus and/or linear algebra
@FivosTheophylactou
@FivosTheophylactou 4 months ago
Rewatch it 3 times. I did
@karthikrajeshwaran1997
@karthikrajeshwaran1997 9 months ago
This is outstanding. It deserves a Nobel Prize for the clarity of explanation.
@Maffoo
@Maffoo 7 years ago
This series is fantastic and just the right level of being complex but understandable. Thanks!
@geregeorge1589
@geregeorge1589 6 years ago
At the 16 minute mark, I got sucker punched. After having gone through this and the previous video on machine learning and just loving how an art student like myself is enjoying math such as this and feeling like I'm making some progress..... You tell me that this is all stuff that was figured out in the 80s and I'm like...... Oh Come On! Lol!
@apuapustaja2047
@apuapustaja2047 5 years ago
Honestly, the 80s is actually very recent compared to other stuff. In math undergrad I was learning concepts from the 1800s lmao
@SimberLayek
@SimberLayek 5 years ago
@@apuapustaja2047 yup! Math is older than all of us... it's our discoveries that are "new"~
@DiegoGonzalez-vn3qx
@DiegoGonzalez-vn3qx 5 years ago
Honestly, don’t feel discouraged. General Relativity was formulated almost a century ago, but that doesn’t mean it is easier to understand.
@SimberLayek
@SimberLayek 5 years ago
@Dark Aether some definitely could say that~
@DiegoGonzalez-vn3qx
@DiegoGonzalez-vn3qx 5 years ago
@Dark Aether What do you even mean by that? Right now, we are living in a moment in which scientific knowledge is being acquired at the fastest rate we have ever seen. The number of active scientists right now, as you might expect, is the largest in history. Now, if you are talking about "raw" intelligence... well, I'm pretty sure evolving into creatures with a noticeably higher intelligence is going to take a long, long, long time.
@somag6810
@somag6810 10 months ago
"Our growth mindset is reflected when we always ask whether we can do better!" You are always awesome. Thanks for all the informative videos that impart a lot of fundamental knowledge to people like me.
@HaouasLeDocteur
@HaouasLeDocteur 7 years ago
WOO BEEN WAITING FOR THIS
@vivekd296
@vivekd296 7 years ago
I found your video on Jacobians on Khan Academy. At first I was like, "I don't know this new person, he's not Sal," and then I read the comments and found out it was you!! It was a pleasant surprise.
@sashimanu
@sashimanu 4 years ago
10:10 biological neurons are continuous-valued as well: their firing frequency varies.
@MOHANKUMARAPGPBatch
@MOHANKUMARAPGPBatch 4 years ago
Still, the frequency cannot be decimal, right? So it's still a discrete input where calculus cannot be applied...
@nullbeyondo
@nullbeyondo 3 years ago
@@MOHANKUMARAPGPBatch No. Calculus can always be applied, and your idea of a frequency is horrible since it can easily be represented by many other methods, like time, or by transforming it. And anyway, that's not how a biological machine works. The "decimals" in math serve no real purpose in reality because everything in our universe is quantized.
@MOHANKUMARAPGPBatch
@MOHANKUMARAPGPBatch 3 years ago
@@nullbeyondo Still, the time representation will not be continuous since the irrational values will not be included in the domain. I think you should read more about it. A lot more.
@MikhailFederov
@MikhailFederov 7 years ago
I wish I had these videos when I was first learning. Damn you Tom Mitchell and your formal explanations.
@nahuelgareis8927
@nahuelgareis8927 2 months ago
I'm finishing my degree in Software Engineering; this is one of the last courses I'm taking, and it's crazy to think that throughout all these years I've always found myself back on this channel for explanations. Honestly, thank you so much. You may never see this nor care, but I'll give you a shoutout in my graduation speech. Without you I would never have passed calculus, statistics, linear algebra, computer graphics or discrete mathematics.
@ThinkTwiceLtu
@ThinkTwiceLtu 7 years ago
great explanation, thank you:)
@UltraRik
@UltraRik 7 years ago
Did you honestly understand any of this? Did this video honestly help you comprehend something?
@chibrax54
@chibrax54 7 years ago
+Patrik Banek It did help me! But I was already familiar with these concepts. If you don't understand, watch the video again and look for different sources of explanation; it will help you :)
@UltraRik
@UltraRik 7 years ago
Okay thanks for the advice
@chibrax54
@chibrax54 7 years ago
+Patrik Banek You're welcome :) If you specifically don't get how the gradient can help reduce the error, you should learn what the point of a derivative is in a single-variable function, then dig into multivariable calculus and optimization!
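For anyone who wants that derivative idea made concrete, here is a minimal sketch with a made-up single-variable cost function (my own toy numbers, not the video's 13,002-parameter network): repeatedly stepping a weight against the slope of the cost walks it downhill to the minimum.

```python
def cost(w):
    # toy single-variable cost, minimized at w = 3
    return (w - 3) ** 2

def slope(w):
    # derivative of the cost above: d/dw (w - 3)^2 = 2(w - 3)
    return 2 * (w - 3)

w = 0.0              # arbitrary starting weight
learning_rate = 0.1  # size of each downhill step
for _ in range(100):
    w -= learning_rate * slope(w)  # step against the gradient

print(round(w, 3))  # ends up essentially at 3, the minimum
```

With many weights, the same update is applied to every coordinate at once using the gradient vector instead of a single derivative.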
@emberdrops3892
@emberdrops3892 6 years ago
3:41 Oh that poor little network... Say something good to it so it's happy again!
@changyuan5404
@changyuan5404 3 months ago
Professional, research-related basic concepts and academic material. Really clearly explained.
@VikasYadav1369
@VikasYadav1369 6 years ago
Draw a 5 for me. "I am a more of a multiple choice guy"
@AnshulGuptaAG
@AnshulGuptaAG 7 years ago
That XKCD comic is how a lot of people consider neural networks to work :P Great video again, 3B1b! Edit: Waiting eagerly for your ConvNets and LSTMs :D
@somedude4122
@somedude4122 6 years ago
And it certainly isn't wrong btw
@joshuaash34
@joshuaash34 2 years ago
I love this whole series, but your voice is so calming that when I put it on to listen to while I was at work, I almost fell asleep
@PV10008
@PV10008 5 years ago
This is the best educational channel on KZbin by a long mile.
@YaLTeRz
@YaLTeRz 7 years ago
Pretty sure at 11:03 the weights should either start at w1 or end at w13,001.
@3blue1brown
@3blue1brown 7 years ago
Gah! Good catch.
@thesecondislander
@thesecondislander 7 years ago
This just goes to show that off-by-one errors really do happen to the best ;)
@nikoerforderlich7108
@nikoerforderlich7108 7 years ago
+thesecondislander Well, next to cache invalidation and naming things it's one of the two big problems in computer science :P
@columbus8myhw
@columbus8myhw 7 years ago
It gained weight.
@williamwilliams1000
@williamwilliams1000 7 years ago
So my little pony, what's the appeal?
@-beee-
@-beee- 1 year ago
Keep coming back to this series and sharing it with so many people. This whole channel is truly a gift. Thank you so much for making these!
@Jessar16
@Jessar16 7 years ago
Question: I have synesthesia, so numbers have colors by association. I have a mathematical language that can represent massive numbers compactly, similar to the squishification formula you showed. For instance: 784 would be represented by a square green pixel, a square purple pixel, and a square orange pixel (3s and 7s are Green, 3s are round and 7s are square, and Fours are an Orange square pixel; 8 is represented by a Purple round pixel). It's super hard to explain the language and math, so I'm sorry if I failed to communicate it properly. If you google "ChromaRythmatics" you will find a lot of my formulas and the basis for how it works. It can also display time on a base-12 clock in color using ChromaRythmatics within a colored lemniscate with 2/3 loops, 2 for AM and 3 for PM. Once again, sorry for my failures and weakness with the English language; if anyone can understand my English I can answer questions.
@shans2408
@shans2408 7 years ago
Sounds interesting. I'd like to know more, please.
@Jessar16
@Jessar16 7 years ago
Thank you! Well, again I'll apologize for my lacking English just in case I make errors. Zero is an Orange round pixel, One is an Orange line, 4 is an Orange square pixel. Two is a Purple round pixel, 8 is Purple and square. 3 is round and Green, 7 is Green and square. 5 is a Yellow square. 6 is a Blue square. 9 is a Red square. Using these new representations for the numbers 0-9, the handwriting errors which are the initial reason for so many calculations should be nearly abolished - if not, at least 1/3980th the calculations, but my math could definitely be wrong. Also, computations should speed up exponentially in comparison to just black and white, since it's reading so few pixels for each number. I don't know how to program, but I think ChromaRhythmatics could be applied to a neural network to minimize errors and learning times. It could also be used in clocks and watches to represent time, within a colored lemniscate (infinity symbol). For example: "It is 3:29. In the first loop of the lemniscate, color a small circle Green. In the second loop put a Purple circle above a Red square. If it is 3:29 PM, extend another loop of the lemniscate to separate the Purple circle (2) and the Red square (9)." This can also be used to teach math, having the AM lemniscate represent 3:29 (ratios) or the PM lemniscate representing 3/29 (fractions).
Sorry for rambling and being all over the place. I'm not great at explaining myself in English, but it's a fun and quick way to do a lot of math. Or maybe just a potential tool for someone else much smarter than me to utilize.
@abcdxx1059
@abcdxx1059 5 years ago
@@Jessar16 But you will run out of colors. It seems like you're thinking differently, which could be good or not. Mind explaining more?
@suryanshvarshney111
@suryanshvarshney111 1 month ago
"No, bad computer! What you gave me is utter trash" - me whenever I'm programming a model
@terence0till
@terence0till 8 months ago
This is so, so much better information visualization than any of my teachers ever had! Plus your calm voice and humour. I just like it!
@acorn1014
@acorn1014 7 years ago
13:33, I agree with the network on this one. That is a 4. No question.
@xera5196
@xera5196 7 years ago
A Corn Looks like 7 to me
@manioqqqq
@manioqqqq 1 year ago
@@xera5196 You mean the immediate one at the timestamp. Mr. Corn means the one 2 after it.
@renner12321
@renner12321 7 years ago
First of all, I love your videos (and podcast)! :) As always, you have great animations that really support the understanding! Secondly, I think your side note on biological neurons is not entirely correct. It is true, that if a neuron "fires", i.e. emits an action potential (AP), the amplitude of the electrical signal will always be the same (so basically no amplitude = 0, high amplitude = 1). However, by changing the frequency or firing rate of emitting these action potentials (basically the rate of emitting 1's and 0's), different stimulus intensities can be encoded (more frequent APs correspond to a higher intensity stimulus, less frequent APs correspond to a lower intensity stimulus). I would therefore argue that the "activation" of a biological neuron is also continuous and not necessarily binary. Of course this has nothing to do with the content of the actual video, which is a really good (and intuitive) explanation on Gradient Descent (way more intuitive than when I learned that in university :D).
@3blue1brown
@3blue1brown 7 years ago
Well, I'm certainly no expert in this matter, so I'll defer to your judgement. But I always viewed the stimulus as analogous to the weighted sum for ANNs, not the activation. That the stimulation might vary continuously in biological networks, but the actual activation of the relevant neuron has basically two states. I suppose if you consider a high frequency firing to be similar to a higher intensity firing, in that sense it could have a more continuous activation, but that does feel a bit different in character. Either way, thanks for the input!
@Kowzorz
@Kowzorz 7 years ago
I wonder if that strobey kinda firing is useful in networks that loop upon themselves, perhaps to help regulate path traversal rate since I understand that biological neurons learn every firing.
@NilesBlackX
@NilesBlackX 7 years ago
3Blue1Brown it's kind of like with an Arduino, where you can emulate an analog signal by switching a digital signal at high frequencies. Conversely, biological neurons also fire in an additional dimension - time - which has a non-trivial impact on the function of the network... Which after trying to compress into a KZbin comment, I realize is beyond the scope of the medium. But yeah, extended firing can even bring other neurons over their activation threshold even if a shorter pulse wouldn't, which means it has a direct effect on the function of the network.
@Widixmilez
@Widixmilez 7 years ago
I'd go with renner's side of the discussion. In general, what's considered an "all or nothing" event is the onset of an action potential, but the activity of the neuron itself is much more accurately described as proportional to its firing rate (the number of action potentials it fires in a given period of time, generally expressed per second) than as simply active or inactive. What's more, the firing frequency that the axon terminal develops directly determines how much neurotransmitter will be released, which, in turn, will determine the firing rate of the post-synaptic neuron. There's a lot more to this, but I'd like to keep it short; it can get messy real quick.
@jacobsternig3580
@jacobsternig3580 6 years ago
In honest truth, Yago Pereyra, I would appreciate learning a little bit more about the complexity of it. Nice to see both the artificial and biological sides, and I felt you were communicating your ideas very clearly.
@joshuakahky6891
@joshuakahky6891 3 years ago
*I had always just assumed that machine learning involved an initial guess, and then a variety of random nudges that produce either a better or worse result. Nudges that produced better results would stick around, and those that produced worse would be tossed out. I figured, like evolution, the right sets of knobs and dials would just arise from the randomness after enough guesses and minor changes. But having a systematic approach to improving your neural network as quickly as possible seems so obvious, and so much better than a random change in all the variables, that it seems foolish I ever thought of "randomness leading to rightness" as the correct way things were done. You do such an amazing job at conveying these complex (complex to a non-CS person, at least) ideas in an approachable and understandable way that I think you've done more for the average person's enjoyment of math than you could ever fully grasp. Thank you so much for caring as much as you do.*
@Nono-de3zi
@Nono-de3zi 3 years ago
There is a lot of randomness, though. First, even for gradient descent, the key is to generate a random starting point. But of course the most important aspect is that gradient descent is the worst system for correctly getting optimal values. 3B1B uses it because it is easy to explain, and fairly computationally cheap. See it as "first-generation optimisation". However, to avoid local minima, you need to bring randomness back (stochastic methods). And your guess was actually totally correct: a large number of these methods imitate evolution, such as Genetic Algorithms, Evolutionary Programming, etc. And then you have really cool stuff like particle swarms, etc.
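To make the local-minima point concrete, here is a toy sketch (my own made-up cost with two valleys, not anything from the video): plain gradient descent settles into whichever valley its random starting point happens to fall near, which is exactly why random restarts and stochastic methods matter.

```python
import random

def cost(w):
    # toy cost with two valleys: minima at w = -1 and w = +1
    return (w**2 - 1) ** 2

def slope(w):
    # derivative of the cost above: d/dw (w^2 - 1)^2 = 4w(w^2 - 1)
    return 4 * w * (w**2 - 1)

def descend(w, lr=0.01, steps=2000):
    # plain gradient descent from starting point w
    for _ in range(steps):
        w -= lr * slope(w)
    return w

random.seed(0)  # fixed seed so the run is reproducible
starts = [random.uniform(-2, 2) for _ in range(5)]
minima = sorted({round(descend(w), 2) for w in starts})
print(minima)  # both valleys are reached from different starts
```

Evolution-style methods (genetic algorithms, particle swarms) keep a whole population of such starting points alive at once instead of committing to a single downhill path.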
@personthehuman1277
@personthehuman1277 4 years ago
7:55 I knew you were familiar! You were one of my favorite teachers at Khan Academy. I think they should include your videos in the math and physics.
@OrigamiCreeper
@OrigamiCreeper 5 years ago
I am so happy that vsauce recommended this channel because it is amazing!