How Deep Neural Networks Work

1,507,547 views

Brandon Rohrer

7 years ago

Part of the End-to-End Machine Learning School Course 193, How Neural Networks Work at e2eml.school/193
Visit the blog:
brohrer.github.io/how_neural_...
Get the slides:
docs.google.com/presentation/...
Errata
3:40 - I presented a hyperbolic tangent function and labeled it a sigmoid. While it is S-shaped (the literal meaning of "sigmoid"), the term is generally used as a synonym for the logistic function, so the label is misleading. It should read "hyperbolic tangent" (see the comparison sketch just below these errata).
7:10 - The two connections leading to the bottommost node in the most recently added layer are shown as black when they should be white. This is corrected at 10:10.
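For readers who want to see the distinction in the first erratum concretely, here is a small Python check (an illustrative addition, not from the video) comparing the logistic function with the hyperbolic tangent:

    # Both curves are S-shaped, but the logistic squashes values into (0, 1)
    # while the hyperbolic tangent squashes them into (-1, 1).
    import math

    def logistic(a):
        return 1.0 / (1.0 + math.exp(-a))

    for a in (-4.0, -1.0, 0.0, 1.0, 4.0):
        print(f"a = {a:+.1f}   logistic = {logistic(a):.3f}   tanh = {math.tanh(a):+.3f}")

    # tanh is a shifted and rescaled logistic: tanh(a) == 2*logistic(2*a) - 1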

Comments: 865
@flavialan4544 3 years ago
This should be recommended as the first video to watch when it comes to learning neural networks.
@DR-bq4ph 1 year ago
Yes
@ckpioo 1 month ago
Yes, I agree, but for simplicity's sake he should have done 0 to 1, with 0 being black, 1 being white, and 0.5 being grey, because almost everyone follows that pattern. For new learners it's a bit harder to switch from thinking about -1 to 1 instead of 0 to 1.
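Switching between the two conventions is just a linear rescaling; a tiny illustrative sketch (not from the video):

    # Video convention: -1 = black, +1 = white, 0 = gray.
    # Common convention: 0 = black, 1 = white, 0.5 = gray.
    def to_unit_range(p):       # [-1, 1] -> [0, 1]
        return (p + 1.0) / 2.0

    def to_signed_range(p):     # [0, 1] -> [-1, 1]
        return 2.0 * p - 1.0

    print([to_unit_range(p) for p in (-1.0, 0.0, 1.0)])     # [0.0, 0.5, 1.0]
    print([to_signed_range(p) for p in (0.0, 0.5, 1.0)])    # [-1.0, 0.0, 1.0]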
@danklabunde 4 years ago
I've been struggling to wrap my head around this topic for a few days, now. You went through everything very slowly and thoroughly and I'm now ready to dive into more complex lessons on this. Thank you so much, Brandon!
@BrandonRohrer 2 years ago
I'm very happy to hear it :)
@biokult7828 7 years ago
"Connections are weighted, MEANING".... Holy fuck.....after viewing numerous videos from YouTube, online courses and Google talks.... (often with comments below saying "thanks for the clear explanation")....this is the FIRST person I have EVER seen who has actually explained what the purpose of weights is....
@Tremor244 6 years ago
I feel the same, even though I still can't completely understand how weighting works :/
@garretthart4883 6 years ago
Tremor244 I am by no means an expert, but weighting is what makes the network "learn" to be correct. By changing the weights it changes the output of each neuron and eventually the output of the network. If you tune the weights enough you will eventually get an output that is what it is supposed to be. I hope this helps.
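To make that concrete, here is a minimal sketch (illustrative numbers only, not the video's code) of a single neuron as a weighted sum pushed through a squashing function; "learning" is nothing more than nudging the weights until the output comes out right:

    import math

    def neuron(inputs, weights):
        # weighted sum of the inputs, squashed into (-1, 1) by tanh
        total = sum(x * w for x, w in zip(inputs, weights))
        return math.tanh(total)

    inputs  = [1.0, -1.0, 1.0]        # e.g. pixel values
    weights = [0.2, -0.5, 0.9]        # these are what training adjusts
    print(neuron(inputs, weights))    # change any weight and the output changes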
@LuxSolari 6 years ago
I don't work with neural networks but with other types of machine learning. But weighting is more or less the same in all these fields. You want a system that, provided with an input (an image, for instance), achieves its classification as the output. For instance, you have a scene (input) and you want to know if it's from a vacation in the mountains or at the beach (a classification, i.e. the output). So you pass the image through a set of filters: (1) does the image have umbrellas? (2) does it have clouds? (3) is there a lot of blue? (4) is there a lot of brown? etc. If the image passes a specific combination of filters, there is a greater probability that the image is of a specific type (for instance, if the image (1) has umbrellas, (3) is blueish and isn't (4) brownish, it's more likely to be from the BEACH).

But how much more likely? That's where the WEIGHTING comes into play. Through machine learning we want to calculate coefficients (weights) that state a sort of likelihood of an image passing a filter, given its type (for instance, if it has umbrellas there's a probability of 0.9 out of 1 (90%) that it is from the beach and not from a mountain, but if there's a lot of blue maybe only 0.6 of those images are from the beach, and so the WEIGHT IS LIGHTER. That means that if the image passes a filter of COLOR BLUE it is likely to be from a BEACH, but if it passes a filter of UMBRELLAS it is EVEN MORE LIKELY). Weights, then, are a parameter of RELEVANCE of each of the selected filters for achieving the correct classification.

So we make the machine learn from LOTS (thousands, perhaps) of images that we KNOW are from the beach or the mountains. One image from the beach has umbrellas, so the classification through the filters was correct, and the WEIGHT for the umbrellas is increased. But if there is an image of the mountains with umbrellas and the program says it's from the beach, the weight goes down for the umbrellas. When we do this with a lot of images, the weights end up FINE TUNED to classify correctly most of the time (if the filters are any good... if we chose the wrong filters from the beginning, there's a chance the classifier won't get any better even when fed lots of images. That could also happen if the training images are biased, i.e. if they don't represent the real set of images that we want to classify). I hope this works better for you!
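A toy version of the beach-vs-mountain analogy above, with made-up numbers, just to show where the weights sit:

    # Each filter result is multiplied by a learned weight, and the weighted
    # evidence is summed into a "beach" score; training nudges the weights up
    # or down each time an example image is classified correctly or incorrectly.
    features = {"umbrellas": 1, "clouds": 0, "lots_of_blue": 1, "lots_of_brown": 0}
    beach_weights = {"umbrellas": 0.9, "clouds": 0.1,
                     "lots_of_blue": 0.6, "lots_of_brown": -0.7}

    score = sum(features[name] * beach_weights[name] for name in features)
    print("beach score:", score)      # 1.5 -> leans strongly toward "beach"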
@anselmoufc 6 years ago
If you have had a course on linear regression, you will recognize that weights are equivalent to parameters. They are just "free variables" you adjust in order to match inputs with outputs. In one-dimensional linear regression, the parameters are the slope and offset of a line; you adjust them so that the distance between the line and your points (your training examples) is the least. Neural networks use the same idea as statistical regression. The main difference is that neural networks use a lot of weights (parameters), and for this reason you have to care about overfitting. This in general does not happen in linear regression, since the models are way more parsimonious (they use only a few parameters). The use of a lot of weights is also the reason why neural networks are good general approximators; the large number of weights gives them high flexibility. They are like bazookas, while statistical regression is more like a small gun. The point is that most of the time you only need a small gun. However, people like to apply neural networks to problems where linear regression would do a good job, since NNs are "sexier".
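To ground the regression analogy, a minimal sketch of fitting the two "weights" of a one-dimensional linear model, the slope and the offset, by ordinary least squares (made-up data):

    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]                  # lies exactly on y = 2x + 1

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    offset = mean_y - slope * mean_x
    print(slope, offset)                       # 2.0 1.0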
@madsbjerg8186 6 years ago
+Esteban Lucas Solari I want to let you know that I love you for everything you just wrote.
@heyasmusic7553 9 months ago
I watched your videos 3 years ago. It's almost nostalgic. You may not see this, but you're one of the reasons I kept moving forward with machine learning.
@BrandonRohrer 9 months ago
I legit cried a little bit. Thank you for this.
@mikewen8216 7 years ago
I've watched many videos and read many blogs and articles, you are literally the best explainer at making these intuitive to understand
@behrampatel3563 7 years ago
I agree. The penny dropped for me today with this video. Thank you so much, Brandon.
@a.yashwanth 4 years ago
3blue1brown
@klaudialustig3259 7 years ago
I already knew how neural networks work, but next time someone asks me, I'll consider showing him or her this video! Your explanation is visualized really nicely.
@rickiehatchell8637 4 years ago
Clean, concise, informative, astonishingly helpful, you have my deepest gratitude. I've never seen anyone explain backprop as well as you just did, great job!
@AnkitSharma-ir8ud 5 years ago
Really great explanation Brandon. Also, I greatly appreciate that you share your slides as well and that too in raw (PPT) format. Great work.
@bestoonhussien2851 6 years ago
I'm in love with the way you explain things! So professional yet simple and easy to follow. Keep it up!
@claireanderson5903 4 years ago
Brilliant! I was involved 50 years ago in a very early AI project and was exposed to simple neural nets back then. Of course, having no need for neural nets, I forgot most of what I ever knew about them during the interval. And, wow, has the field expanded since then. You have given a very clear and accessible explanation of deep networks and their workings. Will happily subscribe and hope to find further edification on Reinforcement Learning from you. THANK YOU.
@sirnate9065 6 years ago
Who else paused the video at 15:10, went and did a semester of calculus, then came back and finished watching?
@muhammedsalih4846 6 years ago
Nobody
@danielschwegler5220 5 years ago
:)
@danielschwegler5220 5 years ago
Muhammed Sahli's mother
@safesploit 5 years ago
SirNate I still remember most of my calculus and have notes from prior study 😜
@SreenikethanI 5 years ago
lol
@alignedbyprinciple 6 years ago
I have seen many, many videos regarding NNs, but this is by far the best; Brandon understands the relationship between the NN and the backbone of the NN, which is the underlying math. He clearly presented them in a very intuitive way. Hats off to you, sir. Keep up the good work.
@InsaneAssassin24 6 years ago
As a chemist who just recently took Physical Chemistry, back propagation makes SOOO much more sense to me when you put it into a calculus description, rather than a qualitative one as I've been seeing elsewhere. So THANK YOU!
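In the same spirit, a hedged one-weight sketch (not the video's code) of what the calculus description boils down to: the chain rule gives the slope of the error with respect to a weight, and gradient descent steps the weight downhill:

    import math

    x, target = 1.0, 0.8      # one input and its desired output (made-up numbers)
    w, rate = 0.1, 0.5        # initial weight and learning rate

    for _ in range(100):
        y = math.tanh(w * x)                    # forward pass
        # chain rule: dE/dw = dE/dy * dy/da * da/dw,
        # with a = w * x and E = 0.5 * (y - target)**2
        grad = (y - target) * (1 - y ** 2) * x
        w -= rate * grad                        # gradient descent step

    print(round(w, 3), round(math.tanh(w * x), 3))   # the output lands near 0.8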
@jabrilsdev 7 years ago
This is probably the best breakdown I've come across, very dense, you've left no gaps in between your explanations! Thanks for the great lesson! Onward to a calculus class!
@DeltaTrader 7 years ago
Possibly one of the best explanations about NN out there... Congratulations!
@FlashKenTutorials 7 years ago
Clean, concise, informative, astonishingly helpful, you have my deepest gratitude.
@BrandonRohrer 6 years ago
You are most welcome
@coolcasper3 7 years ago
This is the most intuitive explanation of neural nets that I've seen, keep up the great content!
@fghj-zh6cv 6 years ago
This simple lecture truly makes all viewers fully understand the logic behind neural networks. I strongly recommend this video to my colleagues working in data-driven industries. Thanks.
@mdellertson 7 years ago
Yours was a very easy explanation of deep neural networks. Each step in the process was broken down into bite-sized chunks, making it very clear what's going on inside a deep neural network. Thanks so much!
@thehoxgenre 4 years ago
I was amazed by the way you talk and explain, very slowly; you stay slow until the end and you don't rush things. Bravo.
@WilsonMar1 7 years ago
I've seen a lot of videos and this is the most clear explanation. Exceptional graphics too.
@yashsharma6112 1 month ago
A very, very rare way to explain a neural network in such great depth. Loved the way you explained it ❤
@andrewschroeder4167 6 years ago
I hate how many people try to explain complicated concepts that require math without using math. Because you used clear mathematical notation, you made this much easier to understand. Thank you so much.
@abhimanyusingh4281 7 years ago
I have been trying to develop a DNN for a week. I have seen almost 100 videos, forums, and blogs. Of all those, this is the only one with calculus that made complete sense to me. You, sir, are the real MVP.
@Toonfish_ 7 years ago
I've never seen anyone explain backprop as well as you just did, great job!
@ViralKiller 1 year ago
I never understood backprop properly until this video... this was the light bulb moment.
@Gunth0r 6 years ago
My kind of teacher! Subscribed! Nice voice, nice face, nice tempo, nice amount of information, nice visuals. You'd almost start to believe this video was produced with the concepts you've talked about. And my mind was just blown. I realized that we could make a lot more types of virtual neurons and in that way outclass our own brains (at even a fraction of the informational capacity) with a multitude of task-specific sub-brains forming a higher brain that may or may not develop personality.
@kademmohammed6836 7 years ago
By far the best video about ANNs I've watched, thank you so much, really clear.
@salmamohsen8208 4 years ago
The easiest, most elaborate explanation I have found on the matter.
@intros1854 6 years ago
Finally! You are the only one on the internet who explained this properly!
@radioactium 7 years ago
Wow, this is a very simple explanation, and it helped me understand the concept of neural networks. Thank you.
@cloudywithachanceofparticl2321 6 years ago
A physics guy coming into coding, this video completely clarified the topic. Your treatment of this topic is perfect!
4 years ago
Don't worry people I asked this guy if he was a physicist
@Mau365PP 3 years ago
@ thanks bro
@SunyangFu 6 years ago
The best and easily understandable neural net video I have seen
@OtRatsaphong 4 years ago
Thank you Brandon for taking the time to explain the logic behind neural networks. You have given me enough information to take the next steps towards building one of my own... and thank you, YouTube algorithm, for bringing this video to my attention.
@dixingxu 7 years ago
Very detailed and clear explanation. Thank you for sharing! :)
@Jojooo64 6 years ago
Best video explaining neural networks I've found so far. Thank you a lot!
@Uniquecapture 5 years ago
Best explanation I’ve seen yet. Many thanks for posting.
@antwonmccadney5994 5 years ago
Holy shit! Now I... I actually get it! Thank you! Clean, concise, informative, astonishingly helpful, you have my deepest gratitude.
@MatthewKleinsmith 7 years ago
Great video. Here are my notes:
7:54: The edges going into the bottom right node should be white instead of black. This small error repeats throughout the video.
10:47: You fixed the color error.
11:15: Man, this video feels good.
21:41: Man, this video feels really good.
An extension for the interested: Sometimes we calculate the error of a network not by comparing its output to labels immediately, but by first putting its output through a function, and comparing that new output to something we consider to be the truth. That function could be another neural network. For example, in real-time style transfer (Johnson et al.), the network we train takes an image and transforms it into another image; we then take that generated image and analyze it with another neural network, comparing the new output with something we consider to be the truth. The point of the second neural network is to assess the error in the generated image in a deeper way than just calculating errors pixel by pixel with respect to an image we consider to be the truth. The authors of the real-time style transfer paper call this higher-level error "perceptual loss", as opposed to "per-pixel loss". I know this was outside the scope of this video, but it was helpful to me to write it, and I hope it will help someone who reads it.
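A rough, framework-free sketch of the perceptual-loss idea described above (an illustration only; the placeholder "feature_net" stands in for a real pretrained network, such as the VGG features used by Johnson et al.):

    import numpy as np

    def per_pixel_loss(generated, target):
        return float(np.mean((generated - target) ** 2))

    def perceptual_loss(generated, target, feature_net):
        # compare the two images in the feature space of a fixed, pretrained
        # network instead of pixel by pixel
        return float(np.mean((feature_net(generated) - feature_net(target)) ** 2))

    # toy stand-in for a pretrained feature extractor
    fake_feature_net = lambda img: np.array([img.mean(), img.std()])

    a, b = np.random.rand(32, 32), np.random.rand(32, 32)
    print(per_pixel_loss(a, b), perceptual_loss(a, b, fake_feature_net))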
@humanity3.090 7 years ago
Good to know that I'm not the only one who caught the logical mistakes. 9:14 Bottom second squash should be vertically inverted, if I'm not mistaken.
@ganondorfchampin 5 years ago
I had the idea of doing perceptual loss before I even knew the term for it, seems like it would work better for warp transforms and the like versus level transforms.
@hozelda 4 years ago
Alternatively, the edges are correct but the corresponding picture should be flipped. Regardless, the final step (output perceptron at the bottom indicating horizontal) works with either the white white edges or the black black edges scenario.
@oz459 3 years ago
thanks :)
@sali-math-arts2769 2 years ago
YES - thanks , I saw that tiny error too 🙂
@PierreThierryKPH 6 years ago
Very slowly and clearly gets to the point, nice and accessible video on the subject.
@NewMediaServicesDe 4 years ago
30 years ago, I studied computer science. We were into pattern recognition and stuff, and I was always interested in learning machines, but couldn't get the underlying principle. Now I've got it. That was simply brilliant. Thanks a lot.
@user-kr6dk7bq6b 4 years ago
It's the first time I get to understand how neural networks work. Thank you.
@centreswift3371 5 years ago
Thank you, this has been very helpful for my understanding of these networks for studying.
@Thejosiphas 6 years ago
I like how much effort you put into making these ideas accessible
@khrilibrik 6 years ago
Thanks for the clarity of your explanation
@RandyFortier 7 years ago
This is very well explained. Great job, and thanks so much! subbed
@lucazarts25 6 years ago
OMG, it's even harder than I expected! Thank you very much for the thorough and thoughtful explanation!
@lucazarts25 6 years ago
it goes without saying that I became a subscriber as well ;)
@cveja69 7 years ago
I almost never post comments, but this one deserves it :D Truly great :D
@bowbert23 1 year ago
I always had trouble intuitively understanding how a derivative works and how its calculation plays out in practical, simple terms. Little did I know, starting this video, that I'd finally understand it. Thank you! I'm relieved and feel less stupid now.
@BrandonRohrer 1 year ago
I'm really happy to hear that Bowbert. Thank you for the note.
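Riffing on the comment above: a derivative is just the slope you get from nudging the input a tiny amount, which is easy to check numerically. An illustrative aside, not from the video:

    def f(x):
        return x ** 2                 # textbook derivative: 2x

    def numerical_slope(f, x, h=1e-6):
        # nudge the input a little in both directions: rise over run
        return (f(x + h) - f(x - h)) / (2 * h)

    print(numerical_slope(f, 3.0))    # ~6.0, matching 2 * 3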
@buffnuffin 7 years ago
Thank you for sharing, Brandon! Nicely explained.
@Yoonoo 7 years ago
Great video! Definitely one of the best explanations I've seen for Deep Neural Networks.
@aseedb 6 years ago
Great explanation, thanks for sharing the slides!
@vipinsingh-dj2ty 6 years ago
Literally THE best explanation I've found on the internet.
@rohitupadhyay4665 5 years ago
Came across your blog today, reading about indexing and slicing dataframes. Great content :)
@AlbertLeng 4 years ago
Thanks Brandon for your great video which simplifies things and gives an amazing easy to follow learning experience.
@jacolansac 5 years ago
The internet needed a video like this one. Thanks a lot!
@Ivan_1791 4 years ago
Best explanation I have seen so far man. Congratulations!
@halitekmekcioglu7150 4 years ago
Thanks for the smooth narration, I liked it very much!
@AashishKumar1 7 years ago
This is one of the best explanations of neural networks I have seen.
@shivamkeshri487 6 years ago
Wow, awesome. I never found a video like this, with such a simple example and such clarity about neural networks. It's a tough topic to explain, but you make it easy... thanks!
@davidguaita 6 years ago
You're the man at explaining these things. Thank you so much.
@SyedMehdiX 4 years ago
That was flat out the best video explaining neural networks. Thank you!
@adrienr4466 5 years ago
Wow, this is really good! It's great to have such complete and clear explanations.
@MrEnkelmagnus 7 years ago
This one was great! It was exactly what i was looking for.
@Beudd 6 years ago
This video is crazy good. Truly, this is amazingly well explained from the beginning till the end. Wow, thanks a lot for such an excellent presentation.
@nakitumizajashi4047 6 years ago
Thanks for quick and simple explanation!
@user-rb5kw8cd4o 7 years ago
Detailed and easy to understand
@JeromeEtienne 6 years ago
Very clear description, without assuming previous knowledge. Thanks, I found it most helpful :)
@DanielRamBeats 6 years ago
One of the best explanations I've seen. Thanks!
@tobimayr 6 years ago
Thank you for this clear and understandable tutorial!
@abubakar205 4 years ago
One of the best teachers. You cleared all my doubts about neural networks. Thanks, sir, let me click an ad for you.
@shahidmahmood7252 6 years ago
Superb!! The best explanation of DL that I have come across after completing Andrew Ng's Stanford ML course. I am a follower now.
@jd.8019 6 years ago
Great explanation and thank you for your time and efforts. Grade A work!
@DanielMoleGuacamole 1 year ago
Holy, thank you!! I've watched like 50+ish tutorials on neural networks, but all of them explained things poorly or too fast. But you went through everything slowly and actually explained all the info clearly!!
@BrandonRohrer 1 year ago
Thank you so much! I'm happy to hear how helpful it was, and it means a lot that you would send me a note saying so.
@abhijeetbhowmik2264 6 years ago
The best backpropagation explanation on YouTube. Thank you, sir.
@alfakannan 2 years ago
You are a gifted teacher. Even I could understand.
@srinivasabugada2726 5 years ago
You explained how neural networks work in a very simple and easy-to-understand manner. Thanks for sharing!
@yassinelamarti4157 4 years ago
The best explanation of neural networks ever!
@adiabat1166 4 years ago
Brilliant video, I've subbed already. I have a question though: are the connections to the last neuron of the third layer right (7:40)? If I understood correctly, they should both be "white" connections.
@snehotoshbanerjee1938 6 years ago
All your videos on NNs are excellent!
@antoinedorman 4 years ago
This is gold if you're looking to learn neural networks!! Well done.
@suryabhusal1527 5 years ago
Precise and clear. Just wow! Great explanation. If possible, please add a video on feature extraction.
@zacharyprime1 6 years ago
Great video! I wish I had seen this before I spent hours trying to learn this.
@mukulbarai1441 3 years ago
I've watched many videos on YouTube, but none of them explained the concepts as intuitively as you did. Though I have to watch it again, as I've failed to grasp some concepts, I am sure it will become clear as I watch more.
@BrandonRohrer 3 years ago
Thanks Mukul!
@slayemin 7 years ago
This explanation of back propagation was exactly what I needed. This is very clear and I now have higher confidence in my ability to create my own ANN from scratch.
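For anyone taking the same from-scratch route, here is a compact, hedged sketch (not the video's code) of the whole loop: one hidden tanh layer, squared error, and the backpropagation recipe described in the video, applied to the classic XOR problem:

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])                     # XOR targets

    W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)   # hidden -> output
    rate = 0.5

    for _ in range(10000):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        out = np.tanh(h @ W2 + b2)
        d_out = (out - y) * (1 - out ** 2)       # chain rule at the output layer
        d_hid = (d_out @ W2.T) * (1 - h ** 2)    # ...pushed back one layer
        W2 -= rate * h.T @ d_out
        b2 -= rate * d_out.sum(axis=0)
        W1 -= rate * X.T @ d_hid
        b1 -= rate * d_hid.sum(axis=0)

    print(np.round(out, 2))   # typically ends up close to [[0], [1], [1], [0]]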
@mehranmemnai 5 years ago
Same here. My vision is clear
@brendawilliams8062 1 year ago
I just enjoy numbers. Anything to do with them is a fantastic thing.
@garretthart4883 6 years ago
This video is hands down the best intro to neural networks I've ever seen! Fantastic job. And thank you for putting links to learn more and not just leaving us hanging. I think it also just solidifies that you know what you're talking about. I look forward to more content from you. Good work!
@Vermilicious 7 years ago
Nice intro. Fairly easy to grasp the essence.
@papperme 7 years ago
Well DONE, Thanks for sharing this so clearly. I want to learn more ....
@nilaier1430 1 year ago
Watching this video on my 4 pixel screen phone. Really informative.
@Sascha8a 7 years ago
This is a really good video! For me as a complete beginner this really helped me understand the basics of neural networks, thanks!
@AviPars 7 years ago
Artem Kovera, lovely book, just downloaded. For the lazy people: amzn.to/2ntC9Zm
@d0o0b 7 years ago
Just got smarter than yesterday. Thanks for sharing! :0)
@gowthamramesh2443 7 years ago
Just what I wanted! Thank you for sharing this.
@hankil81 7 years ago
Great example with even greater explanation.
@TuanKhai298 7 months ago
Really great explanation Brandon, thank you so much!
@Spearced 6 years ago
Great video, thank you! I'm attempting to build a simple neural network for music composition in Max/MSP, and am literally starting from the ground up. Quick question: in the sigmoid function graph you show near the beginning, the output values are shown as falling between -1 and +1, whereas the formula you give later on ( 1/(1+e^(-a)) ) would give values of between 0 and +1, right? Is there a different form of sigmoid function you're using to get the values discussed in the first half of the video?
@jamescarter7577 5 years ago
This was so unbelievably good! Thank you for doing this!
@sailujshakya 6 years ago
Concise and helpful. Keep the videos coming.
@junepark1003 1 year ago
This is one of the best explanations I’ve come across. Thank you! And subscribed :)
@michaeljia9005 7 years ago
Thank you so much for these videos, they are very clear and helpful : ).
@jonasls 7 years ago
One of the best videos out there
What do neural networks learn?
27:24
Brandon Rohrer
28K views
How convolutional neural networks work, in depth
1:01:28
Brandon Rohrer
199K views
Watching Neural Networks Learn
25:28
Emergent Garden
1.1M views
ChatGPT: 30 Year History | How AI Learned to Talk
26:55
Art of the Problem
950K views
MIT Introduction to Deep Learning | 6.S191
1:09:58
Alexander Amini
139K views
The Essential Main Ideas of Neural Networks
18:54
StatQuest with Josh Starmer
858K views
How Convolutional Neural Networks work
26:14
Brandon Rohrer
954K views
The Most Important Algorithm in Machine Learning
40:08
Artem Kirsanov
191K views
How Bayes Theorem works
25:09
Brandon Rohrer
531K views
How AIs, like ChatGPT, Learn
8:55
CGP Grey
10M views
MIT Introduction to Deep Learning (2023) | 6.S191
58:12
Alexander Amini
1.9M views