How AI Learns (Backpropagation 101)

42,775 views

Art of the Problem

4 years ago

Explore the fundamental process of backpropagation in artificial intelligence (AI). This video shows how neural networks learn and improve by adapting to data during each training phase. Backpropagation is crucial for calculating errors and updating the network's weights to enhance decision-making within the AI system. This tutorial breaks down the core mechanics of neural network training, making it easier to understand for anyone interested in AI, machine learning, and network training. By understanding backpropagation, viewers can better grasp how neural networks evolve to process information more accurately. Keywords: Rosenblatt, AI, Artificial Intelligence, Neural Networks, Backpropagation, Machine Learning, Network Training, Data Adaptation, Error Calculation, Performance Tuning, Decision Making.
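
For viewers who want the mechanics in code: below is a minimal sketch of a single backpropagation step for one sigmoid neuron. It is a toy illustration only; the variable names, values, and squared-error loss are assumptions for the example, not taken from the video.

```python
import numpy as np

# Toy single-neuron training step: forward pass, error, weight update.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.8])   # inputs
w = np.array([0.1, 0.4, -0.3])   # weights (the "dimmer switches")
b = 0.0                          # bias
target = 1.0                     # desired output
lr = 0.1                         # learning rate

y = sigmoid(w @ x + b)           # forward pass
error = y - target               # derivative of 1/2*(y - target)^2 w.r.t. y
grad = error * y * (1 - y)       # chain rule through the sigmoid
w -= lr * grad * x               # nudge each weight against its gradient
b -= lr * grad                   # and the bias too
```

Repeating this step over many examples is, in miniature, what "training" means in the video.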

Comments: 131
@markheaney
@markheaney 4 years ago
This is easily the best channel on YouTube.
@TimBorny
@TimBorny 4 years ago
As always, worth the wait. You are a genius at distillation and visualization.
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
just finished this series, please help me share it: kzbin.info/www/bejne/hXe2amNje71ppsk
@robertbohrer7501
@robertbohrer7501 4 years ago
This is the best explanation of neural networks I've seen by far, and I've seen most of them.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
thrilled to hear this
@robosergTV
@robosergTV 3 years ago
3Blue1Brown is on par, I'd say.
@bardes18
@bardes18 4 years ago
This channel has more than lived up to its name! Turning technical explanations into beautiful art is perhaps the most elegant and pleasant way to abstract and teach complex subjects.
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
just finished this series, please help me share it: kzbin.info/www/bejne/hXe2amNje71ppsk
@austinvw1988
@austinvw1988 15 days ago
WOW!! This is the only video that I've watched that made me finally get it. The inclusion of the physical dimmer switch and weights in the neural net made me finally start to grasp this concept. Thank You! 👏
@ArtOfTheProblem
@ArtOfTheProblem 15 days ago
So so happy to hear this, glad I took the time. I need to get this video out there more
@Aksahnsh
@Aksahnsh 4 years ago
I just don't understand why this channel is not popular.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
I know, I kinda stopped asking myself. I know it's due to algorithm changes in some way, because my videos don't even reach subscribers much at all.
@Aksahnsh
@Aksahnsh 4 years ago
@@ArtOfTheProblem True, even I didn't get it in my recommendation feed. I only realized there'd been no new video from you for a long time and had to open your channel to find it manually. Clicked the bell icon now, though.
@ByteNishi
@ByteNishi 4 years ago
@@ArtOfTheProblem Please, don't get disheartened. I really love your videos and eagerly wait for new ones :)
@roygalaasen
@roygalaasen 2 years ago
No truer words have been spoken. It baffles me, as these videos are at least on the level of other highly popular science/math YouTubers. It feels kind of unfair. Even the videos made 8+ years ago are pieces of masterful art. Did any of the other YouTubers even exist back then? (I guess some did.)
@roygalaasen
@roygalaasen 2 years ago
@@ByteNishi I am praying for the same. I am happy with the once-a-year schedule; at least there is something. Edit: I know that is a bit of an exaggeration. There are at least 4 videos per year, which seems close to what 3b1b does nowadays as well.
@TimBorny
@TimBorny 4 years ago
Seriously impressive. As someone currently applying to masters degrees in science communication, you are an inspiration. While a personal inquiry within a public forum is generally not advisable, I'm compelled to wonder if you'd be willing to be available for a brief conversation.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
I appreciate hearing this. You can reach me britjcruise@gmail.com
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
just finished this series, please help me share it: kzbin.info/www/bejne/hXe2amNje71ppsk
@idiosinkrazijske.rutine
@idiosinkrazijske.rutine 4 years ago
The highlight of this day
@baechlio
@baechlio 4 years ago
Yay!!! My favourite channel finally uploads again... To be honest, the quality of your videos makes the wait worth it.
@yomanos
@yomanos 4 years ago
Brilliant video, as always. The part on deep neural networks was explained really well.
@kriztoperurmeneta7089
@kriztoperurmeneta7089 4 years ago
This kind of content is a treasure.
@iMamoMC
@iMamoMC 4 years ago
This video was great! What an awesome introduction to deep learning ^^
@ssk081
@ssk081 3 years ago
Great explanation of why we use a smooth activation function
@ilovett
@ilovett 4 years ago
This could be a Netflix series. Bravo.
@raresmircea
@raresmircea 4 years ago
This, along with everything else on this channel, is fantastic material for schools. I hope it gets noticed by teachers
@KipColeman
@KipColeman 5 months ago
College IT professor here... we are noticing! :)
@mehdia5176
@mehdia5176 4 years ago
Beautiful work coming from a beautiful biological neural network about the beauty of artificial neural networks.
@NoNTr1v1aL
@NoNTr1v1aL 1 year ago
Absolutely brilliant video!
@rj8875
@rj8875 4 years ago
After reading about TensorFlow all day, you just inspired me to go deeper into this subject. Thank you.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
woo hoo!
@KhaliliStudios
@KhaliliStudios 4 years ago
I’m always very impressed at the novel approach to teaching these subjects - another hit, Brit!
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
Thank you for your ongoing feedback. I worked super hard on this one.
@chris_1337
@chris_1337 4 years ago
Fantastic work!
@jayaganthan1
@jayaganthan1 1 year ago
Just wow. Awesome explanation.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel" SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1 WATCH AI series: kzbin.info/aero/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ
@TheFirstObserver
@TheFirstObserver 2 years ago
This is a well-done visual representation of artificial neural networks and how they compare to biological ones. The only thing I might add is that the real reason the "gradual" activation functions mentioned in the latter half of the video are so useful is that they are differentiable. Differentiable functions are what truly allowed backpropagation to shine, since the chain rule lets the error of a neuron be determined from the error of the neurons following it, rather than calculating each neuron's error from the output directly.
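
To make the chain-rule point above concrete, here is a minimal two-layer sketch in which the hidden layer's error is computed from the output layer's error rather than from the output directly. The shapes, seed, and target values are illustrative assumptions, not anything from the video.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x  = rng.standard_normal(4)        # input
W1 = rng.standard_normal((3, 4))   # input -> hidden weights
W2 = rng.standard_normal((2, 3))   # hidden -> output weights

h = sigmoid(W1 @ x)                # hidden activations
y = sigmoid(W2 @ h)                # network output
t = np.array([1.0, 0.0])           # target

delta_out = (y - t) * y * (1 - y)              # output-layer error
delta_hid = (W2.T @ delta_out) * h * (1 - h)   # hidden error via chain rule

grad_W2 = np.outer(delta_out, h)   # gradients used to update each layer
grad_W1 = np.outer(delta_hid, x)
```
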
@ccc3
@ccc3 3 years ago
Your videos are great at making someone more curious about a subject. They have the right balance of simplification and complexity.
@ArtOfTheProblem
@ArtOfTheProblem 3 years ago
Appreciate the feedback, that's what I'm looking to do with these videos. Stay tuned for the next in this series; it took me a long while to write.
@elektrisksitron9054
@elektrisksitron9054 4 years ago
Another amazing video!
@CYON4D
@CYON4D 4 years ago
Excellent video as always.
@zyugyzarc
@zyugyzarc 1 year ago
Now that's a brilliant explanation of neural networks. Better than anything I've ever seen.
@ArtOfTheProblem
@ArtOfTheProblem 1 year ago
glad you found it
@poweruser64
@poweruser64 4 years ago
Wow. Thank you so much for this
@interspect_
@interspect_ 4 years ago
Great video as always!!
@midhunrajr372
@midhunrajr372 4 years ago
What a nice presentation.
@acidtears
@acidtears 4 years ago
Great video! Do you have any idea how these types of Neural Networks would respond to visual illusions? I'm writing my thesis about Neural Networks and biological plausibility and realized that there seems to be a disconnect between human perception and the processing of Neural Networks. Either way, incredibly informative.
@fungi42021
@fungi42021 3 years ago
Always looking for new content to watch on this topic. Great channel.
@ArtOfTheProblem
@ArtOfTheProblem 3 years ago
I'm so happy you found this series, as it isn't ranking well yet. I have more videos coming out in this series soon.
@ByteNishi
@ByteNishi 4 years ago
Love your videos; can you please post videos more often? Thanks, your videos are always worth the wait.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
Thanks so much. I can't possibly post more often, but what I can do is promise to continue for another 10 years :)
@mridhulml9238
@mridhulml9238 2 years ago
Wow, this is really, really great... you are really good at explaining.
@DaveMakes
@DaveMakes 4 years ago
great work
@lalaliri
@lalaliri 3 years ago
amazing work! thank you
@ArtOfTheProblem
@ArtOfTheProblem 3 years ago
appreciate the feedback
@srabansinha3430
@srabansinha3430 4 years ago
As a medical student studying neural anatomy and physiology, this is a whole new perspective to me!!! Keep teaching us more!! You are the best teacher :)
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
this means a lot, thanks for sharing
@sumitlahiri209
@sumitlahiri209 4 years ago
Amazing Video. It was really worth the wait. I have watched all your videos. Just awesome I would say. Best channel for inspiring computer science enthusiasts
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
that's really cool to hear you've watched them all. thanks for sharing
@sumitlahiri209
@sumitlahiri209 4 years ago
@@ArtOfTheProblem I watched all of them. They inspired me to take up computer science. I really love the video on Turing machines. I share your videos in circles as well.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
@@sumitlahiri209 you could offer me no higher compliment
@shawnbibby2934
@shawnbibby2934 4 months ago
The term Distributed Representation, when compared to musical notes, makes it seem like it has its own image resonance or signature frequency, as if we really are seeing or feeling the totality of the image of the firing neurons. We seem to be addicted to understanding perception from a human point of view; imagine if we could begin to find translations to see it from animal points of view, from different sensory combinations, and from different layered senses. The potential is infinite. I like the addition of the linear learning machine versus one that forgets and uses feelings. It seems that by combining both memory styles you would have more unique potentialities in the flavor pot of experiences, especially when the two interact with each other, not to mention the infinite different perspectives they would each carry while traveling through time, over small and large epochs. I keep coming back to the encryption/decryption videos, on how strong encryption requires complete randomness, and how the baby's babbling was seemingly random in nature, which begs the question: was it truly random, or could we simply not see the pattern from our limited perspective? What are the scale and size of the pattern? And what conceptions and perspectives need to merge to simply find the key to interpreting it?
@ArtOfTheProblem
@ArtOfTheProblem 4 months ago
Yes, I'd say "feeling"
@yagomg7790
@yagomg7790 4 years ago
Best explanation on YouTube. Keep it up.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
appreciate the feedback
@karolakkolo123
@karolakkolo123 4 years ago
Wow! The most amazing explanation on the internet, probably. Will actual calculations be covered in the series (e.g. backpropagation calculus), or will the series be mostly conceptual? (Anyway, I'm sure it will be interesting and of unmatched quality.)
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
Great question, and thank you. No more details on backprop calculations (there are lots of good videos on that), in order to focus on other key insights. Stay tuned!
@username4441
@username4441 4 years ago
11:49 And the narration model took how long to train?
@shawnbibby2934
@shawnbibby2934 4 months ago
I would also love to see all the terminologies used together and defined in a single video, such as bit, node, neuron, layer, weight, deep learning, entropy, capacity, memory, etc. I am trying to write them down myself as a little glossary. Their meanings are so much greater when they are grouped together.
@ArtOfTheProblem
@ArtOfTheProblem 4 months ago
Thank you! I was thinking of making a super edit of this series, just need to scope it correctly...
@solsticeprojekt1937
@solsticeprojekt1937 8 months ago
Hi! Three years late, but at 0:26 where you say "feelings", you describe something much better explained as "realizations". The answer to the "why?" of this lies behind the saying "an image speaks a thousand words". The part that takes care of logic works in steps, sequentially, and can analyse the "whole" of a realization, just as we can put feelings and ideas into words. This works both ways, of course, but the path from words to realizations is a much, much slower one.
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
Took 2 years to finish this one, finally live. Would love your feedback: kzbin.info/www/bejne/hXe2amNje71ppsk
@thisaccountisdead9060
@thisaccountisdead9060 4 years ago
I'm not an expert or anything, but I had just been looking at networks. I was interested in the Erdős formula:

Erdős number = ln(population size) / ln(average number of friends per person) = degrees of separation

For example, it is thought there are something like 6 degrees of separation, with an average of 30 friends per person among the global population. But I was also looking at Pareto distributions as well:

1 - 1/(Pareto index) = ln(1 - P^n) / ln[1 - (1 - P)^n], where P relates to the proportion of wealthiest people and (1 - P) is the proportion of wealth they have. For example, if 20% of people have 80% of the wealth then P = 0.2 and (1 - P) = 0.8, with n = 1 (but n can be any number; if n = 3 it gives 1% of people with 50% of the wealth), and the Pareto index would be 1.161. Whether it was a fluke I don't know; I did derive the formula as best I could rather than just guessing. But it seemed as though the following was true:

1 - 1/(Pareto index) = 1/(Erdős number)

meaning that Pareto index = ln(population size) / [ln(population size) - ln(average number of friends per person)], suggesting that the more friends people have on average, the lower the wealth inequality would be. Which I thought was a fascinating idea...

...But it also seemed as though the wealthiest actually had the most 'friends' or 'connections'. So the poorest would have the fewest connections while the wealthiest would have the most; in effect, poor people would be channeling their attention toward the wealthiest. The top 1% would have an average of around 2,000 connections each (and a few million dollars) while the poorest would have as few as 1 or 2 connections each (with just a few thousand dollars, based on a share of $300 trillion). Maybe in a neural network the most dominant parts of the brain could be the most connected parts? As I say, I am not an expert; I was just messing around with it.
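
The commenter's formulas are easy to check numerically. A quick sketch using the figures stated in the comment (which are the commenter's assumptions, not anything from the video):

```python
from math import log

population = 7.8e9                         # rough global population
friends = 30                               # average friends per person
erdos = log(population) / log(friends)     # ~6.7 degrees of separation

P, n = 0.2, 1                              # 20% of people hold 80% of wealth
rhs = log(1 - P**n) / log(1 - (1 - P)**n)  # equals 1 - 1/(Pareto index)
pareto_index = 1 / (1 - rhs)               # ~1.161, matching the comment
print(round(erdos, 1), round(pareto_index, 3))
```
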
@Arifi070
@Arifi070 4 years ago
Great work! However, although the artificial neural network was inspired by the workings of our brains, visualizing the network inside a head can give the wrong idea that the human brain works that way. In fact, it is not like a feed-forward neural network. [Just a side note]
@KalimbaRlz
@KalimbaRlz 3 years ago
Excellently explained
@ArtOfTheProblem
@ArtOfTheProblem 3 years ago
thanks for feedback, have you watched the whole series?
@KalimbaRlz
@KalimbaRlz 3 years ago
@@ArtOfTheProblem yes I did! Thank you for all the information.
@trainer1kali
@trainer1kali 4 years ago
A message to the ones responsible for choosing the background music to match the mood: "you're pretty good". P.S. In fact, you are AWESOME.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
thank you, glad it's working
@zuhail1519
@zuhail1519 1 year ago
I want to mention here that I watched the video halfway and I must say, I am a complete noob when it comes to biology, but without making things complicated for a person like me, you made it so incredibly clear that I can appreciate how amazingly our brain works and generalizes stuff, especially with your example of the short story (can you please mention the author's name? I couldn't quite catch it and the captions aren't clear either). Thank you for making this content, I'm grateful. Jazakallah hu khayr.
@ArtOfTheProblem
@ArtOfTheProblem 1 year ago
Thrilled to have you; I'm still working on the final video in this series, so please stay tuned. Was it "Borges"?
@zuhail1519
@zuhail1519 1 year ago
@@ArtOfTheProblem Already have my seatbelt fastened !
@KDOERAK
@KDOERAK 1 month ago
Simply excellent 👍
@ArtOfTheProblem
@ArtOfTheProblem 1 month ago
thanks, stay tuned for more!
@iamacoder8331
@iamacoder8331 1 year ago
Very good content.
@ArtOfTheProblem
@ArtOfTheProblem 1 year ago
More on the way, thanks
@slazy9219
@slazy9219 1 year ago
Holy shit, this is some next-level explanation. Thank you so much!
@ArtOfTheProblem
@ArtOfTheProblem 1 year ago
super glad you found it, still working on this series
@harryharpratap
@harryharpratap 4 years ago
Biologically speaking, what are the "weights" inside our brains? What physical part of the brain do they represent?
@emanuelmma2
@emanuelmma2 2 months ago
Amazing Video.
@ArtOfTheProblem
@ArtOfTheProblem 1 month ago
would love if you could help share my newest video: kzbin.info/www/bejne/a3bGgmR_mKqAfLM
@ahmadsalmankhan3200
@ahmadsalmankhan3200 5 months ago
Amazing
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
:))
@lakeguy65616
@lakeguy65616 2 years ago
So adding hidden layers allows a NN to solve more complex problems. How many layers is too many? You are limited by the speed of the computer training the NN, and I assume too many layers allow the NN to "memorize" instead of generalizing. Are there any other limits on the number of hidden layers? What about the number of neurons/nodes per layer? Is there a relationship between the number of inputs and the number of neurons/nodes in the network? What about the relationship between the number of rows in your dataset versus the number of columns? As I understand it, the number of rows imposes a limit on the number of columns, and adding rows to your dataset allows you to expand the number of columns too. Do you agree, or do you have a different understanding? OUTSTANDING VIDEOS! John D Deatherage
@ArtOfTheProblem
@ArtOfTheProblem 2 years ago
Super great questions; I hope others can chime in. I just wanted to add that in theory you only need one hidden layer, if it were really, really wide, to solve any problem (see the universal approximation theorem), but in practice that doesn't work. And yes, if the network is too deep it will be too difficult to train, so you need a sweet spot. When it comes to how wide those layers need to be, the most interesting research to me is how narrow you can make them in order to force the network to abstract (compress/generalize) the information in the middle. You can also make the hidden layers very wide, which will cause the network to memorize instead of generalize. I didn't quite follow your column/row question though.
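
A toy way to play with the depth/width trade-off described in this reply: scikit-learn's MLPClassifier is used here purely as an illustration, and the dataset choice and layer sizes are arbitrary assumptions, not a recipe.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Same data, three architectures: wide, deep-and-narrow, and a bottleneck.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in [(100,),      # one wide hidden layer
               (8, 8, 8),   # narrower but deeper
               (2,)]:       # tight bottleneck: forces compression
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(hidden, round(clf.score(X_te, y_te), 3))
```
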
@user-eh9jo9ep5r
@user-eh9jo9ep5r 3 months ago
What input could be given to restore neurons to a basic correct state, so that they give correct outputs?
@user-eh9jo9ep5r
@user-eh9jo9ep5r 3 months ago
What if one layer's behaviour is different from expected and is not recognised as correct, but the other layers give outputs from sorts of geometrical inputs at the level of sense impulses? What can be done to filter inputs and receive correct outputs?
@robosergTV
@robosergTV 3 years ago
Isn't the universal approximation theorem the mathematical proof that a NN can solve and model any problem/function?
@ArtOfTheProblem
@ArtOfTheProblem 3 years ago
Right, but that is only in theory; in practice the number of neurons required is so large that it is practically impossible to implement.
@robosergTV
@robosergTV 3 years ago
@@ArtOfTheProblem True, but at the end of the video you said something like "we still don't have a mathematical proof of how NNs work".
@user-eh9jo9ep5r
@user-eh9jo9ep5r 3 months ago
If a network receives input and gives output, and the answer isn't clear and is recognised as not correct, could that be recognised as a network disease? And if so, could it be recognised as a consequence of influence from the outputs of another network or networks?
@abiakhil69
@abiakhil69 4 years ago
Consensus mechanism?
@columbus8myhw
@columbus8myhw 4 years ago
Link to the Hinton lecture?
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
kzbin.info/www/bejne/sJ2canyQq7xqqKc
@Trombonauta
@Trombonauta 3 months ago
1:13 Cajal is pronounced more closely to /kah'al/ than to that, FYI.
@KittyBoom360
@KittyBoom360 4 years ago
This might be more of a tangent to your great video, but my understanding is that intuition and logic aren't really distinct things. The former is merely more hidden in deep webs of logic while the latter is the surface or what is most obvious and intuitive. Ah, see the paradox? It's a false dichotomy resulting from awkward folk terms and their common definitions. I was always like the teacher's pet in college courses of logic and symbolic reasoning while majoring in philosophy maybe partly because anything that was labeled "counter-intuitive" was just something I would never accept until I could make it intuitive for me via study and understanding. But putting me and my possible ego aside, look at the example of a great mathematician such as Ramanujan and how he described his own process of doing math while in dream-like states. His gift of logic was indeed his gift in intuition, or vice versa, depending on your definitions.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
Yes, I had a section in this video I cut which I kinda wish I had left in. It was about how intuition is the foundation out of which logic grows. Kids don't learn "words first", they learn "sense first", so mathematicians are of course guided by intuition, and then they can later prove things with logic.
@abiakhil69
@abiakhil69 4 years ago
Sir, any blockchain-related videos in the future?
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
have you seen my bitcoin video?
@abiakhil69
@abiakhil69 4 years ago
@@ArtOfTheProblem Yes sir. One of the best videos on YT.
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
@@abiakhil69 I do plan a follow up video, starting with ETH
@abiakhil69
@abiakhil69 4 years ago
@@ArtOfTheProblem Great, sir. Another great video coming. Waiting 👍.
@bicates
@bicates 1 year ago
Eureka!
@user-eh9jo9ep5r
@user-eh9jo9ep5r 3 months ago
If the sensory order was destroyed or noised, or anything like this, something like network trafficking, what needs to be done to keep the whole neural network safe?
@arty4679
@arty4679 4 years ago
Anyone know the name of the Borges story?
@ArtOfTheProblem
@ArtOfTheProblem 4 years ago
worth reading: Funes the Memorious
@Libertariun
@Libertariun 5 months ago
14:45 ... can learn to configure THEIR connections ...
@fredsmith4134
@fredsmith4134 5 months ago
It's all a chain of cause and effect from start to finish; each level or layer sharpens and zeroes in on the exact match and is refined until a result is locked in. The human brain compares past results to incoming stimuli, and the result is also linked by chains of associations with the result. Like, the result "it's a dog" comes with associations: dogs are furry, playful, dangerous, have a master, wag their tails when happy, and so on. But associations are unique to each separate mind????
@AceHardy
@AceHardy 4 years ago
👑
@kmachine5110
@kmachine5110 4 years ago
KZbin is a mind reader.
@fxtech-art8242
@fxtech-art8242 1 year ago
gpt4
@ArtOfTheProblem
@ArtOfTheProblem 1 year ago
a lot of progress since this video :)
@EvenStar303
@EvenStar303 5 months ago
After 35 seconds, you are already wrong. We do not think in sentences!!! Thinking is wordless; however, we translate our thinking into words, but this is not necessary. The point being that languaging is only necessary if we want to communicate with another person. But thinking comes first, NOT as a result of sentences. If you get good at meditation and centering yourself, you can drive your car without verbalizing what you are doing. You can make decisions and act them out without verbalization, internal or external! Language is only a descriptor, not the thinking faculty itself!!!
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
Check out the whole series, as I built up to this. I agree with you!
@escapefelicity2913
@escapefelicity2913 4 years ago
Get rid of the background noise
@escapefelicity2913
@escapefelicity2913 4 years ago
For anything expository, any background sound is unhelpful.
@vj.joseph
@vj.joseph 5 months ago
you are wrong within the first 40 seconds.
@ArtOfTheProblem
@ArtOfTheProblem 5 months ago
say more!
@joe_cock
@joe_cock 3 months ago
good sound design
How AI Learns Concepts
14:22
Art of the Problem
168K views
From Bacteria to Humans (Evolution of Learning)
16:58
Art of the Problem
40K views
Buy Feastables, Win Unlimited Money
00:51
MrBeast 2
88M views
Is the Korean even alive? | Synyptas 3 | Episode 8
24:47
kak budto
1.6M views
MIT Introduction to Deep Learning | 6.S191
1:09:58
Alexander Amini
129K views
Watching Neural Networks Learn
25:28
Emergent Garden
1.1M views
The Most Important Algorithm in Machine Learning
40:08
Artem Kirsanov
187K views
ChatGPT: 30 Year History | How AI Learned to Talk
26:55
Art of the Problem
948K views
How Intelligence Evolved | A 600 Million Year Story
15:22
Art of the Problem
189K views
What is Logic?
8:14
Art of the Problem
51K views