Please start a "really" in-depth series on Reinforcement Learning. Nobody has done it in a way it's easy to understand and I think it's an area where much has to be done yet and it's the closest thing we have to how we humans really learn.
@SirajRaval 7 years ago
seriously considering this.
@bjornsundin5820 7 years ago
Alejandro Rodriguez: his deep Q-learning video is about a reinforcement learning algorithm. I had a hard time learning from a video going that fast, though; I learned it from a bunch of different sites instead. (I agree)
@SirajRaval 7 years ago
great feedback will go slower
@normanheckscher 7 years ago
Björn Sundin: YouTube has a speed function and a pause button.
@bjornsundin5820 7 years ago
Norman Heckscher: yeah, but it's more about how much he explains different things. Sometimes he says just a few words about something important that I'd need explained a bit more clearly. Of course, it's different for different people, and I'm not telling him to change his teaching style.
@ortinsuez2052 7 years ago
Keep up the good work Siraj.
@SirajRaval 7 years ago
thanks Ntinda!
@MarkJay 6 years ago
Great talk Siraj. I always get inspired after watching your videos. Keep it up!
@grainfrizz 7 years ago
16:22 get high on math, not on drugs
@CheapBurger 7 years ago
hahahahhahahha
@GuillaumeVerdonA 7 years ago
math is a helluva drug
@SirajRaval 7 years ago
hahah always
@ortinsuez2052 7 years ago
lol. I agree.
@zinqtable1092 7 years ago
go even higher with drugs
@dishonmwabashfano3627 2 years ago
Everything is a function! Math is everywhere, math is all around us. Math is beautiful. Siraj, you are a genius bro.
@justsomerandomguy933 7 years ago
Geoffrey Hinton demonstrated the use of the generalized backpropagation algorithm for training multi-layer neural nets, but he did not invent it. The approach was developed by Henry J. Kelley and Arthur E. Bryson.
@SirajRaval 7 years ago
yes he just popularized it
@TheAizaz420 7 years ago
This is one of the best videos on the research side of deep learning and AI. Siraj, you are awesome.
@Majorityy 7 years ago
Siraj, it's a pleasure listening to you. I like your energy, what you're saying is very clear, and it's very motivating to sense such interest and passion for what you're explaining in the video. Please keep the good vibes going, awesome work. Milan
@tonycatman 7 years ago
Excellent video. I've been thinking about Hinton's comments over the last week too, and also in the context of learning from tiny data sets. I'm thinking that humans are actually rubbish at classification from small data sets, but we kid ourselves that we are good at it.
@alienkishorekumar 7 years ago
I almost thought Geoffrey Hinton was in this video.
@SirajRaval 7 years ago
he is in spirit
@y0d4 7 years ago
I liked your video because of the passion you show in ~16 min :)
@mihaitensor 7 years ago
The backpropagation algorithm was invented in the 1960s. Hinton showed in 1986 that backpropagation can generate useful internal representations of incoming data in the hidden layers of neural networks. He didn't invent backpropagation.
@apachaves 7 years ago
Excellent video again, thank you Siraj. I definitely agree we should do more exploration, and I hope one day to contribute in that sense.
@hypersonicmonkeybrains3418 7 years ago
Also you have to consider morphic resonance. Morphic resonance, Rupert Sheldrake says, is "the idea of mysterious telepathy-type interconnections between organisms and of collective memories within species."
@douglasoak7964 7 years ago
It's not sparse, it's focused. Babies start with a pre-programmed classifier: faces. Positive/negative. That's why they are so focused on faces. From this simple classifier, the baby moves on to build its own classifiers. Essentially, a general AI will be a classifier fed by another simple classifier, with the ability to build off the initial classifier to create new classifiers.
@sortof3337 6 years ago
What about babies who are born blind? So, how do they function and become conscious?
@zacharykeener1990 6 years ago
I believe this example can be expanded to a number of "pre-programmed" classifiers, e.g. those things that the senses respond to. So blind/deaf/touch as the senses and the physical world as the pre-programmed classifiers
@victorocampo5263 7 years ago
Notice me senpai!!!!
@SirajRaval 7 years ago
hi Victor!!
@whiteF0x9091 7 years ago
Great presentation! Thanks
@guitarheroprince123 7 years ago
C'mon Siraj, tomorrow is my computer architecture test and I had to study, but you dropped an awesome video.
@guitarheroprince123 7 years ago
Well, fk it, because internet > college.
@KatySei 7 years ago
Amazing video, Siraj.
@RAJATTHEPAGAL 6 years ago
Thanks, that's all the inspiration I needed... And as for backprop, yeah, just like you I too started to think maybe we're overusing it and more and more models are arching towards it. And I'm just learning now.
@y__h 7 years ago
Man, you should totally cover the impact of Google's TPU on machine-learning-specific hardware. Heck, Nvidia recently started an open-source TPU-like ML accelerator called NVDLA.
@SirajRaval 7 years ago
will consider
@RoulDukeGonzo 7 years ago
Seen the early stuff from Hofstadter? The concept-network and workspace ideas are really cool.
@larryteslaspacexboringlawr739 7 years ago
Thank you for the deep learning research video.
@shirshanyaroy287 7 years ago
I watched this video on 2x speed just like you suggested. I feel like a god.
@451shail 7 years ago
Such an interesting video!
@squirrel2770 7 years ago
Awesome, appreciate your work Siraj, inspiring! I need to get friendlier with Khan Academy and get back into math... *shudders*. Would you be able to recommend things to cherry-pick and learn for these purposes, or any order of things perhaps? Or would you consider most concepts to have dependencies that would make cherry-picking questionable? For example, would you jump straight to trying to figure out the derivative and partial derivative?
@Leon-pn6rb 7 years ago
Yo, I legit thought in the beginning that the bald guy behind him was actually a co-host, standing behind him, waiting to speak next. I looked at him for far too long before realizing he was just a photo in the article.
@giraudl 7 years ago
Really love how you explain the chain rule. 10^∞ kudos!
@SirajRaval 7 years ago
thanks Luc!
@cupajoesir 7 years ago
But is our acceleration in learning being outstripped by the acceleration of the accumulation of information, thus making it impossible to ever grasp full understanding?
@cdrwolfe 7 years ago
Great vid and interesting discussion points. For those interested in evolution and its application in the brain, I always recommend Chrisantha Fernando (now of Google) and his past work on 'Darwinian Neurodynamics'.
@Magenta1593 7 years ago
Really good video!
@ladjiab 7 years ago
Hi Siraj, I have a question: do you think it's safe to train our models on Google's Machine Learning Engine? Wouldn't that allow Google's AI to know everything about our products, since it's kind of learning from our models?
@Fuckutube547465 7 years ago
I had a feeling you would have this view on Hinton's comments. While back-propagation is extremely useful with today's processing capabilities, its real world applications weren't too great in '86. I hope that the success of the 'next backprop' won't be dependent on brute force capabilities, 30 years later. Thanks for bringing attention to this topic, looking forward to the next one!
@Visaals3 7 years ago
Yo, the line in the derivative graph wasn't tangent to the original function. I doubt anyone here doesn't know what derivatives are, but just in case, that would probably be confusing.
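The commenter's point is easy to check: a correct tangent line must touch the curve at the point of tangency and match its slope there. A minimal sketch (my own toy example with f(x) = x², not the function from the video):

```python
# Tangent line to f(x) = x^2 at x0, in point-slope form:
# tangent(x) = f(x0) + f'(x0) * (x - x0)
def f(x):
    return x ** 2

def tangent(x0, x):
    slope = 2 * x0                   # analytic derivative of x^2 at x0
    return f(x0) + slope * (x - x0)  # line through (x0, f(x0)) with that slope

# At the point of tangency the line and the curve coincide exactly:
print(tangent(1.0, 1.0))  # 1.0, same as f(1.0)
```

Near x0 the line hugs the curve (the error shrinks like (x - x0)²); a line that misses either the point or the slope is a secant, not a tangent.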
@leajiaure 6 years ago
Maybe in the short term we will use computers to augment our abilities (as we have always done with technology), but machines absolutely can and will be capable of creativity that far exceeds ours. There is no task that cannot be done better by an AI.
@Neptutron 6 years ago
What about FPGAs? Why aren't people making a big deal out of them for deep learning hardware? They'd be a perfect fit; you could just update them via software! Imagine running entire batches of neural networks in single clock cycles!
@whickked 7 years ago
Really appreciate you breaking down Deep Learning and AI concepts as well as recommending blogs, books, and articles to check out. You're the man!
@aamir122a 7 years ago
You are asking the right questions: brains do not backpropagate, and they do not require millions of examples to learn. You should really look at what Numenta and Cortical.io are doing. Setting everything else aside, just the way they encode data is what you refer to as sparse input. To get you started, here is a link to one of the papers from Cortical.io on how natural language is encoded into a sparse representation: www.dropbox.com/s/ikbe1ney8u5d35h/semantic-folding-theory-white-paper.pdf?dl=0
@alienkishorekumar 7 years ago
Learning deep learning from deeplearning.ai, and soon I'll do the fast.ai course too.
@tunestar 7 years ago
Cool, but none of them will teach you RL, which is what you should be doing according to this video.
@alienkishorekumar 7 years ago
Alejandro Rodriguez: I'm planning to take it step by step, going from first principles.
@xiobus 7 years ago
They're all good tools to learn... learn it all.
@SirajRaval 7 years ago
keep it up
@Vadatajs666 6 years ago
This is the strategy I used for my first BUGS project... fun that it's a great idea today :D Just spawn a ton of mutants until the CPU dies and choose the best. Interesting ideas emerge there about how to speed up evolution.
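"Spawn a ton of mutants and choose the best" is essentially a (1, λ) evolution strategy. A minimal sketch under an invented toy fitness function (the parameter names and values here are illustrative, not from the video or the commenter's project):

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def evolve(fitness, parent, generations=100, offspring=50, sigma=0.1):
    """(1, lambda) evolution strategy: mutate the parent, keep the best child."""
    for _ in range(generations):
        # "spawn a ton of mutants": Gaussian perturbations of the parent
        children = [parent + random.gauss(0, sigma) for _ in range(offspring)]
        # "choose the best": the fittest child becomes the next parent
        parent = max(children, key=fitness)
    return parent

# Toy fitness with its peak at x = 3; the search should climb toward it.
best = evolve(lambda x: -(x - 3) ** 2, parent=0.0)
```

Speeding this up is exactly where the interesting ideas live: adapting sigma over time, keeping the parent if no child beats it ((1+λ) selection), or evaluating children in parallel.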
@ajaymishra7212 7 years ago
For more on Differentiable Neural Computers: deepmind.com/blog/differentiable-neural-computers/ and www.nature.com/nature/journal/v538/n7626/full/nature20101.html
@skipmonday6467 7 years ago
Please, can you help me with the algorithm search engines use to classify complex user queries? Thanks in advance.
@KellenChase 7 years ago
Every single time I think to myself "Why isn't anyone talking about X or Y in ML/AI/DL research?", you come out with a well-done, entertaining, easily explained video summarizing and waxing poetic on the subject, giving me ever more rabbit holes to go down. You are awesome; thank you for doing what you do. I started a meetup in my city to discuss AI, and many of your videos will be shared. Thank you for the book reco as well; I will be listening to it on Audible at 3x. By the way, I just finished Max Tegmark's Life 3.0 and would highly recommend it if you haven't yet read it.
@jordanparker211 7 years ago
Could backpropagation be viewed as a CompSci implementation of an adaptation of linear regression with a transformation applied? (1) The gradient descent optimization of the loss function is analogous to a least-squares estimator; (2) the weights equal matrix B and the bias equals matrix A in the equation Y = A + BX + E, where E is a matrix of error terms; (3) the applied non-linearity is the transformation function in Z = g(Y). It seems like there is similar thinking in linear regression and auto-regressive time-series modeling? Please tell me if I'm wrong! P.S. love the vidz brah
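The analogy can be made concrete for a single unit: ŷ = g(wx + b) trained by gradient descent on squared error is structurally the regression Y = A + BX with a transformation g applied, and the gradient is pushed through g by the chain rule. A hedged sketch with invented toy data (scalar w and b stand in for the comment's matrices B and A):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=100)
y = np.tanh(1.5 * x + 0.5)      # noiseless data generated with known w=1.5, b=0.5

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    y_hat = np.tanh(w * x + b)   # Z = g(Y) with g = tanh
    err = y_hat - y              # least-squares-style residual
    dz = err * (1 - y_hat ** 2)  # chain rule through the nonlinearity g
    w -= lr * np.mean(dz * x)    # gradient step on the slope ("matrix B")
    b -= lr * np.mean(dz)        # gradient step on the intercept ("matrix A")
# w and b recover roughly 1.5 and 0.5
```

With g removed (identity), dz reduces to the residual and the updates are exactly online least squares; stacking such units is where backprop goes beyond the regression picture.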
@Anonymous-lw1zy 7 years ago
Hinton contributed a massive amount to NNs, but not backprop. During the 1980s upswing in NNs there was a lot of discussion (and conflicting claims) about who first used backprop for NNs. The consensus finally settled on Werbos. See this for a good historical summary: people.idsia.ch/~juergen/who-invented-backpropagation.html
@realcygnus 7 years ago
Great channel! Raj, just out of pure curiosity: do you have a computer science degree? Are you still a student? Or are you just a natural, self-taught DIYer?
@gangadharasai9372 7 years ago
Siraj, you are awesome!!!! And one question: is backpropagation inspired by the neural activity of the brain? Does backpropagation happen in our brain? I am not able to find an answer.
@natesh31588 6 years ago
Allow me to add one more option from a hardware perspective. Building on that requires clearing up some fundamental issues. You say everything is a function. The more accurate statement is that everything can be described as a function. The key word is description. Neural network algorithms are computational descriptions of how learning can be achieved to satisfy an input-output mapping. The option I propose is trying to understand the underlying physical (thermodynamic) process that we end up describing as learning. For example: a refrigerator taking in electrical energy to cool things down can be described computationally using an input-output function and implemented using a transistor circuit. I can also always build a refrigerator to take in power and cool things down. Both my circuit and the refrigerator are now doing the same thing computationally, but only one of them will actually cool things down. So why not attack the question of general intelligence the same way? Is it possible to build a hardware system that satisfies specific thermodynamic (energy/power/work/heat) conditions so that its dynamics can be described as learning? For fun, let's call this system a thermodynamic computer.
@santicomp 7 years ago
Hi Siraj, I love your videos; they are very inspiring. I hope I can finish software engineering and continue with AI applications. I have a question: what do you think about a general AI that could think like a human and generate programs automatically? Do you think a human programmer would somehow be excluded completely and lose their creative value? I understand automating things that are monotonous or that a machine could do better, but if we get replaced by having a machine do everything, where would we fit? This question came from a debate I had with a probability and statistics teacher who asked what the future of AI was. I responded by talking about present-day AI, and his response was grim, as if there were no hope in the future and we were contributing to the despair of humanity. He also said few people would be in control of AI, as in the feudal ages, and we would all be like slaves. Of course I think it's a bright future; I tried to respond positively, but he was hard to convince. I'm really intrigued by your opinion; maybe I can change his belief about the grim future he thinks is coming. Keep it up. Cheers from Uruguay, South America.
@randcontrols 7 years ago
Awesome video Siraj. What I would like to see is more collaboration between the deep learning and complex adaptive systems worlds. This video focuses on the deep learning world, so let me give a one-sentence intro to complex adaptive systems. In a complex adaptive system, complexity arises, almost from nowhere, from simple interactions between "agents". NetLogo is a very simple modeling environment with simple examples demonstrating the concept, for example how the complex flocking behavior of birds can be simulated using very simple rules. On the other end of the scale is the very ambitious www.futurict2.eu project, which aims to simulate the world to solve global socio-economic problems. Deep learning is really very exciting and I am truly amazed by its achievements. My 2 cents on the subject: 20 percent of the 20 percent that you want to move from exploitation to exploration should explore combining deep learning with complexity science.
@arzoo_singh 7 years ago
Siraj, great work. Just a small piece of feedback: speak slowly and give it some time. Let's say you're explaining regression: speak slowly and pause in between.
@chrismiles3838 5 years ago
Artificial life is the field closest to the topic you concluded is the most promising direction. It's an exciting field! I would love to see more interaction between AI and ALife. Siraj, this could possibly be an interesting topic for a video?
@milanpospisil8024 6 years ago
But unsupervised and supervised learning are sometimes tied together. For example, prediction of the future is classification on unlabeled data (you just want to predict the next state of a system using an unlabeled sequence of states in time). And I think that's what the brain does.
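The trick the comment describes, turning an unlabeled sequence into supervised training pairs by predicting the next state, can be sketched in a couple of lines (the states below are an invented toy example):

```python
# Self-supervision: an unlabeled sequence of states becomes (input, target)
# pairs for free -- the "label" for each state is simply the state after it.
def make_pairs(sequence):
    return [(sequence[i], sequence[i + 1]) for i in range(len(sequence) - 1)]

states = ["wake", "eat", "work", "sleep"]
pairs = make_pairs(states)
# [('wake', 'eat'), ('eat', 'work'), ('work', 'sleep')]
```

Any supervised learner can then be trained on these pairs, which is exactly why next-state (or next-word) prediction blurs the supervised/unsupervised line.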
@salmanrahim3540 7 years ago
Cerebrum is a decentralized platform for crowdsourced machine learning that implements multi-agent systems (based on principles of artificial life). Check it out at cerebrum.world
@rahul.chandak 7 years ago
Nice video. I just started learning ML; can you make a video explaining how to decide on the hidden layers, when to assign a base value and how to decide it, and how to assign initial weights, with some practical examples? If any link is already available, that will help too :)
@Barnardrab 7 years ago
I didn't know Steven Pinker wrote books. I think I remember seeing him in a few Big Think videos. I may pick up How the Mind Works.
@unclemax8797 6 years ago
It was mathematically proven that Kohonen maps and k-means are the same... hmm, AI reinvented the wheel! And that's the case with lots of AI tools. The problem is, if you say "I used AI and a Kohonen map" you are supposed to be a god; if you say "I used k-means" you are told you are an old-fashioned moron. Times are tough, huh?
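The equivalence the comment alludes to is easy to see in code: a self-organizing-map update whose neighborhood shrinks to just the winning unit is the online k-means update for that centroid. A minimal sketch with a 1-D grid and invented numbers:

```python
import numpy as np

def som_step(codebook, x, lr=0.5, radius=0):
    """One online update. With radius=0 only the winner moves,
    which is exactly the online k-means centroid update."""
    winner = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    for j in range(len(codebook)):
        if abs(j - winner) <= radius:     # 1-D grid neighborhood
            codebook[j] += lr * (x - codebook[j])
    return winner

codebook = np.array([[0.0], [10.0]])
som_step(codebook, np.array([2.0]))       # unit 0 wins and moves from 0.0 to 1.0
```

With radius > 0, neighboring units are dragged along too, which is the only thing separating the SOM from plain online k-means.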
@ivy3420 7 years ago
Yeah, couldn't agree more. Math is freaking beautiful. So are physics and biology and chemistry and computer science and engineering and artificial intelligence and deep learning.
@mashmoompathan2052 7 years ago
Hey Siraj, I need to select a deep learning algorithm for images (the image is basically of a computer screen, which has characters, objects of different shapes, etc.). Which algorithm do you think will be most suitable in my case? Please reply!
@ajaymishra7212 7 years ago
It seems that functionalism is a form of behaviorism. Get out from under the blanket of behaviorism and one can do some cool stuff, instead of creating metaphors that fool one into thinking it is like real intelligence (metaphors: learning, rewards, reinforcement). Computers only do what they are told; this quite simple principle doesn't let you believe the metaphors are true.
@hypersonicmonkeybrains3418 7 years ago
You say it's amazing how babies can learn from sparse unlabeled data. Isn't it possible that vast amounts of data are somehow transferred from thousands of generations of ancestors, genetically, at the DNA level? Maybe it's not unlabeled data; maybe the baby's brain has a big head start.
@moejobe 7 years ago
This might be your best video in terms of depth.
@sparax7870 7 years ago
Hey Siraj, loved your video, amazing, it's so fun. Can you please do a video on affective computing? I think if you want computers to mimic humans in terms of intelligence, then affective computing is the way. This is a nice publication from MIT; you might like it: affect.media.mit.edu/pdfs/95.picard.pdf
@javzav2026 7 years ago
What if we use that multidimensional algorithm to group together neural net connections (layers) and have them compute normally, except have the grouping be per task for a particular weight model?
@javzav2026 7 years ago
Next-level AI?
@tthtlc 6 years ago
You make me go crazy again... emotionally motivating. Good for my neural network.
@getrasa1 7 years ago
Siraj, you were talking a lot about functions and how they are everything in life. Where can I find more about them and how they relate to neural networks?
@debarokz 7 years ago
Wow... great talk!!! I wish I had enough money to become a patron... makes me feel bad.
@rgrimoldi 7 years ago
Are different "weights" and "layers" the same thing?
@rgrimoldi 7 years ago
Or is it that each layer is a function that contains its own different weights?
@shreyashervatte5495 7 years ago
Hey Siraj! Thank you so much for making videos like this. Your videos really inspire me to do more with my life... The amount of information that's out there... so much to learn... You're influencing lives out here. Keep up the good work!!
@SirajRaval 7 years ago
awesome thanks!
@Stan_144 3 years ago
Jeff Hawkins has a solution. His new book came out just yesterday.
@ayushman_sr 6 years ago
But why the partial derivative with respect to the weights? Can I understand it with simple logic? Please.
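One simple way to see it: the weights are the only knobs we can turn, so we ask how the loss responds to a tiny nudge in each weight, and that question is exactly the partial derivative ∂(loss)/∂w. A quick numeric check of the chain-rule formula on a one-weight model (toy numbers of my own choosing):

```python
# Model: prediction = w * x, loss = (w*x - y)^2.
# Chain rule: d(loss)/dw = 2 * (w*x - y) * x.
def loss(w, x=2.0, y=3.0):
    return (w * x - y) ** 2

w = 1.0
analytic = 2 * (w * 2.0 - 3.0) * 2.0             # chain-rule gradient: -4.0
h = 1e-6
numeric = (loss(w + h) - loss(w - h)) / (2 * h)  # finite-difference check
# The two agree, and stepping w against the gradient lowers the loss.
```

The inputs x and targets y are fixed data, so differentiating with respect to them would tell us nothing we can act on; the weights are what training is allowed to change.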
@AinunNajib-ec2ainun 7 years ago
WOW, just wow. I had lost faith in deep learning just before watching this video. Actually, there are many more things to optimize! Thanks for making this video.
@SirajRaval 7 years ago
so awesome!
@BFBCIE2 7 years ago
Where can I find the document he is using in the video? The info would be super helpful!
@danishahmed5068 6 years ago
The name of the video is a bit misleading. I realized it after I was 20 minutes through, meaning I lost that time :(
@I77AGIC 7 years ago
I really enjoy videos like this. You can't really find these kinds of discussions often.
@eduardmart1237 7 years ago
Please make a video about text (document) classification!
@alzheimancer 7 years ago
The best explanation of backpropagation I've ever seen.
@voidnoire7256 7 years ago
The game shown at 40:25 reminds me of kzbin.info/www/bejne/a17LZpmsrZ6WeNU
@BocaoLegal 7 years ago
Love you Siraj, you are doing a job that no one else does.
@nikhilsoni7037 7 years ago
Amazing videos. You are an inspiration. Are you coming to India soon?
@elektronik2000 7 years ago
Siraj, you've become much better at explaining!!
@tholienguitar 7 years ago
Could a video about Hinton's capsules be coming?
@spetz911 7 years ago
How is backprop Hinton's invention if he is not mentioned in the backpropagation Wikipedia article?
@inamothosan 7 years ago
Learned something new today... thanks Siraj.
@siarez 7 years ago
This makes me wonder why you think HTMs are a joke. What's the reason for them not being worthy of further research?
@shafeeza136 7 years ago
Thank you for your videos. I am learning a lot from them :)
@JakeyCroft 6 years ago
Babies and children die without supervision. They need their parents to tell them how something works before they can actually learn to do a task. Even as children, they often need you to look at the pictures they drew and "rate" them. They need feedback all the time so they can adjust. Same thing with talking: they say something, you correct them, they say it again, and so on. So in my opinion, they are not capable of unsupervised learning without seeing how others do it or getting feedback.
@JakeyCroft 6 years ago
Also, a baby recognizes (like a dog, too) whether a voice is positive or negative. So they'll get feedback no matter what exactly their parents are saying.
@asleepius 7 years ago
Much love Siraj, you put a lot of work into everything you do. If I were your parent and you came to visit, I would slightly lower my newspaper, pull my reading goggles to the tip of my nose, and give you a nod of unapologetic validation.
@SirajRaval 7 years ago
hah thanks Jordan!
@harshtiku3240 7 years ago
16:25 Whenever I see a Siraj video!
@SirajRaval 7 years ago
lol
@harishshankam6268 6 years ago
No words for your video. Excellent.
@bosepukur 7 years ago
Hope you get a million subscribers :)
@justinfrancis5834 7 years ago
1,000,000,000,000,000,000... nine zeros in a billion, right?
@MLDawn 3 years ago
You know that he is not the only inventor of backprop, right?
@fabfan12 7 years ago
Awesome video. I love the enthusiasm!
@solid8403 7 years ago
Love is a function. Awesome stuff.
@bernardofn 7 years ago
Thanks Siraj. Very insightful video. I watched Geoffrey Hinton's interview with Andrew Ng two weeks ago, and that kept me wondering about the new directions! :-)
@SirajRaval 7 years ago
thanks Bernardo!
@icyrich4456 7 years ago
Thanks for another portion of knowledge.
@sgaseretto 7 years ago
I'd really love to see videos of you talking more about this, like the other, more experimental learning algorithms out there. For example, more about: synthetic gradients (which you already mentioned in this video), feedback alignment, target propagation, equilibrium propagation, and others.
@Erilyth 7 years ago
Great video Siraj! One small question though: in self-organizing maps, could we just extend the map to higher dimensions instead of just a 2D grid? Following the algorithm, I don't see any issues extending it to higher dimensions.
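For what it's worth, nothing in the SOM update depends on the grid being 2-D: the neighborhood function only needs a distance between unit coordinates, so an N-D lattice drops straight into the same algorithm. A sketch with an invented Gaussian neighborhood and a 2×2×2 lattice:

```python
import numpy as np

# Gaussian neighborhood on the grid: only the distance between unit
# coordinates matters, so coordinates can have any number of dimensions.
def neighborhood(coord_a, coord_b, radius=1.0):
    d2 = np.sum((np.array(coord_a) - np.array(coord_b)) ** 2)
    return np.exp(-d2 / (2 * radius ** 2))

# Unit coordinates on a 3-D 2x2x2 lattice instead of the usual 2-D sheet:
grid = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
w = neighborhood(grid[0], grid[-1])  # opposite corners (0,0,0) and (1,1,1)
```

The practical catch is not the algorithm but visualization (2-D sheets are easy to look at) and the number of units, which grows exponentially with grid dimension.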
@hanyuliangchina 7 years ago
Does anyone know how to build an OpenAI Gym training environment in the cloud? For example, training Mario, then downloading the agent and running it on my own computer?