The Future of Deep Learning Research

63,901 views

Siraj Raval

A day ago

Comments: 230
@tunestar 7 years ago
Please start a really in-depth series on Reinforcement Learning. Nobody has covered it in a way that's easy to understand, and I think it's an area where much remains to be done. It's also the closest thing we have to how we humans really learn.
@SirajRaval 7 years ago
seriously considering this.
@bjornsundin5820 7 years ago
Alejandro Rodriguez: his deep Q-learning video is about a reinforcement learning algorithm. I had a hard time learning from a video going that fast, though; I learned it from a bunch of different sites instead. (I agree.)
@SirajRaval 7 years ago
great feedback, will go slower
@normanheckscher 7 years ago
Björn Sundin: YouTube has a speed function and a pause button.
@bjornsundin5820 7 years ago
Norman Heckscher: yeah, but it's more about how much he explains different things. Sometimes he says just a few words about something important that I'd need explained a bit more clearly. Of course, it's different for different people, and I'm not telling him to change his teaching style.
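For readers following this reinforcement learning thread: below is a minimal sketch of tabular Q-learning, the family of algorithms behind the deep Q-learning video mentioned above. The 5-state chain environment and all hyperparameters are made-up toy choices, not anything from the video.

```python
# A minimal sketch of tabular Q-learning on a toy 5-state chain.
import random

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Toy dynamics: reaching the last state pays a reward of 1."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(300):
    state = 0
    for _ in range(100):              # cap episode length
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
        if state == n_states - 1:
            break

print(Q)  # right-moving actions should end up with the higher values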
@ortinsuez2052 7 years ago
Keep up the good work Siraj.
@SirajRaval 7 years ago
thanks Ntinda!
@MarkJay 6 years ago
Great talk, Siraj. I always get inspired after watching your videos. Keep it up!
@grainfrizz 7 years ago
16:22 get high on math, not on drugs
@CheapBurger 7 years ago
hahahahhahahha
@GuillaumeVerdonA 7 years ago
math is a helluva drug
@SirajRaval 7 years ago
hahah always
@ortinsuez2052 7 years ago
lol. I agree.
@zinqtable1092 7 years ago
go even higher with drugs
@dishonmwabashfano3627 2 years ago
Everything is a function! Math is everywhere, math is all around us. Math is beautiful. Siraj, you are a genius bro.
@justsomerandomguy933 7 years ago
Geoffrey Hinton demonstrated the use of the generalized backpropagation algorithm for training multi-layer neural nets, but he did not invent it. The approach was developed by Henry J. Kelley and Arthur E. Bryson.
@SirajRaval 7 years ago
yes he just popularized it
@TheAizaz420 7 years ago
This is one of the best videos on the research side of Deep Learning and AI. Siraj, you are awesome.
@Majorityy 7 years ago
Siraj, it's a pleasure listening to you. I like your energy, what you're saying is very clear, and it's very motivating to sense such interest and passion for what you're explaining in the video. Please keep the good vibes going, awesome work. Milan
@tonycatman 7 years ago
Excellent video. I've been thinking about Hinton's comments over the last week too, and also in the context of learning from tiny data sets. I'm thinking that humans are actually rubbish at classification from small data sets, but we kid ourselves that we are good at it.
@alienkishorekumar 7 years ago
I almost thought Geoffrey Hinton was in this video.
@SirajRaval 7 years ago
he is in spirit
@y0d4 7 years ago
I liked your video because of the passion you show at ~16 min :)
@mihaitensor 7 years ago
The backpropagation algorithm was invented in the 1960s. Hinton showed in 1986 that backpropagation can generate useful internal representations of incoming data in the hidden layers of neural networks. He didn't invent backpropagation.
@apachaves 7 years ago
Excellent video again, thank you Siraj. I definitely agree we should do more exploration, and I hope one day to contribute in that sense.
@hypersonicmonkeybrains3418 7 years ago
Also you have to consider Morphic Resonance. Morphic resonance, Rupert Sheldrake says, is "the idea of mysterious telepathy-type interconnections between organisms and of collective memories within species."
@douglasoak7964 7 years ago
It's not sparse, it's focused. Babies start with a pre-programmed classifier: faces. Positive/negative. That's why they are so focused on faces. From this simple classifier, the baby moves on to build its own classifiers. Essentially, a general AI will be a classifier fed by another simple classifier, with the ability to build off the initial classifier to create new classifiers.
@sortof3337 6 years ago
What about babies who are born blind? How do they function and become conscious?
@zacharykeener1990 6 years ago
I believe this example can be expanded to a number of "pre-programmed" classifiers, e.g. the things the senses respond to: sight/hearing/touch as the senses, and the physical world as the pre-programmed classifiers.
@victorocampo5263 7 years ago
Notice me senpai!!!!
@SirajRaval 7 years ago
hi Victor!!
@whiteF0x9091 7 years ago
Great presentation! Thanks
@guitarheroprince123 7 years ago
C'mon Siraj, tomorrow is my computer architecture test and I have to study, but you dropped an awesome video.
@guitarheroprince123 7 years ago
well fkit cause internet > college.
@KatySei 7 years ago
Amazing video, Siraj.
@RAJATTHEPAGAL 6 years ago
Thanks, that's all the inspiration I needed... And as for backprop, just like you I too started to think maybe we're overusing it, with more and more models arcing towards it. O.o And I'm just learning now.
@y__h 7 years ago
Man, you should totally cover the impact of Google's TPU on Machine Learning-specific hardware. Heck, Nvidia recently started a TPU-like open-source ML accelerator called NVDLA.
@SirajRaval 7 years ago
will consider
@RoulDukeGonzo 7 years ago
Have you seen the early stuff from Hofstadter? The concept network and workspace ideas are really cool.
@larryteslaspacexboringlawr739 7 years ago
Thank you for the deep learning research video.
@shirshanyaroy287 7 years ago
I watched this video on 2x speed just like you suggested. I feel like a god.
@451shail 7 years ago
Such an interesting video!
@squirrel2770 7 years ago
Awesome, appreciate your work Siraj, inspiring! I need to get friendlier with Khan Academy and get back into math... *shudders*. Would you be able to recommend things to cherry-pick and learn for these purposes, or any order of things perhaps? Or would you consider most concepts to have dependencies that would make cherry-picking questionable? For example, would you jump straight to trying to figure out derivatives and partial derivatives?
@Leon-pn6rb 7 years ago
Yo, I legit thought at the beginning that the bald guy behind him was actually a co-host, standing behind him, waiting to speak next. I looked at him for far too long before realizing he was just a photo in the article.
@giraudl 7 years ago
Really love how you explain the Chain Rule. 10^∞ kudos!
@SirajRaval 7 years ago
thanks Luc!
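Since the chain rule comes up here: a small sketch of how backprop's chain rule can be sanity-checked numerically. The functions f and g below are arbitrary examples, not the ones used in the video.

```python
# Chain rule as used in backprop: for f(g(x)), df/dx = f'(g(x)) * g'(x).
# We verify the analytic gradient against a finite-difference estimate.
import math

def g(x): return x * x          # inner function, g'(x) = 2x
def f(u): return math.sin(u)    # outer function, f'(u) = cos(u)

def chain_grad(x):
    # analytic derivative via the chain rule
    return math.cos(g(x)) * 2 * x

def numeric_grad(x, h=1e-6):
    # central finite difference of the composite function
    return (f(g(x + h)) - f(g(x - h))) / (2 * h)

x = 1.3
print(chain_grad(x), numeric_grad(x))  # the two values should agree closely
```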
@cupajoesir 7 years ago
But is our acceleration in learning being outstripped by the acceleration of the accumulation of information, thus making it impossible to ever grasp full understanding?
@cdrwolfe 7 years ago
Great vid and interesting discussion points. For those interested in evolution and its application in the brain, I always recommend Chrisantha Fernando (now of Google) and his past work on "Darwinian Neurodynamics".
@Magenta1593 7 years ago
Really good video!
@ladjiab 7 years ago
Hi Siraj, I have a question: do you think it's safe to train our models on Google's Machine Learning Engine? Wouldn't that allow Google's AI to know everything about our products, since it's kind of learning from our models?
@Fuckutube547465 7 years ago
I had a feeling you would have this view on Hinton's comments. While back-propagation is extremely useful with today's processing capabilities, its real world applications weren't too great in '86. I hope that the success of the 'next backprop' won't be dependent on brute force capabilities, 30 years later. Thanks for bringing attention to this topic, looking forward to the next one!
@Visaals3 7 years ago
Yo, the line in the derivative graph wasn't tangent to the original function. I doubt anyone here doesn't know what derivatives are, but just in case, that would probably be confusing.
@leajiaure 6 years ago
Maybe in the short term we will use computers to augment our abilities (as we have always done with technology), but machines absolutely can and will be capable of creativity that far exceeds ours. There is no task that cannot be done better by an AI.
@Neptutron 6 years ago
What about FPGAs? Why aren't people making a big deal out of them for deep learning hardware? They'd be a perfect fit; you could just update them via software! Imagine entire batches of neural networks in a single clock cycle!
@whickked 7 years ago
Really appreciate you breaking down Deep Learning and AI concepts as well as recommending blogs, books, and articles to check out. You're the man!
@aamir122a 7 years ago
You are asking the right question: brains do not backpropagate, and they do not require millions of examples to run. You should really look at what Numenta and Cortical.io are doing. Setting everything else aside, just the way they encode data is what you referred to as sparse input. To get you started, here is a link to one of the papers from Cortical.io on how natural language is encoded into a sparse representation: www.dropbox.com/s/ikbe1ney8u5d35h/semantic-folding-theory-white-paper.pdf?dl=0
@alienkishorekumar 7 years ago
Learning deep learning from deeplearning.ai, and soon I'll do the fast.ai course too.
@tunestar 7 years ago
Cool, but none of them will teach you RL, which is what you should be doing according to this video.
@alienkishorekumar 7 years ago
Alejandro Rodriguez: I'm planning to take it step by step, going from first principles.
@xiobus 7 years ago
They're all good tools to learn... learn it all.
@SirajRaval 7 years ago
keep it up
@Vadatajs666 6 years ago
This is the strategy I used for my first BUGS project... fun that it's still a great idea today :D Just spawn a ton of mutants until the CPU dies and choose the best. Interesting ideas emerge there about how to speed up evolution.
@ajaymishra7212 7 years ago
For more on Differentiable Neural Computers: deepmind.com/blog/differentiable-neural-computers/ and www.nature.com/nature/journal/v538/n7626/full/nature20101.html
@skipmonday6467 7 years ago
Please, can you help me with the algorithms search engines use to classify complex user queries? Thanks in advance.
@KellenChase 7 years ago
Every single time I think to myself "Why isn't anyone talking about X or Y" in ML/AI/DL research, you come out with a well-done, entertaining, easily explained video summarizing and waxing poetic on the subject, giving me ever more rabbit holes to go down. You are awesome. Thank you for doing what you do. I started a meetup in my city to discuss AI, and many of your videos will be shared. Thank you for the book recommendation as well; I will be listening to it on Audible at 3x. By the way, I just finished Max Tegmark's Life 3.0 and would highly recommend it if you haven't yet read it.
@jordanparker211 7 years ago
Could back-propagation be viewed as a CompSci implementation of an adaptation of linear regression with a transformation applied? (1) The gradient descent optimization of the loss function is analogous to a least squares estimator; (2) the weights equal matrix B and the bias equals matrix A in the equation Y = A + BX + E, where E is a matrix of error terms; (3) the applied non-linearity is the transformation function in Z = g(Y). It seems like there is similar thinking in linear regression and auto-regressive time-series modeling. Please tell me if I'm wrong! P.S. love the vidz brah
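The analogy above holds exactly in the linear case. A brief sketch in the commenter's notation; the squared-error loss below is an assumption the comment doesn't specify:

```latex
% A one-layer network computes Z = g(Y) with Y = A + BX + E, i.e.
% \hat{y} = g(Wx + b) once A, B are renamed to bias and weights.
% With g the identity and a squared-error loss, this is exactly
% least-squares linear regression:
\[
L(W, b) = \tfrac{1}{2}\,\lVert y - (Wx + b)\rVert^{2},
\qquad
\frac{\partial L}{\partial W} = -\,(y - \hat{y})\,x^{\top},
\qquad
\frac{\partial L}{\partial b} = -\,(y - \hat{y}).
\]
% Gradient descent on L converges to the least-squares estimator, so the
% regression analogy is sound; backprop generalizes this gradient
% computation through stacks of nonlinear layers via the chain rule.
```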
@Anonymous-lw1zy 7 years ago
Hinton contributed a massive amount to NNs, but not backprop. During the 1980s upswing in NNs, there was a lot of discussion (and conflicting claims) about who first used backprop for NNs. The consensus finally pretty much concluded it was Werbos. See this for a good historical summary: people.idsia.ch/~juergen/who-invented-backpropagation.html
@realcygnus 7 years ago
Great channel! Raj, just out of pure curiosity, do you have a computer science degree? Or are you still a student? Or are you just a natural self-taught/DIY type?
@gangadharasai9372 7 years ago
Siraj, you are awesome!!!! And one question: is backpropagation inspired by the neural activity of the brain? Does backpropagation happen in our brains? I am not able to find an answer.
@natesh31588 6 years ago
Allow me to add one more option, from a hardware perspective. Building on that requires clearing up some fundamental issues. You say everything is a function. The more accurate statement is that everything can be described as a function. The key word being description. Neural network algorithms are computational descriptions of how learning can be achieved to satisfy an input-output mapping. The option I propose is trying to understand the underlying physical (thermodynamic) process that we end up describing as learning. For example, a refrigerator taking in electrical energy to cool things down can be described computationally using an input-output function and implemented using a transistor circuit. I can also always build a refrigerator to take in power and cool things down. Both my circuit and the refrigerator are now doing the same thing computationally, but only one of them will actually cool things down. So why not attack the question of general intelligence the same way? Is it possible to build a hardware system that satisfies specific thermodynamic (energy/power/work/heat) conditions so that its dynamics can be described as learning? For fun, let's call this system a thermodynamic computer.
@santicomp 7 years ago
Hi Siraj, I love your videos, they are very inspiring. I hope I can finish software engineering and continue with AI applications. I have a question: what do you think about a general AI that could think like a human and generate programs automatically? Do you think a human (programmer) would be somehow excluded completely and lose their creative value? I understand automating things that are monotonous or that a machine could do better, but if we get replaced by having a machine do everything, where would we fit? This question came from a debate with a probability and statistics teacher who asked what the future of AI was. I responded by talking about present-day AI, and his response was grim, as if there were no hope for the future and we were contributing to the despair of humanity. He also said few people would be in control of AI, as in the feudal ages, and we would all be like slaves. Of course, I think it's a bright future; I tried to respond positively, but he was hard to convince. I'm really intrigued by your opinion; maybe I can change his belief about the grim future he thinks is coming. Keep it up! Cheers from Uruguay, South America.
@randcontrols 7 years ago
Awesome video Siraj. What I would like to see is more collaboration between the deep learning and complex adaptive systems worlds. This video focuses on the deep learning world, so let me give a one-sentence intro to complex adaptive systems. In a complex adaptive system, complexity arises, almost from nowhere, from simple interactions between "agents". NetLogo is a very simple modeling environment with simple examples demonstrating the concept, for example how the complex flocking behavior of birds can be simulated using very simple rules. On the other end of the scale is the very ambitious www.futurict2.eu project, which aims to simulate the world to solve global socio-economic problems. Deep learning is really very exciting and I am truly amazed by its achievements. My 2 cents on the subject: 20 percent of the 20 percent that you want to move from exploitation to exploration should explore combining deep learning with complexity science.
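To make the "complexity from simple rules" point concrete, here is a minimal sketch in the spirit of NetLogo's flocking model: each agent only aligns with nearby neighbors, yet global order emerges. All parameters are illustrative, not taken from NetLogo.

```python
# Emergent flocking from one local rule: align with your neighbors.
import math, random

N, radius, blend, speed = 30, 0.2, 0.1, 0.01
agents = [{"x": random.random(), "y": random.random(),
           "h": random.uniform(0, 2 * math.pi)} for _ in range(N)]

def dist(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

for _ in range(200):
    for a in agents:
        nbrs = [b for b in agents if b is not a and dist(a, b) < radius]
        if nbrs:
            # the single local rule: steer toward neighbors' mean heading
            target = math.atan2(sum(math.sin(b["h"]) for b in nbrs),
                                sum(math.cos(b["h"]) for b in nbrs))
            # shortest signed angular difference, then a small nudge
            a["h"] += blend * math.atan2(math.sin(target - a["h"]),
                                         math.cos(target - a["h"]))
        a["x"] = (a["x"] + speed * math.cos(a["h"])) % 1.0  # wrap around edges
        a["y"] = (a["y"] + speed * math.sin(a["h"])) % 1.0

# order parameter: ~0 for random headings, ~1 when the flock is aligned
order = math.hypot(sum(math.cos(a["h"]) for a in agents) / N,
                   sum(math.sin(a["h"]) for a in agents) / N)
print(f"alignment order parameter: {order:.2f}")
```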
@arzoo_singh 7 years ago
Siraj, great work! Just a small piece of feedback: speak slowly and give it some time. Let's say you're explaining regression; speak slowly and pause in between.
@chrismiles3838 5 years ago
Artificial Life is the field closest to the topic you concluded is the most promising direction. It's an exciting field! I would love to see more interaction between AI and AL. Siraj, this could possibly be an interesting topic for a video?
@milanpospisil8024 6 years ago
But unsupervised and supervised learning are sometimes tied together. For example, prediction of the future is classification on unlabeled data (you just want to predict the next state of a system using an unlabeled sequence of states in time). And I think that's what the brain does.
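The point above is the core of what is now called self-supervised learning: the labels come for free from the data itself. A tiny sketch with a made-up sequence:

```python
# An unlabeled sequence becomes a supervised dataset by treating the next
# state as the target. The time series here is a made-up toy example.
states = [0.0, 0.1, 0.3, 0.2, 0.5, 0.4, 0.6]

# (input, target) pairs for "predict the next state": no human labels needed
pairs = [(states[t], states[t + 1]) for t in range(len(states) - 1)]

# or with a context window of the last 3 states as input
window = 3
windowed = [(states[t:t + window], states[t + window])
            for t in range(len(states) - window)]
print(pairs)
print(windowed)
```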
@salmanrahim3540 7 years ago
Cerebrum is a decentralized platform for crowdsourced machine learning that is implementing multi-agent systems (based on principles of artificial life). Check it out at cerebrum.world
@rahul.chandak 7 years ago
Nice video. I just started learning ML. Can you make a video explaining how to decide on the hidden layers, when to assign a base value and how to decide it, and also how to assign initial weights, with some practical examples? If any link is available already, that will help too :)
@Barnardrab 7 years ago
I didn't know Steven Pinker wrote books. I think I remember seeing him in a few Big Think videos. I may be picking up How the Mind Works.
@unclemax8797 6 years ago
It was mathematically proven that Kohonen maps and k-means are the same... hmm, AI reinvented the wheel! And it's the case with lots of AI tools. The problem is, if you say "I used AI and Kohonen" you are supposed to be a god; if you say "I used k-means" you are just told you are an old-fashioned moron. Times are tough, huh?
@ivy3420 7 years ago
Yeah, couldn't agree more. Math is freaking beautiful. So is physics, and biology, and chemistry, and computer science, and engineering, and artificial intelligence, and deep learning.
@mashmoompathan2052 7 years ago
Hey Siraj, I need to select a deep learning algorithm for images (the image is basically of a computer screen, which has characters, objects of different shapes, etc.). Which algorithm do you think will be most suitable in my case? Please reply!
@ajaymishra7212 7 years ago
It seems that functionalism is a form of behaviorism. Get out from under the blanket of behaviorism; then one can do some cool stuff instead of creating metaphors that fool one into thinking it is like real intelligence [metaphors: learning, rewards, reinforcement]. Computers only do what they are told to; this quite simple principle doesn't let you believe the metaphors are truth.
@hypersonicmonkeybrains3418 7 years ago
You say it's amazing how babies can learn from sparse, unlabeled data. Isn't it possible that vast amounts of data are somehow transferred from thousands of generations of ancestors, genetically, at the DNA level? Maybe it's not unlabeled data; maybe the baby's brain has a big head start.
@moejobe 7 years ago
This might be your best video in terms of depth.
@sparax7870 7 years ago
Hey Siraj, loved your video, amazing, it's so fun. Can you please do a video on affective computing? I think if you want computers to mimic humans in terms of intelligence, then affective computing is the way. This is a nice publication from MIT; you might like it: affect.media.mit.edu/pdfs/95.picard.pdf
@javzav2026 7 years ago
What if we use that multidimensional algorithm to group together neural net connections (layers) and have them compute normally, except have the grouping be per task for a particular weight model?
@javzav2026 7 years ago
next level ai?
@tthtlc 6 years ago
You make me go crazy again... emotionally motivating. Good for my neural network.
@getrasa1 7 years ago
Siraj, you were talking a lot about those functions and how they are everything in life. Where can I find more about them and how they relate to neural networks?
@debarokz 7 years ago
wow.... great talk !!! I wish I had enough money to become a patron.. makes me feel bad
@rgrimoldi 7 years ago
are different "weights" and "layers" the same thing?
@rgrimoldi 7 years ago
Or is it that each layer is a function that contains its own weights?
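To answer the question above in code: a layer is a function, and the weights are the parameters stored inside that function. The layer sizes and tanh activation below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

class Dense:
    """One layer: a function computing tanh(W @ x + b); W and b are its weights."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))  # this layer's weight matrix
        self.b = np.zeros(n_out)                      # this layer's bias vector
    def __call__(self, x):
        return np.tanh(self.W @ x + self.b)

# a network is just layers (functions) composed; each layer owns its weights
layer1, layer2 = Dense(4, 8), Dense(8, 2)
x = rng.normal(size=4)
print(layer2(layer1(x)))
```

So weights and layers are not the same thing: each layer is one function, and its weights are the numbers inside it that training adjusts.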
@shreyashervatte5495 7 years ago
Hey Siraj! Thank you so much for making videos like this. Your videos really inspire me to do more with my life... The amount of information that's out there... so much to learn... You're influencing lives out here. Keep up the good work!!
@SirajRaval 7 years ago
awesome thanks!
@Stan_144 3 years ago
Jeff Hawkins has a solution. His new book came out just yesterday...
@ayushman_sr 6 years ago
But why the partial derivative with respect to the weights? Can I understand it with simple logic, please?
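A short sketch of the intuition, for anyone with the same question: the weights are the only knobs we can turn during training, so we need the loss's sensitivity to each weight while holding the others fixed; that sensitivity is exactly the partial derivative.

```latex
% How much does the loss L change if we wiggle only weight w_i?
\[
\frac{\partial L}{\partial w_i} \;\approx\;
\frac{L(w_1, \dots, w_i + \epsilon, \dots, w_n)
      - L(w_1, \dots, w_i, \dots, w_n)}{\epsilon}
\]
% Gradient descent then nudges every weight downhill at once:
\[
w_i \;\leftarrow\; w_i - \eta \, \frac{\partial L}{\partial w_i}
\]
```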
@AinunNajib-ec2ainun 7 years ago
WOW, just wow. I was losing faith in deep learning just before watching this video. Actually, there are many more things to optimize! Thanks for making this video.
@SirajRaval 7 years ago
so awesome!
@BFBCIE2 7 years ago
Where can I find the document he is using in the video? The info would be super helpful!
@danishahmed5068 6 years ago
The name of the video is a bit misleading. I realized it only after I was 20 minutes through, meaning I lost that time :(
@I77AGIC 7 years ago
I really enjoy videos like this. You can't really find these kinds of discussions often.
@eduardmart1237 7 years ago
Please make a video about text (document) classification!
@alzheimancer 7 years ago
The best explanation of back-propagation I've ever seen.
@voidnoire7256 7 years ago
The game shown at 40:25 reminds me of kzbin.info/www/bejne/a17LZpmsrZ6WeNU
@BocaoLegal 7 years ago
Love you Siraj, you are doing a job that no one else does.
@nikhilsoni7037 7 years ago
Amazing videos. You are an inspiration. Are you coming to India soon?
@elektronik2000 7 years ago
Siraj, you've become much better at explaining!!
@tholienguitar 7 years ago
Could a video about Hinton's capsules be coming?
@spetz911 7 years ago
How is backpropagation Hinton's invention if he isn't mentioned in the backpropagation Wikipedia article?
@inamothosan 7 years ago
Learned something new today... thanks Siraj.
@siarez 7 years ago
This makes me wonder why you think HTMs are a joke. What's the reason they're not worthy of further research?
@shafeeza136 7 years ago
Thank you for your videos. I am learning a lot from them :)
@JakeyCroft 6 years ago
Babies and children die without supervision. They need their parents to show them how something works before they can actually learn to do a task. Even as children, they often need you to look at the pictures they drew and "rate" them. They need feedback all the time so they can adjust. Same thing with talking: they say something, you correct them, they say it again, and so on. So in my opinion, they are not capable of unsupervised learning without seeing how others do it or getting feedback.
@JakeyCroft 6 years ago
Also, a baby recognizes (like a dog, too) whether a voice is positive or negative. So they'll get feedback no matter what exactly their parents are saying.
@asleepius 7 years ago
Much love Siraj, you put a lot of work into everything you do. If I were your parent and you came to visit, I would slightly lower my newspaper, pull my reading glasses to the tip of my nose, and give you a nod of unapologetic validation.
@SirajRaval 7 years ago
hah thanks Jordan!
@harshtiku3240 7 years ago
16:25 Whenever I see a Siraj video!
@SirajRaval 7 years ago
lol
@harishshankam6268 6 years ago
No words for your video. Excellent!
@bosepukur 7 years ago
hope you have a million subscribers :)
@justinfrancis5834 7 years ago
1,000,000,000: nine zeros in a billion, right?
@MLDawn 3 years ago
You know that he is not the only inventor of backprop, right?
@fabfan12 7 years ago
Awesome video. I love the enthusiasm!
@solid8403 7 years ago
love is a function. Awesome stuff.
@bernardofn 7 years ago
Thanks Siraj. Very insightful video. I watched GH's interview with Andrew Ng two weeks ago, and it kept me wondering about the new directions! :-)
@SirajRaval 7 years ago
thanks Bernardo!
@icyrich4456 7 years ago
thx for another portion of knowledge
@sgaseretto 7 years ago
I'd really love to see videos of you talking more about this, like the other, more experimental learning algorithms out there. For example, more about:
- Synthetic Gradients (which you already mentioned in this video)
- Feedback Alignment
- Target Propagation
- Equilibrium Propagation
- and others
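For a taste of one item on this list: a hedged numpy sketch of Feedback Alignment, where the backward pass sends the error through a fixed random matrix B instead of the transpose of the forward weights. The toy data, layer sizes, and learning rate are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 3, 16, 1, 0.05

X = rng.normal(size=(200, n_in))
y = np.sin(X.sum(axis=1, keepdims=True))      # toy regression target

W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
B  = rng.normal(0, 0.5, (n_out, n_hid))       # fixed random feedback matrix

for epoch in range(200):
    h = np.tanh(X @ W1)                       # forward pass, hidden layer
    y_hat = h @ W2                            # forward pass, output layer
    e = y_hat - y                             # output error
    # backprop would send the error through W2.T; FA uses the fixed B instead
    delta_h = (e @ B) * (1 - h ** 2)          # tanh derivative
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("MSE on last forward pass:", float((e ** 2).mean()))  # should decrease
```

The surprising result reported for this method is that the forward weights learn to align with the random feedback, so useful gradients emerge even without weight symmetry.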
@Erilyth 7 years ago
Great video Siraj! One small question though: in self-organizing maps, could we just extend the map to higher dimensions instead of just 2D? Following the algorithm, I don't see any issues with extending it to higher dimensions.
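The commenter is right that nothing in the SOM algorithm is specific to 2D: only the grid coordinates change. A minimal sketch with an n-dimensional unit grid; the 4x4x4 grid, schedules, and random data are illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid_shape=(4, 4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    # grid coordinates in any number of dimensions (here 3D)
    coords = np.array(list(itertools.product(*map(range, grid_shape))), float)
    weights = rng.normal(size=(len(coords), data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3 # decaying neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
            # neighborhood is a Gaussian over *grid* distance, in any dimension
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbhd = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * nbhd[:, None] * (x - weights)
    return coords, weights

data = rng.normal(size=(100, 5))      # 5-dimensional inputs
coords, weights = train_som(data)     # a 3D map: nothing above is 2D-specific
```

The neighborhood function only ever uses distances between grid coordinates, so a 3D or higher-dimensional map drops straight in.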
@hanyuliangchina 7 years ago
Does anyone know how to build an OpenAI Gym training environment in the cloud, such as training Mario, and then download the agent and run it on my own computer?