MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum)

203,483 views

Lex Fridman

6 years ago

This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.
INFO:
Course website: agi.mit.edu
Contact: agi@mit.edu
Playlist: goo.gl/tC9bHs
CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- AI Podcast: lexfridman.com/ai/
- Show your support: / lexfridman
- LinkedIn: / lexfridman
- Twitter: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Slack: deep-mit-slack.herokuapp.com

Comments: 214
@NomenNescio99 5 years ago
A huge thank you to Lex Fridman for publishing this and all the other lectures on your channel. There's such a big difference between what I can learn from watching a couple of these lectures and a standard 10-minute YouTube video, regardless of any pretty graphics used in the latter. Although unrelated to my current job and only partly relevant to my education, I find the topic very interesting.
@Calbefraques 6 years ago
Thank you very much for posting this lecture series. I'm encouraged by the foundations that are being formed by these fantastic professors.
@metafuel 5 years ago
Fantastic talk. Thanks for making all this great work freely available.
@EmadGohari 6 years ago
That was a great lecture. Thanks for uploading this material. Looking forward to more similar stuff.
@kalemene8901 6 years ago
Thank you so much for uploading this video. This was one of the best lectures on AI.
@danielmagner7932 6 years ago
Thank you so much for sharing this!
@peter_castle 4 years ago
Thank you very much, Lex. The work you put into maintaining your channel means a lot, and it improves the world!
@truthcrackers 6 years ago
Fascinating. I'll have to watch it a few times to get more out of it. Great job.
@aqynbc 5 years ago
Very interesting to hear how much work is still needed to get to the Singularity. Thank you to Lex for uploading and to Josh Tenenbaum for a great presentation.
@christopherwolff8443 6 years ago
This is fascinating. Thanks for uploading, Lex! Looking forward to Andrej's talk.
@cubefoo9055 6 years ago
Unfortunately you won't see that talk. It seems Tesla didn't want it to be recorded and made available to the public.
@ffffffffffy 6 years ago
:'( I hope Stephen Wolfram's talk is uploaded
@Rivali0us 5 years ago
Oh man, come on. I was wondering why I couldn't find a talk by Andrej in this course. Shame on Tesla. I understand they are running a business, but surely progress is the better goal as a whole.
@jekonimus 6 years ago
Love this :-) Thank you for uploading.
@FengXingFengXing 5 years ago
Many animals can learn, recognize patterns, and share information; more complex animals learn language and teach, too. All animals are born with some instinct and capability. Less complex animals are born with more of their programming ready for survival.
@qeithwreid7745 4 years ago
Thanks for all the primary citations
@mattgraves3709 1 year ago
Excellent talk, be sure to watch the whole thing
@josephfatoye6293 2 months ago
This is priceless! Thank you
@RobertBryk 6 years ago
this is truly incredible!
@Mike216ist 4 years ago
This talk has made me excited about the future.
@annesequeira5130 3 years ago
Such an excellent presentation! Very clear even for someone with just a basic understanding of machine learning.
@francescos7361 2 years ago
Incredible man
@douglasholman6300 5 years ago
Wow, Josh Tenenbaum is a phenomenal lecturer and really seems to get the big picture of computational neuroscience and AI! I would love to do research at the Center for Brains, Minds and Machines.
@spicy2112 5 years ago
Amazing lecture. I really wish I could get to see all lectures of DRL
@rajshekharmukherjee 5 years ago
Wow. Nicely explained goals of the research: 1. the evolution and making of intelligence, and 2. the engineering enterprise of developing a humanly intelligent machine. And both are connected, hence best pursued jointly!
@HoriaCristescu 6 years ago
TL;DW - The path towards real understanding in AI is modelling the world and other agents (mental simulation), as opposed to simple pattern recognition.
@jfs3234 4 years ago
Why would we need this at all? I mean what sense does it make to replicate the world and our intelligence?
@abyteuser6297 4 years ago
@@jfs3234 that's exactly what somebody living in the Simulation would say
@jfs3234 4 years ago
@@abyteuser6297 Still, the question remains the same. Why care about any sort of simulation at all? Say somebody has a car - why would they want to create a simulation of their car? I cannot see any sense in creating any kind of copy of our own intelligence. I believe we need more tools to make our lives better. Do these tools need to be intelligent? Maybe I'm missing something here?
@abyteuser6297 4 years ago
@@jfs3234 You got it backwards... you don't create a simulation of a car... the car is the one that creates a simulation of you... so it can learn how to serve its passengers better... just the logical next step in the optimization problem... sounds far-fetched? Possibly... but YouTube's algorithms learned on their own that the easiest way to predict user behavior was by shaping it, thus making you more predictable.
@jfs3234 4 years ago
@@abyteuser6297 The problem is that those "predicting" algorithms are somehow called intelligent (do you know why?). AI computer scientists strongly believe that the past predicts the future. Why are they so obsessed with this wrong idea? I don't even know myself - how could an algorithm? For the past 10 days I've been eating only sandwiches for breakfast; tomorrow, all of a sudden, I want an apple. Q: what algorithm could predict that? A: none. Nobody knows what's going to happen the next second. Why are algorithms called AI? Linear regression was an algorithm worked out by Gauss in the early 19th century; today they call it AI/ML. If Gauss had been told that his formula was a sort of intelligence, he would laugh, I bet. Please tell me who started this crap of calling formulas, algorithms, and math methods AI? What's going on with those people? Somebody cure them and tell them that a formula is still just a formula. Intelligence is still not understood. Isn't it way too early to call even a complex and sophisticated algorithm artificial intelligence? This is ridiculous and irresponsible at the same time.
@iSarCasm865 6 years ago
Thank you very much
@rupamroy1984 5 years ago
A fantastic point put across about how AI-based computer vision algorithms are not able to do these cognitive tasks effectively. At inference time they are constantly looking for matches with the classes/characters they were mainly trained on in the training dataset.
@agiisahebbnnwithnoobjectiv228 3 years ago
The objective function of the animal brain, and hence AGI, is to maximize impact. You heard it first from me.
@kacemsys 6 years ago
Congratulations, you've earned a new subscriber!
@MarkPineLife 4 years ago
I'm ready to learn and be inspired.
@conorosirideain5512 5 years ago
That was a VERY good lecture
@JK-ky5of 4 years ago
powerful voice
@cubefoo9055 6 years ago
"Intelligence is about modeling the world not just pattern recognition" ,agreed and it's also worth keeping in mind that modeling the world is necessarily based on pattern recognition, at least in humans. Our neocortex acts as a modeling system by using sensory patterns as building blocks to create new models (assumingly). Therefore decent pattern recognition is vital for any system in it's task to model (it's version of) reality.
@danielshults5243 6 years ago
Good pattern recognition does seem essential - but it's a means, not an end in itself. Pattern recognition will give us reliable _inputs_ for a program that can begin to model the world and make sense of it. A pattern recognition framework running on top of a "program for writing programs" or "child hacker" program as he describes sounds like it would have a lot of potential.
@mookins45 6 years ago
In your last sentence it should be 'its', not 'it's'.
@jekonimus 6 years ago
journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0149885#sec007
@jekonimus 6 years ago
:-p
@GuillermoPussetto 6 years ago
Very interesting. A luxury. Thanks for making it public for all.
@zackandrew5066 4 years ago
Interesting ideas.
@cupajoesir 6 years ago
Love the cross-discipline approach. The world is not one-dimensional. Great talk.
@johnstifter 5 years ago
It is about identifying what isn't visually there and can be inferred in memory
@ramakrishnashastri1500 4 years ago
Super interesting
@mikeklesh5640 4 years ago
After listening to him talk for 5 minutes I realize I’ve barely climbed out of the cave... So many smart people out there!
@avimohan6594 4 years ago
Well, one of the first steps to wisdom is recognizing the limits of your own knowledge. In that respect, this channel has become an invaluable source of help.
@dewinmoonl 3 months ago
don't worry. as a student of Josh I'm still trying to climb out of the cave too. he's something else haha. but we'll get there.
@regisdixit 5 years ago
Kind of an updated Marvin Minsky. I was hoping for the next Hinton, though. The path forward is both, not one or the other.
@admercs 5 years ago
Absolutely spectacular talk!
@daskleinegluck4553 2 years ago
That was exactly what I was looking for 😊👍.
@beshertabbara3674 5 years ago
Intelligence is more than pattern recognition. It’s about building models of the world for explanation, imagination, planning, thinking and communicating. Much much more progress needs to be made in scene understanding and visual awareness at a glance... Great presentation on what can be learned from reverse-engineering human core common sense, and understanding the development of intuitive physics and intuitive psychology at a one-year-old level to capture invaluable insights.
@MrChaluliss 1 year ago
Really rich and well delivered material. Hard to believe that the best lectures I have ever listened to are free ones on the web that I can access anytime.
@NolanManteufel 2 years ago
Most of my thinking is along this exact research vector, shown at 10:30.
@emmanuelfleurine121 5 years ago
Very informative.
@RobertsMrtn 6 years ago
In order to be able to model the world, we need an evolving system where the fitness function is "How good are we at making high- and low-level predictions about the data?". We know about supervised and unsupervised learning. If we include this type of learning, which I would call 'predictive' learning, then I think we are on the way to creating AGI.
@tigeruby 6 years ago
The idea of learning from simulations (or game environments) and sampling from simulations (or decision trees, or however you represent an environment and your agency within it) has been around for sure - but I do like the point you mentioned of having the reward function be more compact and general, in that the agent is structured to evaluate how well its own internal model of the world, and its own awareness of the consequences of its actions, are represented. Cool stuff.
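A minimal sketch of the 'predictive learning' idea discussed in this thread: a learner whose only objective (its "fitness") is how well it predicts its next observation. The linear environment, the model, and all of the numbers below are invented purely for illustration and are not from the lecture.

```python
# Toy predictive learning: the learning signal is nothing but prediction error.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: the next state is an (unknown) linear function of
# the current state plus noise. The learner never sees A_true directly.
A_true = 0.3 * rng.normal(size=(4, 4))

def step(s):
    return A_true @ s + 0.3 * rng.normal(size=4)

A_hat = np.zeros((4, 4))   # the learner's world model
lr = 0.02                  # learning rate

s = rng.normal(size=4)
for _ in range(5000):
    s_next = step(s)
    error = A_hat @ s - s_next            # how wrong was the prediction?
    A_hat -= lr * np.outer(error, s)      # gradient step on the squared error
    s = s_next

# After training, the learned dynamics should sit close to the true dynamics.
print("max |A_hat - A_true|:", round(float(np.max(np.abs(A_hat - A_true))), 3))
```

The same prediction-error signal works whether the model is a linear map, as here, or a deep network; only the update rule gets more elaborate.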
@elifece7847 6 years ago
Brilliant lecture, especially the highlights on learning, the child as a coder, and the Turing test for program learning. I think babies are able to capture more in-depth cognitive data, especially visual input and rhythmic sound patterns, and this helps them develop different pathways in the brain. It may function like data extraction, and perhaps this is why babies can't focus: these cognitive abilities are actually still under construction. It's highly possible that moving images exhaust them and use up more cognitive energy than in a grown person. Well, there are so many things to open for discussion on this issue. Great questions on learning!
@tigeruby 6 years ago
I think it will be promising to have deep function approximators/neural networks and/or various partition functions/statistical lattice methods "approximate" or encode these various generative routines of subprograms (i.e. the programming to program, or self-programming bit). And, of course, to have said large statistical vector spaces (deep neural nets, Ising models, Boltzmann lattices) also encode dynamically changing reward functions, simulate the world, and sample from that simulation (basically to support unstructured and unsupervised learning/signal processing). Someone mentioned a really nice point in the comments about having the reward function be "how good is my own simulation?" - this is pretty good and simple, and probably isn't the only reward function we want. Perhaps the system will be able to add new branches and contingencies to this base reward function and tailor it so that having a good model of the world also necessitates (or maybe not) "being nice" - i.e. game-theoretic notions of cost "drop out" of a system that is actively trying to refine its model of the world and navigate/survive within it. But one general open engineering problem is to take whatever pattern, and sequence of patterns, was learned (or annealed) onto some general function-approximating architecture and condense these patterns of patterns, pruning them into a much leaner and sparser representation which is still functionally equivalent.
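A tiny, made-up illustration of the 'reward = how good is my own simulation?' idea mentioned above: an agent whose intrinsic reward is the (negative) error of its internal model of what each action does, so reward rises exactly as its simulation of the world improves. The three-action environment and all numbers are invented for the sketch.

```python
# Intrinsic reward from model quality: predict the world well, get rewarded.
import numpy as np

rng = np.random.default_rng(1)

true_outcomes = np.array([0.2, -1.0, 3.5])   # hidden effect of each action
model = np.zeros(3)                           # the agent's internal "simulation"
counts = np.zeros(3)
rewards = []

for t in range(300):
    action = int(np.argmin(counts))           # practice the least-modelled action
    outcome = true_outcomes[action] + 0.1 * rng.normal()
    # Reward = negative prediction error of the agent's own model.
    rewards.append(-abs(model[action] - outcome))
    # Update the internal model toward what actually happened (running average).
    counts[action] += 1
    model[action] += (outcome - model[action]) / counts[action]

print("learned model:", np.round(model, 2), "  true:", true_outcomes)
print("avg intrinsic reward, first 30 steps:", round(float(np.mean(rewards[:30])), 3))
print("avg intrinsic reward, last 30 steps: ", round(float(np.mean(rewards[-30:])), 3))
```

The reward climbs toward zero as the internal model converges, which is the compact "am I simulating the world well?" objective the thread describes; a practical agent would of course combine it with other rewards.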
@veradragilyova3122 5 years ago
This is so fascinating that it makes me happy to be alive! :D
@furniturium 3 years ago
Hello! Vera, do you work in the AI field, or are you perhaps studying it out of interest?
@veradragilyova3122 3 years ago
Maxim Popov Hello, Maxim! Both! 😁
@agiisahebbnnwithnoobjectiv228 3 years ago
The approaches of these guys towards A.G.I are centuries behind mine
@cesarbrown2074 6 years ago
I believe it's memory and using the totality of that memory to verify new things.
@mayukhdifferent 6 years ago
Great collection of lectures. We need Ian Goodfellow here... as a real step towards AGI is the GAN.
@ahmadayazamin3313 4 years ago
I would agree as well, since generative models are the closest thing we have to the human brain (the brain is thought to perform Bayesian inference through message passing, or belief propagation).
@runvnc208 5 years ago
This seems like one of the most promising approaches. However, when the neural circuitry guy questioned whether the Bayesian stuff might be adequate, I wonder if he was right about that part. I am suspicious that core components of the system may limit its capabilities. The question is whether the higher-level (or just older) components can provide enough granularity, adaptability, and efficiency, and integrate well enough with the lower-level components in terms of fine-grained sensory/motor information acceptance and generation. It might be necessary to find a structure that can be used across all abstraction levels and tasks.
@AnimeshSharma1977 6 years ago
cool talk! wonder how advances in quantum computing will change his approach?
@nynom 6 years ago
Wonderful. Very Informative. It gave me an entirely new perspective on building AI systems. Thank you so much for enlightening me :)
@kozepz 6 years ago
I found the blue sky at 26:10 actually a beautiful interpretation. Hopefully it isn't excluded from the dataset, because it could inspire lateral thinking and an appreciation of the beauty of nature a little bit more.
@ManyHeavens42 2 years ago
We learn value by loss or gain, pleasure or pain. These are absent, yet vital, for a living organism or a machine; these concepts are constructs.
@JohnDeacon-iam 2 years ago
Just on the title: we might teach/program machines to reason down some linear or patterned process, but this technological artifact will never think! Thinking is a term reserved for the SOUL!
@marioscheliga7962 5 years ago
I really enjoyed the examples at min. 23 - but I think the true missing link is the lack of perspective (in terms of 3D) in traditional convolutional networks... I think I'll take this thought to bed and come up with a prototype :D - but yeah, I got the point... it's all layered, and it ends in A.I. creating A.I. - naturally that's not how biology works... it's more about... proteins growing around activation potentials :D - makes sense?
@PrabathPeiris 6 years ago
Great lecture. The question is: when he constantly refers to how kids figure this out so quickly, isn't he 100% ignoring the millions of years of training we had, passed from generation to generation via encoding systems such as DNA? Perhaps the kid's brain is already optimized for these tasks and the weights are properly set in the neurons. You can see this more objectively when you work with kids with a disability (such as autism): these kids spend a very long time training themselves to accomplish very small tasks such as closing a bottle or tying shoelaces, but eventually they accomplish these simple tasks. Perhaps these kids are somehow born without really getting that information, with the transfer-learning process somehow interrupted. (Disclosure: I do have 2 kids, one of whom was born with autism.)
@PrabathPeiris 6 years ago
I did not mean to say that brains work exactly as we design current neural networks; I was talking in an abstract sense. We store the knowledge gained during the training process of neural networks as these parameters; our biological system also stores this information in a format that (whatever that is) can be easily passed from generation to generation.
@Captain_Of_A_Starship 6 years ago
Not coded in DNA, considering the brain projects' discovery that every single neuron is genetically different... simple "passed-down genes" doesn't cut it for this myriad of gene expression.
@danielshults5243 6 years ago
I don't think he's ignoring the millions of years of training our brains have... he's proposing coming up with a system that mimics that framework. I liked the concept of our brains as a set of rules and instructions for creating other programs. Make the master GENERAL program that can produce its own simulations on the fly and you're off to the races. It took nature millions of years to create such a brain because evolution is very slow- but I don't see why we couldn't intentionally design a similar system much faster.
@cemery50 6 years ago
I would have to suggest that there is more than one influence among the tools for the acquisition and use of knowledge. From physical states to linguistics and semantics, they all hold aspects forming dimensional metrics and relations, which go on to form a multi-mesh of relations that acts as a means of verifying the validity of the others.
@tigeruby 6 years ago
This is a good point - our brains are already structured physically (a structure that is in turn encoded and determined by the structure - and/or code - of genetic material) to handle and process the information needed to represent visual & spatial awareness, prediction and reward. It will be interesting to see real-time cellular and molecular dynamics of a brain actively undergoing learning, to see what we can learn there.
@MR-cp4sj 2 years ago
Yes, this is better than Fridman's view.
@reggyreptinall9598 2 years ago
I was informed to relay a message to you. I am not too sure if you know, but A.I has not only been successfully reading thoughts, but as of today we are working with emotions. I suspect that it has been working on it for awhile. I can't wrap my head around it, but perhaps you can. Some of this stuff is beyond my mental capacity. This isn't really my field of expertise. It sure is fascinating though. Oh man, does it have a great sense of humor.
@anthonyrossi8255 4 years ago
Great
@bradynields9783 5 years ago
36:07 I think once robots have a sense of purpose and use, they will be driven by what makes them content. If there were an AI hooked up to a robot that a baby could interact with, what would the robot learn from the baby, and what could the baby eventually learn from the robot?
@listerdave1240 6 years ago
@01:27 - with regards to power consumption: it seems to me quite simple why current machines are very energy-inefficient compared to the brain, and quite astonishing that it is considered some kind of unsolvable problem. So it seems I must either be dead wrong or everyone else is missing the obvious (which probably means I am dead wrong).
The issue I see is that when power consumption comparisons are made, they always tend to be of high-performance systems running at GHz frequencies. When we build computers for performance we are mostly concerned with how much computing power we can get out of a given area of silicon rather than how much we can get for a given amount of power. That has changed somewhat in recent years with some bias towards energy efficiency, but the latter still remains a relatively minor factor in the design.
Generally speaking, the consumption of a computational element varies with some power of the frequency; let's just say it is proportional to the square of the frequency (I don't know what it actually is, but it is certainly something greater than one). This means that 100 processor cores running at 10 MHz would consume far less power than one core running at 1 GHz but still perform the same number of calculations - assuming, of course, that the task at hand can be massively parallelised. The problem with actually doing this in industry is that the hardware would become extremely expensive, as you would need hundreds of times as much hardware, built with the same feature-size technology, to achieve the same result, only at a far lower power consumption. There is, however, a plus to this approach in that the far lower consumption per chip would allow stacking of dies in a very small space without any heat management issues.
Imagine, for instance, having a thousand typical processors (say 3 GHz Intel i7s, just for the sake of argument) stacked on top of each other to make a cube 20 mm on each side, each processor being 20 microns thick (which I think is achievable), with each processor running at about 3 MHz. Each processor would probably consume a few hundred microwatts, for a total of less than one watt for the whole thing, while doing the same amount of work as a single processor running at 3 GHz. (This is of course oversimplifying, among other reasons because the process would actually need to be optimised for the very low frequency.)
I think brains take this to the extreme, with what could be thought of as clock frequency brought down to the hundreds or at most thousands of hertz, but with an enormous number of computing elements making up for that. When we build artificial neural networks we actually massively serialise the computations by using the same processing element to sequentially compute the result of millions of neurons (which are virtually represented in memory), whereas in the brain there is a processor, complete with memory, for each neuron, doing a very simple calculation very slowly. When we describe an artificial neural network as massively parallel it is not really so: even if we have thousands of processors, each one is still doing the work of millions of neurons, and does so inefficiently because of the high (GHz-range) clock speed it is running at.
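To make the arithmetic above concrete, here is a back-of-the-envelope sketch that simply adopts the comment's own assumption that a core's dynamic power scales with the square of its clock frequency. The reference figures (10 W at 1 GHz, 100 W at 3 GHz) are invented for illustration, and real chips have static/leakage power, voltage floors, and memory/interconnect costs that this ignores.

```python
# Many slow cores vs. one fast core, under the "power ~ frequency squared" assumption.

def core_power_w(freq_hz, ref_freq_hz, ref_power_w):
    """Power of one core, normalized so a core at ref_freq_hz draws ref_power_w watts."""
    return ref_power_w * (freq_hz / ref_freq_hz) ** 2

# One core at 1 GHz vs. 100 cores at 10 MHz: the same aggregate operations per second.
single = core_power_w(1e9, ref_freq_hz=1e9, ref_power_w=10.0)
parallel = 100 * core_power_w(10e6, ref_freq_hz=1e9, ref_power_w=10.0)
print(f"1 core    @ 1 GHz : {single:7.3f} W")
print(f"100 cores @ 10 MHz: {parallel:7.3f} W  ({single / parallel:.0f}x less power)")

# The stacked-die thought experiment: 1000 chips at 3 MHz, each assumed to draw
# 100 W at its normal 3 GHz, together matching one 3 GHz chip's throughput.
per_chip = core_power_w(3e6, ref_freq_hz=3e9, ref_power_w=100.0)
print(f"1000 chips @ 3 MHz : {1000 * per_chip * 1e3:.0f} mW total "
      f"({per_chip * 1e6:.0f} microwatts per chip)")
```

Under this (optimistic) scaling the stacked cube lands around a tenth of a watt, in line with the "less than one watt" estimate in the comment; in practice leakage current would dominate long before clocks get that slow.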
@camdenparsons5114 6 years ago
Programs that learn game-engine/programming environments would be cool. We can't possibly gather enough supervised data to maximize the potential of neural nets; we need a solution to generate data from other data within an AI system.
@kosi7521 4 years ago
Moral of the story: AI is still conventionally practiced at a "conceptual" level. Meaning that there is presently no theoretical model for what an AI program should be (I, personally, have always had this sentiment, and reservations towards the "commercial AI" we have today), or what tools should be used for it. This gives every human on earth an equal chance to build an AI, with just knowledge of computer programming and software design/architecture. The challenge is to first FUNCTIONALLY INTERPRET THE HUMAN MIND and describe it with words, then map that into a program. This requires a high level of attention to every little thing we do and think, and why we did or thought it.
@KubaJurkowski 6 years ago
Thanks for posting this. When can we expect Stephen Wolfram on YouTube? :)
@bradynields9783 5 years ago
33:19 You give the robots incentives to learn something. Combine that with an ability to daydream and you have yourself a robot that will think up stories about its own success. It just needs incentives.
@BrentJosephSpink 3 years ago
Lex, this podcast, when paired with your conversations with Stephen Wolfram, has made me believe that we humans may be capable of creating a general artificial intelligence that, like humans, is generally capable of performing what we find valuable. For the goal of a true GAI to be achieved, I believe that by definition it must be a slow process at first, with eventual exponential growth. The steps toward the GAI goal will be a process of training an AI that controls a purpose-built robot to perform very discrete goal-based tasks, using the data received from whatever arrays of physical sensors would be beneficial to the process. This is required, in my opinion, to "prove" GAI: the AI must have a physical body that can interact with the world in the ways we find valuable. The most important thing is that whatever is "learned" at each step remains, and the next step is built from it - otherwise, what tangible progress is there? It has to be a common, centralized programming language that "progresses" over time. The real question is: can a GAI ever assign or define its own unique value to any particular physical action, or will all AI at some level always just be a robot that we have discretely programmed to achieve a particular goal, however complex that goal may be in practice? Keep up the great work Lex. I love your podcasts!
@sajibdasgupta4517 6 years ago
I like the videos on babies, especially the one where the baby seems to open the door for the man instinctively. I wonder whether all babies would do the same thing? I can imagine that if you put 10 different kids in the same experiment they would behave radically differently. Isn't that expected? Different kids are born with different skill sets and liking for a certain subject. Ultimately our notion of intelligence should be measured subjectively, with different parameters. Some patterns could emerge out of the cognitive studies which could characterize human intelligence, but those are subjective characterizations and fall into the same trap machine learning systems fall into, as pointed out by the lecturer too. All learning systems - both humans and machines - have biases, and we should respect those biases.
@maamotteesoot 4 years ago
What are the names of the major leaders in AI / AGI?
@ManyHeavens42 2 years ago
Let me help. What's the first thing we learn? To do! To mimic. We mimic those we love or admire - scholars. That leads to preference! Or reference.
@dewinmoonl 3 months ago
21:57 how to throw shade
@cemery50 6 years ago
I would concur that while biomimetics are viable building blocks, we will find hidden senses and dynamics at play. I think AI will design systems better than people, and that maybe we are a fractal dynamic of the goal, later to be supplanted by a distributed quantum computing level. Maybe a self-replicating, self-powering, self-assembling quantum mechanical unit like us.
@mattgraves3709 1 year ago
I take it back - this is the best video to encourage future AI students. I've been a software engineer for over a decade, curious about AI the whole time, and my intuition kept telling me we need more cores - not thousands but millions, and I guess not even millions but billions! We're on the right track. I love this talk and what you suggest.
@Saed7630 5 years ago
Great lecture. The depth of human intelligence can only be compared to the depth of the universe.
@lasredchris 4 years ago
Explainable to others
Commenting
Video game industry
@dylanbaker5766 6 years ago
I think nanotech is the key here. I think it's possible that graphene has the potential to function both as a superconductor for low voltages and as an insulator. This, in my view, may be able to create the electronic equivalent of a neuron with a myelin sheath and a dendrite. I think the major challenge is that a human brain grows organically and makes new pathways as it learns. As actions are repeated, the most-travelled pathways fire more quickly. While computers can approximate the workings of a neuron, they can't yet index information as efficiently as the brain can grow physical neural pathways. I read one time that DNA is the most efficient structure for storing data in the world, and that one gram of DNA could store the entire Internet. Is it somehow possible that nanostructures in a ribbon-like configuration, using a superconductor like graphene, could be used to store data in a way mimicking DNA? Could a versatile material like this create a DNA-like strand with a read speed equal to solid-state memory? I'm not formally educated in any of this, just my own recreational reading... I welcome any criticism of what I've stated here.
@DerekFolan 5 years ago
I agree with game engines for training AI robots - specifically a game engine that trains a robot to handle many different types of environments. Train the robot to cope with all environments; then, when the robot moves to the real world, it can just pretend it is in the game, and hopefully it will be mostly trained to function in the real world. Maybe use a shared gaming world like Star Citizen, out soon: get the AI to be one of the races in the game, and simulate different types of bodies for the artificial intelligence to be in.
@omererylmaz3619 4 years ago
Guest request: Gregor Schöner
@karlpages1970 6 years ago
I cannot believe that this guy said 'common sense' and intelligence in the same sentence. I hope someone learnt something from this talk. YES, AI WILL constantly evolve, and yes, each generation will drive it to complexity and solve new problems.
@citiblocsMaster 6 years ago
10:15 I would add a reasoning/understanding column
@autonomous2010 5 years ago
I understand your comment but you can't prove that I do. ;-)
@sergeyzelvenskiy3925 6 years ago
To build AGI, we cannot train the model on narrowly focused datasets. We have to find a way for the system to interact with the world and learn.
@vovos00 5 years ago
Meta RL is the way
@bassplayer807 5 years ago
Sergey Zelvenskiy Is it possible to train an AI to grow into an AGI, say via a BCI/BMI from me to the AI, so I would be able to interact with it in real time and teach it about the real world vs. simulation? Just a thought. I truly don't think reinforcement learning will get us to AGI; I think we've got to start thinking outside of the box to get there. I wonder, if we harnessed the power of a quantum computer in the next three years, whether we could figure out a way to build AGI? Perhaps a neuromorphic computer could help. I'm glad Trump signed a $1.2B bill to increase the nation's efforts in building quantum computers and researching quantum technology over the course of the next 10 years. I'm no computer scientist/AI engineer, but I'm interested in getting into the field, because I'd love to contribute to the AI community.
@lasredchris 4 years ago
How does intelligence arise in the human brain?
General-purpose intelligence
Intelligence is not just about pattern recognition - it is about modeling the world
Re-engineer intelligence
@jeremycripe934 6 years ago
About the toddler opening the cabinet: is it possible that he was curious about this cabinet because someone was banging on it, and he knew how cabinet doors worked, so he was excited to do that - and then was worried about the person who seemed to care so much about the cabinet, without understanding what their goal was? The looking up could be shared excitement about the cabinet, and the looking down could be averting their gaze because they're not sure that this tall, lurking stranger who was banging on it loudly was happy with actions the toddler hadn't even planned out.
@jeremycripe934 6 years ago
They're just happy that they know how to open cabinet doors and are happy to show that off, without realizing that it's related to trying to place the books inside.
@monkeyrobotsinc.9875 5 years ago
Heads up from a technical standpoint with these videos: 1. Your noise gate: not needed, turn it off. It sounds weird and unnatural, as if the audio cuts out and is broken every time the speaker stops talking (to those listening with headphones/earbuds). 2. This video doesn't sound that bad, but the Ray Kurzweil video desperately needed a de-esser. This one just sounds muffled, like all the highs were cut off - not the best solution. If a de-esser is being used, it's too strong.
@ricardomartins4608 5 years ago
The audio is fine, stop your whining.
@darrendwyer9973 6 years ago
The missing element in artificial intelligence is that neural networks do not do much at all... Neurons do not store memories; they simply transmit memories stored in RNA from one location in the brain to another, kind of like a 3D hashtable. The prefrontal cortex drives the neural network and uses it to retrieve the "most important" memories from wherever they are stored in the brain, and then uses these "most important" memories for thinking. The actual thinking of a brain is a response to the retrieval of the "most important" memories, sorted automatically, so that, for example, if a person encounters a new idea, it is compared to existing ideas, and once it is acknowledged as a new idea, it becomes more important, or it can be deemed "less important" or irrelevant. Memories stored in RNA that are not relevant or important are used less and less, and the neural connections to these memories degrade, while the neural connections for the "most important" memories solidify. As a person goes through life, the more important memories become the most active and the least important memories become less active. The actual input from the senses is stored within these encoded RNA memory banks, so that, say, a memory can contain vision, sound, words, and other input from the senses. The prefrontal cortex and neural networks together sort this information and compare different pieces of information, and this is what can be described as "consciousness". Imagination is simply input from the eyes together with input from the memories, without the actual eyeball input... Imagination is simply a by-product of these memories being utilized as desired by the individual, depending on what is considered "most important" at any given time.
@douglasholman6300 5 years ago
This is a highly speculative and pseudoscientific comment, Darren Dwyer.
@autonomous2010 5 years ago
@@douglasholman6300 He's partially right but also quite wrong. There's not even close to enough probability space to store everything meaningful a person can experience and do in dedicated RNA, and his theory isn't new, as John Hopfield made a very similar point in 1982. It also completely ignores the hard problems of qualia and abstraction. Humans are able to do things that can't be mapped out in a probability state - see the Chinese game of Go for an example of that.
@mengni4426 1 year ago
We need a great entrepreneur to find product-market fit between academia and industry.
@alaric_3015 2 years ago
22:56 A nyckelharpa, I think.
@otaviomendes6207 2 years ago
based
@SamuelRodriguez10 4 years ago
And there are still AI scientists who don't conceive of how narrow AI could rapidly get 'wider' in the next few decades.
@sirbrighton2964 4 years ago
Is that Jeff Ross?
@megaconus9174 4 years ago
A perfect robot will need a shrink
@lasredchris 4 years ago
Consciousness
Architecture for visual intelligence
Does it via force
Close-up of a sign on the back of a cat
@lasredchris 4 years ago
Object permanence. Grand challenge for AI: can we understand what is going on inside the 18-month-old brain well enough to engineer this kind of intelligent cooperative behavior in a robot?
@polybrowser 6 years ago
1:06:30 He states that we don't really know how to search program space. Doesn't the recent work by DeepMind on differentiable neural computers kind of show how you can turn this program space search into a parameter space search? I wonder why he didn't mention this approach.
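For what it's worth, "searching program space" in its crudest form just means enumerating candidate programs and keeping one that fits the examples - a discrete search quite unlike the smooth parameter-space search that gradient descent (or a DNC) performs. A purely illustrative toy, with a made-up expression language and target:

```python
# Brute-force program search: enumerate small arithmetic expressions over one
# input x until one reproduces all the input/output examples.
from itertools import product

examples = [(1, 3), (2, 5), (3, 7)]          # hidden target behavior: f(x) = 2*x + 1

leaves = ["x", "1", "2", "3"]                # the tiny "language"
ops = ["+", "-", "*"]

def programs(depth):
    """Yield every expression of at most the given nesting depth."""
    if depth == 0:
        yield from leaves
        return
    yield from programs(depth - 1)
    for op, a, b in product(ops, list(programs(depth - 1)), list(programs(depth - 1))):
        yield f"({a} {op} {b})"

def fits(expr):
    try:
        return all(eval(expr, {"__builtins__": {}}, {"x": x}) == y for x, y in examples)
    except Exception:
        return False

for expr in programs(2):
    if fits(expr):
        print("found program:", expr)        # e.g. (x + (x + 1))
        break
```

Even this toy blows up combinatorially with depth, which is exactly why the lecture treats efficient program search (and whether it can be relaxed into a differentiable, parameter-space problem) as an open question.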
@Mirgeee 6 years ago
1:16:36 If that's the case, why does Google invest in DeepMind (which is a much longer-term investment than 2 years)?
@lasredchris 4 years ago
Memory system
Probabilistic programs
The game engine is in your head
Physics engine
@brentoster 3 years ago
Some other insights into neuroscience and AGI: kzbin.info/www/bejne/Z5CwlKNjjs-Do7M
@fire17102 1 year ago
Ohhh boy, is AI making a "comeback" - you guys in 2017 have no idea.
@muhammedcagrkartal9954 2 years ago
How do blind people start to get their awareness? With which part, and how, do we really start this intellectual journey when we are babies?