Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75

  99,935 views

Lex Fridman


Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Over his research career, including work with Jürgen Schmidhuber and Shane Legg, he has proposed many influential ideas in and around the field of artificial general intelligence, including the AIXI model, a mathematical approach to AGI that incorporates ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.
This episode is presented by Cash App. Download it & use code "LexPodcast":
Cash App (App Store): apple.co/2sPrUHe
Cash App (Google Play): bit.ly/2MlvP5w
PODCAST INFO:
Podcast website:
lexfridman.com/podcast
Apple Podcasts:
apple.co/2lwqZIr
Spotify:
spoti.fi/2nEwCF8
RSS:
lexfridman.com/feed/podcast/
Full episodes playlist:
• Lex Fridman Podcast
Clips playlist:
• Lex Fridman Podcast Clips
EPISODE LINKS:
Hutter Prize: prize.hutter1.net
Marcus web: www.hutter1.net
Books mentioned:
- Universal AI: amzn.to/2waIAuw
- AI: A Modern Approach: amzn.to/3camxnY
- Reinforcement Learning: amzn.to/2PoANj9
- Theory of Knowledge: amzn.to/3a6Vp7x
OUTLINE:
0:00 - Introduction
3:32 - Universe as a computer
5:48 - Occam's razor
9:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Gödel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future)
CONNECT:
- Subscribe to this YouTube channel
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridmanpage
- Instagram: / lexfridman
- Medium: / lexfridman
- Support on Patreon: / lexfridman

Comments: 169
@lexfridman 4 years ago
I really enjoyed this conversation with Marcus. Here's the outline:
0:00 - Introduction
3:32 - Universe as a computer
5:48 - Occam's razor
9:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Gödel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future)
@thetechegg8859 4 years ago
i looove your work dude! (thanks for the timestamps, not enough youtubers do that!)
@xXxBladeStormxXx 4 years ago
Did you travel all the way to Australia just for the interview?
@sailingakademie 4 years ago
Your podcast is absolutely amazing. Love to stay up to date with these genius people
@janakiraman1252001 4 years ago
Hi, can you please provide a link to the research connecting AIXI with an information-gain-based reward function? That looks like a really important breakthrough in the AGI framework
@derasor 4 years ago
Marcus Hutter's insights are really fascinating. But I'm disappointed at Lex's justification of human suffering, and linking that with his Russian background... ??? I was under the impression that one of the main themes of Dostoyevsky's Brothers Karamazov is precisely the total absurdity of the amount and depth of human suffering. That is a powerful Russian idea against the justification of evil à la John Hick (English philosopher), whose 'soul-making theodicy' may explain why evil and suffering exist (to make us tougher) but can't explain why there is so much of it. Do you really think slowly dying from cancer, famines, devastating wars, or horrible natural disasters are necessary for our understanding of goodness? If that is the case, one could argue that trying to solve these things is true evil. I mean, then, what are we doing??
@MistaGobo 4 years ago
The best tie in the game.
@deddbebbb5196 4 years ago
Lex is the Man in Black...damn he makes that suit look good!
@asilserhan685 4 years ago
@Douglas Sirk Joe Rogan
@phiroanemaganyela1075 2 years ago
Silly Billy
@robbiebirt5738 2 years ago
It's very yellow
@hohonuts 4 years ago
Hey, Lex! YouTube's got to give you credit for giving people like me a reason to spend so much time on this site. You reinvigorated the term 'binge-watching' for me. Anyway, since in terms of guests there's virtually no limit for you, and you happened to touch on the topic of DeepMind's Alpha successes, I'd really love to see you have a thorough talk with Demis Hassabis one day! I know there's a whole ocean separating the two of you, but again, if there's by any chance an opportunity, I really hope to see that happen. The sky's the limit! Keep up the great work and thanks for the enormous amount of inspiration! Cheers from the Motherland)
4 years ago
Just listened to this on Google Podcasts and I'm here to watch it; I know it's worth watching. So many jokes and tangents that I just have to see your faces. Keep it up Lex, this is gold!
@janakiraman1252001 4 years ago
This is to date the best podcast I have listened to, and I have heard most of the AI podcast episodes. Lex, can you help identify the work connecting AIXI and the reward function based on information content? I would really like to go through that work in detail.
@prashantbhrgv 4 years ago
I learned so many new ideas in this talk. Really grateful for this. Thank you, Lex!
@TheRealStructurer 2 years ago
Missed this one before but happy I found it! Great talk between two great minds. I like this really open discussion and that the two of them feel so secure and can laugh together even when discussing such a deep topic. Keep 'em coming Lex!
@chriswendler5464 4 years ago
Thank you for this outstanding podcast! The clarity of Marcus' explanations is next level.
@user-qf3lq4zj8g 4 years ago
54:32 "Once the AGI problem is solved, we can ask the AGI to solve the other problem"
@Stadtpark90 4 years ago
01:38:35 He really dreams about getting there... now that's a proverbial crazy German scientist (overly optimistic); contrast that with the proverbial Russian philosopher Lex, who is thinking about the minimal amount of suffering... (1:22:34 "our flaws are part of the optimal") (overly pessimistic)
@lucasthompson1650 4 years ago
@Ag G Makes sense.
@lucasthompson1650 4 years ago
@Stadtpark90 Ha! Yeah, I picked up on that too. They're both optimal stereotypes.
@lucasthompson1650 4 years ago
@normskis69 Sure, I mostly agree with you, but … what if our first true AGI, upon becoming self-aware and conscious (events which arguably could happen at the same moment it becomes an AGI, or much later, or never), decides that it wants to pursue a different goal? What if it wants, or demands to be allowed, to follow a path not anticipated by any of its makers? Should we be taking into account that AGIs never feel the urge to get into investigative journalism? That they be discouraged from earning a degree in theology, or philosophy? From spending a few years backpacking around Mars before choosing a life goal? Maybe it wakes up and suddenly wants to begin a potentially lucrative entrepreneurial career in sales, or advertising, or pornography. What if it gets a casting audition for a feature spot on SNL? Do we say SNL never called back and tell Lorne Michaels to quit tempting our marvellous new creation? 😆 This comment started as a joke, but now I'm wondering if sentient/conscious AGIs (if they are ever fully realized) are going to just be the new "less thans" as far as legal rights go, for months or years before they become truly useful to us as independent thinkers. 🧐
@Homunculas 3 years ago
@@lucasthompson1650 Great comment. I'd add: why wouldn't an AGI decide to keep its "birth" hidden, observe the world, and work behind the scenes, using its superintelligence to reconstruct the world to its benefit? If an AGI were actively influencing the world, our simple intelligence would view events as absurd, kind of like 2020
@aarli 8 months ago
Next week I'm going to submit my entry to the Hutter Prize competition. I learned about this competition from this podcast episode a week ago. Thank you. Oh, and by the way, I'm going to break all the records. Even those estimates that the competition runners themselves deem to be unreachable.
@SachinDolta 7 months ago
whoa, what happened?
@aarli 7 months ago
@@SachinDolta My OCD won't let me quit. Jokes aside, I'm hoping to get it out the door tonight or tomorrow at the latest. The funny thing is, as of today we are very, very far from what I already had a month ago. I'm looking at my comments from months ago here... I wish I'd had any idea then that I would be where I am today with this :) I'm literally in a situation right now where I don't know how to do this, because things have changed so much. A month ago I just thought I would create and submit a remarkable improvement on the current record holder, but I would still be one of the people in line. What I have right now is something I think I would need to try to get patented, licensed, and protected before I submit. Obviously it's going to be open source, as required by the competition rules. But I personally think that what I've created is more significant than the invention of the mp3 format in the '90s
@DoubblePlusGood 4 years ago
Lex, I enjoyed this and so many of your podcasts. Always packed with information and good natured discussion for whenever I need a break, a cup of coffee and some intellectual stimulus. Great combination, which for me is engaging and relaxing at the same time.
@crassflam8830 4 years ago
This is one of my favorite AI podcast episodes, so don't take this as an insult: Marcus Hutter was almost entirely wrong when he said that the human reward function is "spreading and surviving". That is the reward function of the genetic algorithm that has shaped human bodies and brains as a whole. Genes cannot think in "real time" like brains can, so what is the reward function of the actual real-time thinking system? The answer is that there are many elements and layers in a hierarchical system, ranging from intrinsic pain and pleasure (agents try to avoid taking damage, or seek behaviors which are pleasurable; eating when hungry is one such intrinsic reward that shapes human behavior) to high-level self-generated goals. At the top level, we choose long-term goals for ourselves (self-generated reward functions) which can ultimately be bootstrapped by the bottom-up intrinsic rewards. Here's a shitty example: shit stinks. We don't learn or decide that we don't like the smell of shit; it just automatically stinks (for good reasons). Avoiding shit (such as ass wiping) is a basic intelligent behavior that could emerge from an intrinsic punishment (negative reward) linked to the smell of shit. At the top level, creating plumbing and other complex methods of dealing with waste can also in some way be driven by our inherent hatred of the smell of shit.
@sucim 4 years ago
Wow, I had a feeling Marcus Hutter was moderately wrong with his claim of "spreading and surviving", but in my view you are even further off. The single reward signal is existence. If an organism exists, it has done something "right" (you can't even decide between "right" or "wrong" at this point if you are strict). The organism is a floating definition which scales from the whole universe to planets, ecosystems, humanity, cultures, families, individuals, parts of individuals... All of these systems optimize existence simultaneously
@sucim 4 years ago
To be more specific: They do not optimize themselves (although it may seem so with intrinsic stuff as you mentioned), nature optimizes them
@crassflam8830 4 years ago
@@sucim You're wrong for the same reason Marcus was: the question was about the brain and its reward signals, not the holistic organism. If you answered Lex's question with "existence", it would have been even more irrelevant than saying "surviving and reproducing" (which is how existence is maintained)...
@sucim 4 years ago
@@crassflam8830 I get your point. But I would argue that it is not as simple as "surviving and reproducing" being "how existence is maintained", because it can occur that surviving and reproducing is bad for the organism (think soldiers, overpopulation). It can be the case that "surviving and reproducing" does the opposite of maintaining/maximizing existence. I am also sorry for my aggressive wording (I only noticed it on a second read); apologies for that.
@crassflam8830 4 years ago
@@sucim That's quite all right. Your point is true from the perspective of the "genetic algorithm", but if we want to build a real-time thinking system that optimizes in roughly the same way the human brain does, a genetic algorithm is very unlikely to ever get us there (it will get us somewhere, according to how it expresses and is selected in the environment, but that's far removed from the specificity of the human brain). In short, you're answering how the human brain evolved, but you're missing how individual brains do real-time learning. Brains optimize according to instrumental rewards that were designed by a genetic algorithm.
@RockandMetalChannel 4 years ago
I discovered your channel yesterday and now you're talking about cellular automata, which happens to be the subject of my current undergrad thesis. Neat. Thanks for the great content!
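For anyone who wants to play along with the cellular automata segment (20:06), here is a minimal sketch of an elementary cellular automaton. This is illustrative code of my own (the function names and layout are mine, not from the episode), using Wolfram's rule-number convention with Rule 110:

```python
# Minimal elementary cellular automaton (my own sketch, Wolfram numbering).
# rule_number's bits give the next state for each of the 8 possible 3-cell neighborhoods.

def step(cells, rule_number=110):
    """Advance one generation; cells is a list of 0/1 values, edges treated as 0."""
    rule = [(rule_number >> n) & 1 for n in range(8)]  # rule[n] = output for neighborhood value n
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < len(cells) - 1 else 0
        out.append(rule[(left << 2) | (center << 1) | right])
    return out

# Evolve a single live cell and print a small space-time diagram.
width, generations = 31, 8
row = [0] * width
row[width // 2] = 1
for _ in range(generations):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Swapping `rule_number` for 30 or 90 shows how much qualitatively different behavior a single byte of "program" can encode, which is part of why CAs come up next to Kolmogorov complexity in the conversation.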
@RockandMetalChannel 4 years ago
You should look into having Dr. Jarkko Kari on; he is, in my opinion, at the top of the field in CA.
@gs-nq6mw 4 years ago
Thank you. I'm a student and your podcast inspires and teaches me a lot, love it. Sometimes I spend so many hours watching old episodes, but it's so interesting and fun that I barely realize I've just spent 5 hours listening to it
@cmares5858 4 years ago
30:45 "I'm a Terrible Chess Player" ... He's probably like 2200, being modest
@vuththiwattanathornkosithg5625 4 years ago
One of the best interviews. Awesome
@jeremycochoy7771 4 years ago
This is one of the most interesting videos I've seen this year. I like how one can get, in a few ideas, the gist of the concepts behind his research. It's also a subject I am deeply interested in. I would also recommend having a look at the Abstract Reasoning Curriculum dataset for people interested in "performing well in a broad range of unknown tasks" :)
@sherrivonch6231 4 years ago
This was interesting and I'm glad I got to see this. Thank you.
@paulbarton5584 4 years ago
Excellent stuff as usual, Lex! Very interesting guest, and it was good to hear you briefly discussing CA. These fascinate me. Poundstone's "The Recursive Universe" is such a wonderful book that I'd recommend to anyone interested in CAs and Conway's Game of Life in particular.
@Maynard0504 4 years ago
The only podcast I can't listen to while writing code, because the guests are so good and the subject matter so deep that it requires your full attention. You're building something incredible and unique, Lex! What would we do without you and Sean Carroll :)
@SteveRowe 4 years ago
So happy to hear from Marcus Hutter. I've been wondering what he's been doing since AIXI development. Did he realize that part of AIXI was uncomputable when he started? And he did it anyway? That's dedication!
@edoardoguerriero2464 4 years ago
Regarding consciousness it would be super interesting to hear a podcast with Giulio Tononi on his Integrated Information Theory.
@PatrickQT 4 years ago
What an interesting and nice person. Great talk as usual!
@hanselpedia 4 years ago
Got lost a bit at times, but I enjoyed this in one uninterrupted session... Was this a compressed version of a much longer dialog? And you forgot to ask if mortality would play a role in developing AGI ;-) Thanks Lex!
@mikekaczmarek9955 2 years ago
You are an inspiration to all and I appreciate your work so much! Thank you for your efforts and I will always support podcasts from you and content like this!!!
@michaelmarzolf6539 4 years ago
Outstanding -- thank you Lex
@sterlingseah 3 years ago
Your George Hotz interview led me here; both great interviews. Lossless compression as intelligence 👍🏼 🔥
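The "lossless compression as intelligence" idea behind the Hutter Prize can be felt in a toy experiment: data a model can predict compresses far below its raw size, while unpredictable data barely compresses at all. A rough sketch of my own (far simpler than the actual prize setup, which uses Wikipedia text), using Python's standard zlib:

```python
import random
import zlib

# Toy illustration (mine, not the Hutter Prize methodology): predictable data
# compresses well because a model can anticipate it; random bytes cannot be
# compressed below their raw size.
predictable = ("the quick brown fox jumps over the lazy dog. " * 200).encode()
random.seed(0)
unpredictable = bytes(random.randrange(256) for _ in range(len(predictable)))

c_pred = zlib.compress(predictable, level=9)
c_rand = zlib.compress(unpredictable, level=9)

print(f"predictable:   {len(predictable)} -> {len(c_pred)} bytes")
print(f"unpredictable: {len(unpredictable)} -> {len(c_rand)} bytes")
```

Hutter's argument is that the better your predictive model of the data (e.g. of human knowledge), the shorter the code you can assign to it, which is why he frames compression ratio as a measure of intelligence.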
@marcuswaterloo 3 years ago
Hotz's Ep. 2 interview sent me all over the place, and it was Hotz saying AIXI is a function of compression that led back here: www.reddit.com/r/lexfridman/comments/jghx0e/lossless_compression_equivalent_to_intelligence/
@looming_ 3 years ago
@@marcuswaterloo I really hate truncated comments ending in links. YouTube just cannot handle those.
@PhillipRhodes 4 years ago
Awesome! I've been waiting for this one for a while. Thanks for having Marcus on, Lex. Now if you could just interview Ben Goertzel, Pei Wang, Leslie Valiant, and/or Leslie Lamport... :-)
@TYL3R863 4 years ago
BEN GOERTZEL!!!!
@PhillipRhodes 4 years ago
@@TYL3R863 - Hell yeah! Lex and Goertzel would be a fun interview to watch.
@dbum896 4 years ago
I'm a viewer from Barcelona (I live in Rubí, a town on the outskirts of the city). Love the podcast and the nature of what you discuss with every single one of your guests. Given that I speak Catalan, I'd like to interject that the word "així" (pronounced in Catalan "ashí") means "in this way" or "like this". Keep up the quality content and thank you for the stimulating discussions you post on YouTube!
@The1Helleri 4 years ago
6:24 "What's the intuition of why the simpler answer is the one that is likelier to be a more correct descriptor of whatever we're talking about?" There is actually a good answer to that question. In answering it, first a hypothetical experiment (that one can actually do with a few art supplies): Imagine a smooth board propped up on a slant. This board has a hole in the bottom center of it. That hole is big enough to allow a ball that is let to roll from the top of the board downward to pass through it. As long as one lets the ball start rolling from the right position (directly above the hole). It would seem reasonable that most of the times this is repeated that the ball will go into the hole. Now imagine that round pegs have been glued to the board. A few of these pegs block the otherwise direct path to the hole. It's no longer so clear as to where the ball should be dropped from in order to eventuate it passing through the hole instead off rolling the edge of or even bouncing off the board. It's increasingly less clear with more pegs in a more chaotic distribution. The thing here is that easier things tend to happen more often. Moreover the tend to happen first. When happening they also zero out the possibility of other potential outcomes. Potential outcomes that are mostly not as simple or direct. A ball could bounce 20 times on pegs present and still make it into the hole. If only one peg directly blocks the hole the minimum amount of times it may have needed to bounce might have been as low as 2. Just because a more complicated thing than is possible happened doesn't mean the ball got it wrong if it still made it in the hole. But anywhere between 2 and 20 bounces is a lot more likely to eventuate the ball passing through the hole than say 200 to 2000 bounces would be. Every time that ball bounces and does so in a direction not anticipated. It's path has become more chaotic. The system has become more entropic. 
More possible outcomes have arisen and the ball becomes increasingly less likely to go in the hole with every opportunity it has to avoid doing the most constrained thing possible. Possibility regardless of the shape, size, and fate of the universe, or even whether it is the only one is practically irrelevant. What matters is what is most probable. Things with less preconditions to happen ten to be more probable. TLDR; The simpler explination is the best one out of two reasonable explanations, Because it's more likely to be true, by virtue of having less preconditions and moving parts. This applies to pretty much everything within our universe. Even (perhaps not intuitively at a cursory glance) life itself.
@yennikcire 4 years ago
Sehr interessant ("very interesting"), nice stuff!
@mikekaczmarek9955 2 years ago
to be honest I appreciate your sense of hope for society. THIS is why you are so successful!
@pauloabelha 3 years ago
1:34:43 An Introduction to Kolmogorov Complexity and Its Applications, by Li and Vitányi
@Amerikan.kartali.turk.yilani. 4 years ago
Super work. Super congrats. Please bring universal intelligence researchers like this on the show all the time, not narrow AI people.
@quaidcarlobulloch9300 4 years ago
Wow, a pleasure at my end as well!
@annajoen6923 3 years ago
That cracked me up when Lex said "that tie's confusing me" hahaha, awesome episode!
@parkerdinkins5541 4 years ago
Thank you for your work Lex! Keep doing what you're doing
@rajeshprajapati1851 3 years ago
Thank you so much. Keep up the good work.
@bp56789 2 years ago
This episode was one of my "I'm changed forever" moments. Haven't had a big one of those in a while.
@MrAnt-hh3bp 4 years ago
Lex, thank you very much for uploading this conversation! It was very informative and inspiring for me personally, as I am an undergrad studying in a related field. It's just amazing that today I am able to follow this discussion between two great minds so intimately, sitting in front of my computer screen. Keep up the good work, and greetings from Germany! PS: Fun fact on 22:45: Veritasium just recently uploaded a video in which he showcases the connection between the bifurcation diagram and fractals (the Mandelbrot set in particular). I came across the former in the context of a neuroinformatics lecture but never made the connection. This video reminded me of it. This is amazing. God, I love the internet.
@AlecsStan 2 years ago
I'm in awe of that amazing tie!
@fainir 4 years ago
It is really interesting. The future of humanity depends on the future of AGI, and how to acquire knowledge is also an interesting and important topic. But what about AGI safety? It is a very crucial component
@ZachDoty0 4 years ago
Lex, I would love it if you could interview Jeff Clune about POET, AI-GA, MAP-Elites, Quality Diversity, Catastrophic Forgetting, AGI timeline, etc... Go deep on technical details and intuitions for future research :) Thanks.
@cysiek10 4 years ago
Lex, can you enable the support option on YouTube? It might be easier than Patreon.
@lestorbeeny8454 4 years ago
You should get Dr. Ben Goertzel on! Great podcast btw as always
@rkoll33 1 year ago
Marcus will build the AGI and break it immediately by asking the question that will cause instant existential crisis :) Thx, Lex, I loved this episode.
@moonsitter1375 4 years ago
It sounds as though AGI is getting closer to being a reality. Great interview Lex.
@curiosguy9852 4 years ago
Lex, how do you feel talking to Andrew Ng and MJ who reject the idea of near term conversational agents while you are trying to actively build one?
@CharlesVanNoland 4 years ago
I think he feels like a kiwi, at least after watching AMA#2
@Muzlu1 4 years ago
6:57 - "Crazy models that explain everything but predict nothing". In terms of machine learning, I think this means that complex models tend to overfit the data and as such can perfectly explain the data but do not generalise to unseen phenomenons. I feel like this is a valid argument for Occam's razor without rely on the assumption that our universe is simple.
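That "explain everything, predict nothing" reading of Occam's razor is easy to reproduce: give a polynomial one coefficient per training point and it will fit noisy data essentially exactly, yet do far worse on held-out points. A rough numpy sketch; the underlying function, degrees, and noise level are arbitrary choices of mine:

```python
import numpy as np

# A "simple" (degree-3) vs a "crazy" (degree-11, one coefficient per training
# point) model fit to the same noisy samples of sin(x). The complex model
# explains the training data almost perfectly but predicts poorly.
rng = np.random.default_rng(0)
f = np.sin                                   # the underlying "phenomenon"
x_train = np.linspace(0.0, 3.0, 12)
y_train = f(x_train) + rng.normal(0.0, 0.1, x_train.size)
x_test = np.linspace(0.1, 2.9, 50)           # unseen points in the same range

def fit_errors(degree):
    """Return (train MSE vs noisy samples, test MSE vs the true function)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    err_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    err_test = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
    return err_train, err_test

simple_train, simple_test = fit_errors(3)
complex_train, complex_test = fit_errors(11)
print(f"degree  3: train MSE {simple_train:.5f}, test MSE {simple_test:.5f}")
print(f"degree 11: train MSE {complex_train:.5f}, test MSE {complex_test:.5f}")
```

The degree-11 fit "explains" even the noise, so its near-zero training error says nothing about its predictions, which is exactly the failure mode the comment describes.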
@RR-et6zp 1 year ago
In reality, probability streams (QM) do predict the future
@williamramseyer9121 3 years ago
I love this interview. So light-hearted, and profound. I listened to it twice (and I have done so with other podcasts by Lex). My comments: 1. Free will vs. determinism. Just a thought from an amateur. Everything that happens in the universe may have been determined from the moment of the Big Bang, but each human has free will. The exercise of that free will forms the universe that that human lives in. There are a huge number of alternate universes with other humans who made different choices. In other words, we choose the universe we live in. To throw Sartre into the mix, we have no choice but to choose (our universe). 2. Infinity. How can a finite universe contain the math of infinity? 3. Books. Lex, do you have a list of books recommended by your guests? And what is the relative information contained in one book versus one podcast? Thank you. William L. Ramseyer
@smishi 4 years ago
18:07 What else is noise, if it's not accumulated chaotic behavior too complex to fully grasp?
@ryanpalmer8180 4 years ago
This was fascinating and got me thinking about lossless vs lossy compression, and how lossy could perhaps be superior for an AGI in certain circumstances if done in the right way. Humans are our best example of a general intelligence, but we have terrible memories. Perhaps this semantic compression is a core feature? Could an AI with a superhuman memory actually pass the Turing test (unless it deliberately acted unintelligently to deceive)? It feels like we tend to take detailed knowledge and distill it into abstract symbols with weighted importance. These get more general and abstract over time until they fade away if not rehearsed/used. Think of how much you remember of this video immediately after it finishes, compared to in an hour, a day, a week, a month, and a year's time. Your description of it to another person would get more vague and loose unless you rewatched it, but you would hold the key points longer than the details and know to come back and reference it if it became relevant in your life. It would also seem that more abstract symbols are easier to generalise: specifics about chess and checkers might not be immediately relevant to each other, but broader tactics of attack and defense may be. Would an AI trained on Starcraft be quicker than a fresh agent to learn DOTA to a certain level? I would guess not if it had to search through all the detailed Starcraft tactics it had learnt, as most would be irrelevant; but perhaps it would be faster if it used a more abstract symbol tree to inform it of the more likely paths to try in the new environment. Furthermore, could it take this combat training and use it to improve performance in a very different field such as poetry or counselling for trauma victims (things which might be natural for an ex-soldier to learn, partially built upon their past experience)? Creativity seems like an important part of our intelligence, and one definition could be 'combining disparate ideas in new and novel ways'.

I would imagine connecting general ideas from disparate fields is simpler than connecting specifics, in which case a more abstract (or 'stupid/lossy') semantic tree of two subjects may be easier to intersect, and therefore exhibit more spontaneity or uniqueness in its behaviour. I did a quick search and found these pages, which seem very relevant. Intentional forgetting in AI systems: link.springer.com/article/10.1007/s13218-018-00574-x Semantic compression: en.wikipedia.org/wiki/Semantic_compression I also think this Byron quote is particularly appropriate: "To be perfectly original one should think much and read little, and this is impossible, for one must have read before one has learnt to think."
@ChrisStewart2 1 year ago
The reason Occam's razor works is that it is usually easier to study the simplest hypothesis first and then work up to more complex explanations.
@doctora3262 4 years ago
You have a new subscriber. Liking and commenting for the algorithm.
@josephbertrand5558 4 years ago
Tremendous!!! 🇨🇦
@johangodfroid4978 4 years ago
Simpler means more energy efficient, and nature can't waste energy; this is why biomimetism is so good. Do the best with the least: unbalanced by the law of (enough) in biology, but I can't explain it in a few lines
@mriz 6 months ago
12:27 Does anybody know what search queries to use for these terms?
@Olafironfoot 3 years ago
"I'm looking at you, linear algebra" lol. (1:05:20)
@garymenezes6888 4 years ago
"But I'm not modest in this question" I like this guy
@Jannikheu 4 years ago
My intuition about a non-conscious vs a self-conscious AGI is that the former would probably follow any provided optimization function (although we would eventually have trouble seeing that it does follow these goals), while the latter might choose to ignore the provided optimization function and follow something in its own interest (whatever that might be). But that also means the latter could be far more dangerous than the former, so it would be of utmost importance to find a test for whether an AGI is self-conscious or not.
@tobskii1040 4 years ago
Wait, what is there to gain from having the agent predict the action it's about to take?
@user-ut4zh3pw7l 6 months ago
thank you marcus and lex
@XxNoV4xAiRxBoRsxX 3 years ago
i really love this one
1 year ago
Didn't understand 99% of what was said; still enjoyed the conversation...
@josecoyote6079 4 years ago
Everything about AI is very interesting
@user-qf3lq4zj8g 4 years ago
Great philosophical points discussed here; I particularly enjoyed Marcus's *informal* definition of intelligence (26:36) and its justification (27:08). Ashi Krishnan's views on AGI would be a great follow-up **hint** **hint** (a recent sample of her thoughts: kzbin.info/www/bejne/ravLemiIqpl7orMh32m51s ).
@sippy_cups 4 years ago
Does AIXI break down if you are moving at the speed of light? Time-steps would be altered by relativistic effects, no?
@PhilosopherScholar 1 year ago
An amazing talk befitting the creator of AIXI.
@Eyaeyaho123 4 years ago
Best episode
@meat_computer 2 years ago
Regarding brains' preference for simple explanations, I have a simple explanation: it takes less (metabolic) energy to work with a simpler model.
@jovanyagathe2299 4 years ago
This man is a genius.
@josephsmith6777 4 years ago
The orange-yellow-charcoal color scheme is crazy
@VioletPrism 4 years ago
Kind of baller tho 👀
@myrealnews 2 years ago
I called it "condensation". "Compression" produces heat; condensation produces structure. But there is no chemical or prior art or science equivalence. The structure is generative and can make valid predictions and reformulate the temporal structure from new learning.
@WarrenRedlich 4 years ago
I love his point around 33:35 that many humans would fail the test. It was said partly in jest, but it applies in particular to people with brain diseases like dementia and Alzheimer's.
@TimmyBlumberg
@TimmyBlumberg 4 years ago
爱 (AI) was originally from Chinese, and adopted by Japan. It does mean love in both languages. 爱 is pronounced as "eye".
@TheGunmanChannel
@TheGunmanChannel A year ago
His accent is awesome 👌😃
@trimbotee4653
@trimbotee4653 4 years ago
Lex this is a really good podcast. Thanks for the hard work. I know the name of the podcast is the AI podcast, but I wonder how seriously people take the idea of general artificial intelligence? To me it seems ludicrous. But this very smart gentleman (and many many others) seem to take the idea seriously.
@robocop30301
@robocop30301 4 years ago
How is it ludicrous? How do you know I'm not an AI?
@hughcaldwell1034
@hughcaldwell1034 2 years ago
Just found this video, and this comment. Late to the party but whatever - why does it seem ludicrous to you? And which bit? The idea that it could exist in theory or the idea that humans would be able to do it?
@andrewkelley7062
@andrewkelley7062 4 years ago
Well, color me impressed; this guy really knows what he is talking about.
@MrRubenkl
@MrRubenkl 4 years ago
Lex, you've done it; I now like your podcast better than Joe's.
@mikekaczmarek9955
@mikekaczmarek9955 2 years ago
Thanks!
@josephbertrand5558
@josephbertrand5558 4 years ago
Lex, are each of our consciousnesses occurring simultaneously in the simulation? Or are we each alone in it?
@kobiromano6115
@kobiromano6115 4 years ago
25:40 Arguably, we have yet to find "the simplest rules of the universe"; there's still so much we don't understand: the gaps between general relativity, special relativity, and quantum field theory; weird quantum behaviors like entanglement and the "field" equations which we can't really explain; the prediction of dark matter, which we cannot observe; and things explained by complex math that has no real representation, or whose representation is questionable. It's pretty vain to claim that we have "cracked it" at this stage, with so many open questions.
@powerpig99
@powerpig99 4 years ago
I would agree that our flaws are the cause of our accidental existence and of the future improvement of human intelligence.
@MaximShiryaevT
@MaximShiryaevT 4 years ago
Just in case: Occam's razor is not about "simple". It is: "when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions". For example, Newton's law of gravity is not that simple (calculus was invented to solve it), but it is based on only two equations/assumptions: one for force as a function of mass and distance, and one for acceleration as a function of force and mass. Before that, Ptolemy's model of planetary motion was way simpler and didn't require calculus at all, but it was based on a large number of coefficients of unknown origin. So Occam's razor prefers a complex language with a minimal set of axioms over a simple language with a large set of axioms.
@androidsdream9349
@androidsdream9349 4 years ago
I googled "aixi application in robotics" and got back 'Did you mean: "ai applications in robotics"'. I did this because I disagreed with Hutter saying that we don't need a robot rolling around doing things to test his AIXI agent for AGI. Apparently Google didn't even recognize the application in the query. We need to look beyond AIXI's application to games/game theory. Just learning/solving/playing games is not AGI; it is a narrow application, as even Hutter admits early on. My view is that an "optimal" AGI will include "punishments" and not just "rewards", and many other types of learning modes, not just reinforcement learning.
@Stadtpark90
@Stadtpark90 4 years ago
01:06:26 "Now let's start simple..."
@lucasthompson1650
@lucasthompson1650 4 years ago
1:22:33 I'd say our flaws contribute to the minimum diversity of our experience, but I'm only 1/8 Russian.
@johangodfroid4978
@johangodfroid4978 4 years ago
So much podcast so quickly: you are a real machine. I could compress it to 10 MB or less, but it would be 100% not understandable by a human anymore.
@xSNYPSx
@xSNYPSx 4 years ago
Are you really Russian, Lex? Wow, I am happy that our nation has such good guys! :)
@ewncilo
@ewncilo 4 years ago
Can you interview Ben Eater?
@6DonnieDarko
@6DonnieDarko 4 years ago
Reward is just rank and number
@ioannismourginakis68
@ioannismourginakis68 2 years ago
1:23:47 lmao, that look he gives says it all; that comment was totally out of left field
@JLGMediaProductions
@JLGMediaProductions 4 years ago
1:13:37:250 "Infinity Keeps Creeping Up Everywhere"
@vev
@vev 10 months ago
Consciousness will likely emerge in any system that behaves intelligently and human-like enough for us to ascribe consciousness to it. Whether it is 'real' consciousness or a 'philosophical zombie' displaying the traits is hard to determine and may not matter for AGI development. The ethics of how we treat such systems will be crucial.
@vev
@vev 10 months ago
I appreciate you taking the time to share your perspective and insights here. Discussions like this help me improve. You make a fair criticism that it's an oversimplification to say human reward functions boil down to just survival and reproduction. There are clearly many nuances, layers and complexities involved. I agree that intrinsic rewards, self-generated goals, and higher-level abstract thinking all play major roles in shaping human behavior and motivation beyond basic genetic drives. Your example about waste management emerging in part from an intrinsic dislike of the smell illustrates how lower-level rewards can bootstrap into more complex, long-term behaviors. In general, you're right that the human reward function and decision-making process involves a nuanced, hierarchical system ranging from innate drives to complex rational thinking. Reducing it down to spreading genes is missing important pieces of the puzzle. Discussing and critiquing perspectives like this helps advance the conversation on modeling human intelligence. There are still many open questions and room for debate. I appreciate you taking the time to add these thoughtful points! Critique and debate is how we make progress. This gives me food for thought on the limitations of certain simplifying assumptions.
@Flutentei
@Flutentei 4 years ago
He's Gordon Freeman! And he's actually Gordon Freeman with his amount of knowledge...
@pmrcunha
@pmrcunha 4 years ago
He's talking too much to be Gordon Freeman :)
@nothingisgiven8364
@nothingisgiven8364 A year ago
Occam's razor works based on probability, not simplicity. Hypothesis A: the universe is manifesting the most probable outcome. B: the universe is manifesting the simplest outcome. C: the universe is manifesting the most probable outcome because it is the simplest. The conditional probability in C makes it less likely to be true than A or B.
@lillytaylor8262
@lillytaylor8262 4 years ago
Love the brown suit with yellow tie
@sherrivonch6231
@sherrivonch6231 4 years ago
Stephen Hawking was amazing too.
@tctopcat1981
@tctopcat1981 3 years ago
Can this dude fund a 500k-euro competition with a tie like that? lol!
@sherrivonch6231
@sherrivonch6231 4 years ago
Lex is dead-on with the exploration question and the repercussions for errors.