I really enjoyed this conversation with Marcus. Here's the outline:
0:00 - Introduction
3:32 - Universe as a computer
5:48 - Occam's razor
9:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Gödel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future)
@thetechegg8859 4 years ago
I looove your work, dude! (Thanks for the timestamps; not enough YouTubers do that!)
@xXxBladeStormxXx 4 years ago
Did you travel all the way to Australia just for the interview?
@sailingakademie 4 years ago
Your podcast is absolutely amazing. Love staying up to date with these genius people.
@janakiraman1252001 4 years ago
Hi, can you please provide a link to the research connecting AIXI with the information-gain-based reward function? That looks like a really important breakthrough in the AGI framework.
@derasor 4 years ago
Marcus Hutter's insights are really fascinating. But I'm disappointed by Lex's justification of human suffering, and by linking it to his Russian background... ??? I was under the impression that one of the main themes of Dostoevsky's The Brothers Karamazov is precisely the total absurdity of the amount and depth of human suffering. That is a powerful Russian idea against the justification of evil à la John Hick (English philosopher), whose 'soul-making theodicy' may explain why evil and suffering exist (to make us tougher) but can't explain why there is so much of it. Do you really think slow death from cancer, famines, devastating wars, or horrible natural disasters are necessary for our understanding of goodness? If that is the case, one could argue that trying to solve these things is true evil. I mean, then, what are we doing??
@MistaGobo 4 years ago
The best tie in the game.
@deddbebbb5196 4 years ago
Lex is the Man in Black... damn, he makes that suit look good!
@asilserhan685 4 years ago
@Douglas Sirk Joe Rogan
@phiroanemaganyela1075 3 years ago
Silly Billy
@robbiebirt5738 2 years ago
It's very yellow
@hohonuts 4 years ago
Hey, Lex! YouTube's got to give you credit for giving people like me a reason to spend so much time on this site. You've reinvigorated the term 'binge-watching' for me. Anyway, since there's virtually no limit to the guests you can get, and you happened to touch on the topic of DeepMind's Alpha successes, I'd really love to see you have a thorough talk with Demis Hassabis one day! I know there's a whole ocean separating the two of you, but if there's by any chance an opportunity, I really hope to see that happen. The sky's the limit! Keep up the great work and thanks for the enormous amount of inspiration! Cheers from the Motherland)
4 years ago
Just listened to this on Google Podcasts and I'm here to watch it; I knew it would be worth watching. So many jokes and tangents that I just have to see your faces. Keep it up, Lex, this is gold!
@pauloabelha 4 years ago
1:34:43 An Introduction to Kolmogorov Complexity and Its Applications, by Li and Vitányi
@crassflam8830 4 years ago
This is one of my favorite AI podcast episodes, so don't take this as an insult: Marcus Hutter was almost entirely wrong when he said that the human reward function is "spreading and surviving". That is the reward function of the genetic algorithm that has shaped human bodies and brains as a whole. Genes cannot think in "real time" like brains can, so what is the reward function of the actual real-time thinking system? The answer is that there are many elements and layers in a hierarchical system, ranging from intrinsic pain and pleasure (agents try to avoid taking damage and seek out behaviors which are pleasurable; eating when hungry is one such intrinsic reward that shapes human behavior) to high-level, self-generated goals. At the top level, we choose long-term goals for ourselves (self-generated reward functions), which can ultimately be bootstrapped from the bottom-up intrinsic rewards. Here's a shitty example: shit stinks. We don't learn or decide that we don't like the smell of shit; it just automatically stinks (for good reasons). Avoiding shit (such as by wiping) is a basic intelligent behavior that could emerge from an intrinsic punishment (negative reward) linked to the smell of shit. At the top level, creating plumbing and other complex methods of dealing with waste can also be attributed in part to our inherent hatred of the smell of shit.
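To make the layered-reward picture above concrete, here is a minimal Python sketch (a toy model; the names, weights, and state fields are all made up for illustration, not anything from the episode):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

State = dict  # a toy world-state: just a dict of observations

@dataclass
class LayeredReward:
    """Toy hierarchy: hard-wired intrinsic terms plus self-generated goals."""
    intrinsic: Dict[str, Callable[[State], float]] = field(default_factory=dict)
    goals: Dict[str, Callable[[State], float]] = field(default_factory=dict)

    def total(self, state: State) -> float:
        # Both layers simply sum here; a real agent would weight and learn them.
        return (sum(f(state) for f in self.intrinsic.values())
                + sum(g(state) for g in self.goals.values()))

r = LayeredReward()
r.intrinsic["avoid_stench"] = lambda s: -5.0 if s.get("smell") == "bad" else 0.0
r.intrinsic["eat_when_hungry"] = lambda s: 3.0 if s.get("hungry") and s.get("eating") else 0.0
# A top-level, self-generated goal bootstrapped from the intrinsic dislike of stench:
r.goals["build_plumbing"] = lambda s: 10.0 * s.get("plumbing_progress", 0.0)

print(r.total({"smell": "bad", "hungry": True, "eating": False, "plumbing_progress": 0.2}))
# -> -3.0: the stench penalty is partially offset by progress on the long-term goal
```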
@sucim 4 years ago
Wow, I had a feeling Marcus Hutter was moderately wrong with his claim of "spreading and surviving", but in my view you are even further off. The single reward signal is existence. If an organism exists, it has done something "right" (strictly speaking, you can't even decide between "right" and "wrong" at this point). The organism is a floating definition which scales from the whole universe down to planets, ecosystems, humanity, cultures, families, individuals, parts of individuals... All of these systems optimize existence simultaneously.
@sucim 4 years ago
To be more specific: they do not optimize themselves (although it may seem so with the intrinsic stuff you mentioned); nature optimizes them.
@crassflam8830 4 years ago
@@sucim You're wrong for the same reason that Marcus was wrong. The question was about the brain and its reward signals, not the holistic organism. If you answered Lex's question with "existence", it would have been even more irrelevant than saying "surviving and reproducing" (which is how existence is maintained)...
@sucim 4 years ago
@@crassflam8830 I get your point. But I would argue that it is not as simple as "surviving and reproducing" being "how existence is maintained", because it can happen that surviving and reproducing is bad for the organism (think soldiers, or overpopulation). It can be the case that "surviving and reproducing" does the opposite of maintaining/maximizing existence. I am also sorry for my aggressive wording (I only noticed it on a second read); apologies for that.
@crassflam8830 4 years ago
@@sucim That's quite all right. Your point is true from the perspective of the "genetic algorithm", but if we want to build a real-time thinking system that optimizes in roughly the same way the human brain does, a genetic algorithm is very unlikely to ever get us there (it will get us somewhere, according to how it expresses and is selected in the environment, but that is far removed from the specificity of the human brain). In short, you're answering how the human brain evolved, but you're missing how individual brains do real-time learning. Brains optimize according to instrumental rewards that were designed by a genetic algorithm.
@The1Helleri 4 years ago
6:24 "What's the intuition of why the simpler answer is the one that is likelier to be a more correct descriptor of whatever we're talking about?" There is actually a good answer to that question. To answer it, first a hypothetical experiment (one you can actually do with a few art supplies): Imagine a smooth board propped up on a slant, with a hole in its bottom center. The hole is big enough for a ball, let go from the top of the board, to pass through it, as long as the ball starts rolling from the right position (directly above the hole). It seems reasonable that most times this is repeated, the ball will go into the hole.
Now imagine that round pegs have been glued to the board, a few of which block the otherwise direct path to the hole. It's no longer clear where the ball should be dropped from so that it ends up passing through the hole instead of rolling off the edge or bouncing off the board entirely, and it gets less clear with more pegs in a more chaotic distribution. The point is that easier things tend to happen more often. Moreover, they tend to happen first, and when they happen they zero out the possibility of the other potential outcomes, most of which are not as simple or direct. A ball could bounce 20 times on the pegs and still make it into the hole; if only one peg directly blocks the hole, the minimum number of bounces needed might be as low as 2. Just because a more complicated thing happened than was necessary doesn't mean the ball got it wrong, if it still made it into the hole. But anywhere between 2 and 20 bounces is far more likely to get the ball through the hole than, say, 200 to 2000 bounces. Every time the ball bounces in a direction not anticipated, its path becomes more chaotic and the system more entropic: more possible outcomes arise, and the ball becomes ever less likely to go in the hole with every opportunity it has to avoid doing the most constrained thing possible. The shape, size, and fate of the universe, or even whether it is the only one, is practically irrelevant here. What matters is what is most probable, and things with fewer preconditions tend to be more probable.
TL;DR: Of two reasonable explanations, the simpler one is the better bet because it's more likely to be true, by virtue of having fewer preconditions and moving parts. This applies to pretty much everything within our universe, even (perhaps not intuitively at a cursory glance) life itself.
@SteveRowe 4 years ago
So happy to hear from Marcus Hutter. I've been wondering what he's been doing since developing AIXI. Did he realize that part of AIXI was uncomputable when he started? And he did it anyway? That's dedication!
@PatrickQT 4 years ago
What an interesting and nice person. Great talk as usual!
@janakiraman1252001 4 years ago
This is to date the best podcast I have listened to, and I have heard most of the AI Podcast episodes. Lex, can you help identify the work connecting AIXI and the reward function based on information content? I would really like to go through that work in detail.
@user-qf3lq4zj8g 4 years ago
54:32 "Once the AGI problem is solved, we can ask the AGI to solve the other problem"
@Stadtpark90 4 years ago
01:38:35 He really dreams about getting there... now that's the proverbial crazy German scientist (overly optimistic); contrast that with the proverbial Russian philosopher Lex, who is thinking about the minimal amount of suffering... (1:22:34 "our flaws are part of the optimal") (overly pessimistic)
@lucasthompson1650 4 years ago
@Ag G Makes sense.
@lucasthompson1650 4 years ago
@Stadtpark90 Ha! Yeah, I picked up on that too. They're both optimal stereotypes.
@lucasthompson1650 4 years ago
@normskis69 Sure, I mostly agree with you, but… what if our first true AGI, upon becoming self-aware and conscious (events which arguably could happen at the same moment it becomes an AGI, or much later, or never), decides that it wants to pursue a different goal? What if it wants, or demands to be allowed, to follow a path not anticipated by any of its makers? Should we be ensuring that AGIs never feel the urge to get into investigative journalism? That they be discouraged from earning a degree in theology, or philosophy? From spending a few years backpacking around Mars before choosing a life goal? Maybe it wakes up and suddenly wants to begin a potentially lucrative entrepreneurial career in sales, or advertising, or pornography. What if it gets a casting audition for a feature spot on SNL? Do we say SNL never called back and tell Lorne Michaels to quit tempting our marvellous new creation? 😆 This comment started as a joke, but now I'm wondering if sentient/conscious AGIs (if they are ever fully realized) are going to just be the new "less-thans" as far as legal rights go, for months or years, before they become truly useful to us as independent thinkers. 🧐
@Homunculas 4 years ago
@@lucasthompson1650 Great comment. I'd add: why wouldn't an AGI decide to keep its "birth" hidden, observe the world, and work behind the scenes, using superintelligence to reconstruct the world to its benefit? If an AGI were actively influencing the world, our simple intelligence would view events as absurd, kinda like 2020.
@mikekaczmarek9955 2 years ago
To be honest, I appreciate your sense of hope for society. THIS is why you are so successful!
@prashantbhrgv 4 years ago
I learned so many new ideas in this talk. Really grateful for this. Thank you, Lex!
@mikekaczmarek9955 2 years ago
Thanks!
@vuththiwattanathornkosithg5625 4 years ago
One of the best interviews. Awesome.
@Maynard0504 4 years ago
The only podcast I can't listen to while writing code, because the guests are so good and the subject matter so deep that it requires your full attention. You're building something incredible and unique, Lex! What would we do without you and Sean Carroll :)
@cmares5858 4 years ago
30:45 "I'm a Terrible Chess Player" ... He's probably like 2200, being modest
@TheRealStructurer 3 years ago
Missed this one before but happy I found it! Great talk between two great minds. I like this really open discussion and that the two of them feel so secure and can laugh together even when discussing such a deep topic. Keep 'em coming Lex!
@Trebleclefaudio 4 years ago
Lex, I enjoyed this and so many of your podcasts. Always packed with information and good natured discussion for whenever I need a break, a cup of coffee and some intellectual stimulus. Great combination, which for me is engaging and relaxing at the same time.
@edoardoguerriero2464 4 years ago
Regarding consciousness, it would be super interesting to hear a podcast with Giulio Tononi on his Integrated Information Theory.
@Amerikan.kartali.turk.yilani. 4 years ago
Super work. Super congrats. Please bring universal-intelligence researchers like this on the show all the time, not narrow-AI people.
@AlecsStan 2 years ago
I'm in awe of that amazing tie!
@chriswendler5464 4 years ago
Thank you for this outstanding podcast! The clarity of Marcus' explanations is next level.
@WarrenRedlich 4 years ago
I love his point around 33:35 that many humans would fail the test. It was said partly in jest, but it applies in particular to people with brain diseases like dementia and Alzheimer's.
@Muzlu1 4 years ago
6:57 - "Crazy models that explain everything but predict nothing." In machine-learning terms, I think this means that complex models tend to overfit the data: they can explain the data perfectly but do not generalise to unseen phenomena. I feel like this is a valid argument for Occam's razor that doesn't rely on the assumption that our universe is simple.
@RR-et6zp 2 years ago
In reality, probability streams (QM) do predict the future.
@PhillipRhodes 4 years ago
Awesome! I've been waiting for this one for a while. Thanks for having Marcus on, Lex. Now if you could just interview Ben Goertzel, Pei Wang, Leslie Valiant, and/or Leslie Lamport... :-)
@TYL3R863 4 years ago
BEN GOERTZEL!!!!
@PhillipRhodes 4 years ago
@@TYL3R863 - Hell yeah! Lex and Goertzel would be a fun interview to watch.
@michaelmarzolf6539 4 years ago
Outstanding -- thank you Lex
@bp56789 3 years ago
This episode was one of my "I'm changed forever" moments. Haven't had a big one of those in a while.
@sterlingseah 4 years ago
Your George Hotz interview led me here; both great interviews. Lossless compression as intelligence 👍🏼 🔥
@marcuswaterloo 4 years ago
Hotz's Ep. 2 interview sent me all over the place, and it was Hotz saying AIXI is a function of compression that led back here: www.reddit.com/r/lexfridman/comments/jghx0e/lossless_compression_equivalent_to_intelligence/
@looming_ 3 years ago
@@marcuswaterloo I really hate truncated comments ending in links. YouTube just cannot handle those.
@rkoll33 a year ago
Marcus will build the AGI and break it immediately by asking the question that causes an instant existential crisis :) Thx, Lex, I loved this episode.
@simonahrendt9069 7 months ago
I loved this conversation!
@dbum896 4 years ago
I'm a viewer from Barcelona (I live in Rubí, a town on the outskirts of the city). I love the podcast and the nature of what you discuss with every single one of your guests. Since I speak Catalan, I'd like to interject that the word "així" (pronounced "ashí" in Catalan) means "in this way" or "like this". Keep up the quality content, and thank you for the stimulating discussions you post on YouTube!
@sherrivonch6231 4 years ago
This was interesting and I'm glad I got to see this. Thank you.
@gs-nq6mw 4 years ago
Thank you! I'm a student and your podcast inspires and teaches me a lot; I love it. Sometimes I spend so many hours watching old episodes, but it's so interesting and fun that I barely realize I've just spent 5 hours listening.
@RockandMetalChannel 4 years ago
I discovered your channel yesterday and now you're talking about cellular automata, which happens to be the subject of my current undergrad thesis. Neat. Thanks for the great content!
@RockandMetalChannel 4 years ago
You should look into having Dr. Jarkko Kari on; he is, in my opinion, at the top of the field in CA.
@mikekaczmarek9955 2 years ago
You are an inspiration to all and I appreciate your work so much! Thank you for your efforts and I will always support podcasts from you and content like this!!!
@garymenezes6888 4 years ago
"But I'm not modest in this question" I like this guy
@moonsitter1375 4 years ago
It sounds as though AGI is getting closer to being a reality. Great interview, Lex.
@quaidcarlobulloch9300 4 years ago
Wow, a pleasure at my end as well!
@williamramseyer9121 4 years ago
I love this interview. So light-hearted, and profound. I listened to it twice (and I have done so with other podcasts by Lex). My comments:
1. Free will vs. determinism. Just a thought from an amateur: everything that happens in the universe may have been determined from the moment of the Big Bang, but each human has free will. The exercise of that free will forms the universe that that human lives in. There are a huge number of alternate universes with other humans who made different choices. In other words, we choose the universe we live in. To throw Sartre into the mix: we have no choice but to choose (our universe).
2. Infinity. How can a finite universe contain the math of infinity?
3. Books. Lex, do you have a list of books recommended by your guests? And what is the relative information contained in one book versus one podcast?
Thank you. William L. Ramseyer
@lestorbeeny8454 4 years ago
You should get Dr. Ben Goertzel on! Great podcast btw as always
@TheGunmanChannel 2 years ago
His accent is awesome 👌😃
@jeremycochoy7771 4 years ago
This is one of the most interesting videos I've seen this year. I like how you can get, in a few ideas, the gist of the concepts behind his research. It's also a subject I am deeply interested in. I would also recommend having a look at the Abstract Reasoning Curriculum dataset for people interested in "performing well in a broad range of unknown tasks" :)
@MrAnt-hh3bp 4 years ago
Lex, thank you very much for uploading this conversation! It was very informative and inspiring for me personally, as I am an undergrad studying in a related field. It's just amazing that today I am able to follow this discussion between two great minds so intimately, sitting in front of my computer screen. Keep up the good work, and greetings from Germany! PS: Fun fact on 22:45: Veritasium just recently uploaded a video in which he showcases the connection between the bifurcation diagram and fractals (the Mandelbrot set in particular). I came across the former in a Neuroinformatics lecture but never made the connection; this video reminded me of it. This is amazing. God, I love the internet.
@johangodfroid4978 4 years ago
Simpler means more energy-efficient, and nature can't waste energy; this is why biomimetism is so good. Do the best with the least, tempered by the law of "enough" in biology, but I can't explain it in a few lines.
@jovanyagathe2299 4 years ago
This man is a genius.
@paulbarton5584 4 years ago
Excellent stuff as usual, Lex! Very interesting guest, and it was good to hear you briefly discussing CA. These fascinate me; Poundstone's "The Recursive Universe" is such a wonderful book, which I'd recommend to anyone interested in CAs and Conway's Game of Life in particular.
@curiosguy9852 4 years ago
Lex, how do you feel talking to Andrew Ng and MJ who reject the idea of near term conversational agents while you are trying to actively build one?
@CharlesVanNoland 4 years ago
I think he feels like a kiwi, at least after watching AMA#2
@smishi 4 years ago
18:07 What else is noise, if it's not accumulated chaotic behavior too complex to fully grasp?
@annajoen6923 4 years ago
It cracked me up when Lex said "that tie's confusing me" hahaha. Awesome episode!
@PhilosopherScholar 2 years ago
An amazing talk befitting the creator of AIXI.
@KemalCetinkaya-i3q a year ago
Thank you, Marcus and Lex.
@Jannikheu 4 years ago
My intuition about a non-conscious vs. a self-conscious AGI is that the first would probably follow any provided optimization function (although we would eventually have trouble seeing that it does follow these goals), while the second might choose to ignore the provided optimization function and follow something in its own interest (whatever that might be). But that would also mean the second could be far more dangerous than the first, and therefore it would be of utmost importance to find a test for whether an AGI is self-conscious or not.
@ZachDoty0 4 years ago
Lex, I would love it if you could interview Jeff Clune about POET, AI-GA, MAP-Elites, Quality Diversity, Catastrophic Forgetting, AGI timeline, etc... Go deep on technical details and intuitions for future research :) Thanks.
@MrRubenkl 4 years ago
Lex, you've done it; I now like your podcast better than Joe's.
@ChrisStewart2 2 years ago
The reason why Occam's Razor works is because it is usually easier to study the simplest hypothesis and then work up to more complex explanations.
@parkerdinkins5541 4 years ago
Thank you for your work, Lex! Keep doing what you're doing.
@rajeshprajapati1851 3 years ago
Thank you so much. Keep up the good work.
@hanselpedia 4 years ago
Got lost a bit at times, but I enjoyed this in one uninterrupted session... Was this a compressed version of a much longer dialog? And you forgot to ask whether mortality would play a role in developing AGI ;-) Thanks Lex!
2 years ago
Didn't understand 99% of what was said; still enjoyed the conversation...
@Eyaeyaho123 4 years ago
Best episode
@user-qf3lq4zj8g 4 years ago
Great philosophical points in focus here; I particularly enjoyed Marcus' *informal* definition of intelligence (26:36) and its justification (27:08). Ashi Krishnan's views on AGI would be a great follow-up **hint** **hint** (a recent sample of her thoughts: kzbin.info/www/bejne/ravLemiIqpl7orMh32m51s ).
@mriz a year ago
12:27 Does anybody know what search queries to use for these terms?
@Olafironfoot 4 years ago
"I'm looking at you, linear algebra" lol. (1:05:20)
@NirFeinstein 4 years ago
It is really interesting: the future of humanity depends on the future of AGI, and how to acquire knowledge is also an interesting and important topic. But what about AGI safety? It is a very crucial component.
@powerpig99 4 years ago
I would agree that our flaws are the cause of our accidental existence and of the future improvement of human intelligence.
@myrealnews 3 years ago
I called it "condensation". "Compression" produces heat; condensation produces structure. But there is no chemical, prior-art, or scientific equivalence. The structure is generative and can make valid predictions and reformulate the temporal structure from new learning.
@kobiromano6115 4 years ago
25:40 Arguably, we have yet to find "the simplest rules of the universe". There's still so much we don't understand: the gaps between general relativity, special relativity, and the quantum field; weird quantum behaviors like entanglement and the "field" equations which we can't really explain; the prediction of dark matter, which we cannot observe; and things that are explained by complex math which has no real representation, or whose representation is questionable. It's pretty vain to claim that we have "cracked it" at this stage, with so many open questions.
@ryanpalmer8180 4 years ago
This was fascinating and got me thinking about lossless vs. lossy compression, and how lossy could perhaps be superior for an AGI in certain circumstances, if done in the right way. Humans are our best example of a general intelligence, but we have terrible memories. Perhaps this semantic compression is a core feature? Could an AI with a superhuman memory actually pass the Turing test (unless it deliberately acted unintelligently to deceive)? It feels like we tend to take detailed knowledge and distill it into abstract symbols with weighted importance. These get more general and abstract over time, until they fade away if not rehearsed or used. Think of how much you remember of this video immediately after it finishes, compared to in an hour, a day, a week, a month, and a year's time. Your description of it to another person would get more vague and loose unless you rewatched it, but you would hold the key points longer than the details, and know to come back and reference it if it became relevant in your life. It would also seem that more abstract symbols are easier to generalise: specifics about chess and checkers might not be immediately relevant to each other, but broader tactics of attack and defense may be. Would an AI trained on Starcraft be quicker to learn DOTA to a certain level than a fresh agent? I would guess not if it had to search through all the detailed Starcraft tactics it had learnt, as most would be irrelevant; but perhaps it would be faster if it used a more abstract symbol tree to inform it of the more likely paths to try in the new environment. Furthermore, could it take this combat training and use it to improve performance in a very different field, such as poetry, or counselling for trauma victims (things which might be natural for an ex-soldier to learn, partially built upon their past experience)? Creativity seems like an important part of our intelligence, and one definition could be "combining disparate ideas in new and novel ways". I would imagine connecting general ideas from disparate fields is simpler than connecting specifics, in which case a more abstract (or "stupid"/lossy) semantic tree of two subjects may be easier to intersect, and therefore exhibit more spontaneity or uniqueness in its behaviour. I did a quick search and found these pages, which seem very relevant. Intentional forgetting in AI systems: link.springer.com/article/10.1007/s13218-018-00574-x Semantic compression: en.wikipedia.org/wiki/Semantic_compression I also think this Byron quote is particularly appropriate: "To be perfectly original one should think much and read little, and this is impossible, for one must have read before one has learnt to think."
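The lossless/lossy contrast in the comment above is easy to play with in code. A toy sketch: zlib stands in for lossless compression, and a deliberately crude word filter stands in for "semantic" (lossy) compression:

```python
import zlib

text = ("The quick brown fox jumps over the lazy dog. " * 40).encode()

lossless = zlib.compress(text, level=9)                    # fully recoverable
lossy = b" ".join(w for w in text.split() if len(w) > 3)   # keep only "key" words

print(f"original: {len(text)} bytes, lossless: {len(lossless)}, lossy: {len(lossy)}")
assert zlib.decompress(lossless) == text   # the lossless version round-trips exactly
# The lossy "summary" can be far smaller and still useful for decisions,
# but the original detail is gone forever: much like human memory.
```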
@yennikcire 4 years ago
Very interesting, nice stuff!
@josephsmith6777 4 years ago
The orange-yellow-charcoal color scheme is crazy.
@VioletPrism 4 years ago
Kind of baller tho 👀
@xSNYPSx 4 years ago
Are you really Russian, Lex? Wow, I am happy that our nation has such good guys! :)
@lucasthompson1650 4 years ago
1:22:33 I'd say our flaws contribute to the minimum diversity of our experience, but I'm only 1/8 Russian.
@doctora3262 4 years ago
You have a new subscriber. Liking and commenting for the algorithm.
@josephbertrand5558 4 years ago
Tremendous!!! 🇨🇦
@cysiek10 4 years ago
Lex, can you enable the support option on YouTube? It might be easier than Patreon.
@nothingisgiven8364 2 years ago
Occam's razor works based on probability, not simplicity. Hypothesis A: the universe is manifesting the highest-probability outcome. B: the universe is manifesting the simplest outcome. C: the universe is manifesting the highest-probability outcome because it is the simplest. The extra conjunct in C makes it less likely to be true than A or B alone.
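The conjunction point can be stated formally (a standard probability fact, not something from the video):

```latex
% For any hypotheses A and S (S = "simplicity is the reason"), a
% conjunction can never be more probable than either conjunct:
\[
  P(C) \;=\; P(A \wedge S) \;\le\; \min\{P(A),\, P(S)\}.
\]
% So C, which asserts A plus an extra explanatory claim, is at most
% as probable as A alone.
```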
@Stadtpark90 4 years ago
01:06:26 "Now let's start simple..."
@JLGMediaProductions 4 years ago
1:13:37 "Infinity keeps creeping up everywhere"
@Adilthepickle 4 years ago
You've been both out-tied and out-suited, homie.
@XxNoV4xAiRxBoRsxX 4 years ago
I really love this one.
@andrewkelley7062 4 years ago
Well, color me impressed; this guy really knows what he is talking about.
@meat_computer 2 years ago
Regarding brains' preference for simple explanations, I have a simple explanation: it takes less (metabolic) energy to work with a simpler model.
@sippy_cups 4 years ago
Does AIXI break down if you are moving at the speed of light? Time-steps would be altered by relativistic effects, no?
@Maxim.Shiryaev 4 years ago
Just in case: Occam's razor is not about "simple". It is: "when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions". So, for example, Newton's law of gravity is not that simple (calculus was invented to solve it), but it is based on only two equations/assumptions: one for force as a function of mass and distance, and one for acceleration as a function of force and mass. Before that, Ptolemy's model of planetary motion was way simpler and didn't require calculus at all, but it was based on a large number of coefficients of unknown origin. So Occam's razor prefers a complex language with a minimal set of axioms over a simple language with a large set of axioms.
@6DonnieDarko 4 years ago
Reward is just rank and number
@johangodfroid4978 4 years ago
So many podcasts so quickly: you are a real machine. I could compress it to 10 MB or less, but then it would be 100% non-understandable by a human.
@vev a year ago
Consciousness will likely emerge in any system that behaves intelligently and human-like enough for us to ascribe consciousness to it. Whether it is 'real' consciousness or a 'philosophical zombie' displaying the traits is hard to determine and may not matter for AGI development. The ethics of how we treat such systems will be crucial.
@vev a year ago
I appreciate you taking the time to share your perspective and insights here. Discussions like this help me improve. You make a fair criticism that it's an oversimplification to say human reward functions boil down to just survival and reproduction. There are clearly many nuances, layers and complexities involved. I agree that intrinsic rewards, self-generated goals, and higher-level abstract thinking all play major roles in shaping human behavior and motivation beyond basic genetic drives. Your example about waste management emerging in part from an intrinsic dislike of the smell illustrates how lower-level rewards can bootstrap into more complex, long-term behaviors. In general, you're right that the human reward function and decision-making process involves a nuanced, hierarchical system ranging from innate drives to complex rational thinking. Reducing it down to spreading genes is missing important pieces of the puzzle. Discussing and critiquing perspectives like this helps advance the conversation on modeling human intelligence. There are still many open questions and room for debate. I appreciate you taking the time to add these thoughtful points! Critique and debate is how we make progress. This gives me food for thought on the limitations of certain simplifying assumptions.
@meat_computer 6 months ago
What was the name of the principle where you don't discard any theories when they all describe the data equally well?
@androidsdream9349 4 years ago
I googled "aixi application in robotics" and got back 'Did you mean: "ai applications in robotics"'. I did this because I disagreed with Hutter saying that we don't need a robot rolling around doing things to test his AIXI agent for AGI. Apparently Google didn't even recognize the application in the query. We need to look beyond AIXI's application to games/game theory. Just learning/solving/playing games is not AGI; it is a narrow application, as even Hutter admits early on. My view is that an "optimal" AGI will include "punishments" and not just "rewards", and many other types of learning modes, not just reinforcement learning.
@trimbotee4653 4 years ago
Lex, this is a really good podcast. Thanks for the hard work. I know the name of the podcast is the AI Podcast, but I wonder how seriously people take the idea of artificial general intelligence. To me it seems ludicrous. But this very smart gentleman (and many, many others) seem to take the idea seriously.
@robocop30301 4 years ago
How is it ludicrous? How do you know I'm not an AI?
@hughcaldwell1034 3 years ago
Just found this video, and this comment. Late to the party but whatever - why does it seem ludicrous to you? And which bit? The idea that it could exist in theory or the idea that humans would be able to do it?
@sherrivonch6231 4 years ago
Stephen Hawking was amazing too.
@TimmyBlumberg 4 years ago
爱 (ai) was originally from Chinese and was adopted by Japan. It does mean love in both languages. 爱 is pronounced like "eye".
@lillytaylor8262 4 years ago
Love the brown suit with yellow tie
@tobskii1040 4 years ago
Wait, what is there to gain from having the agent predict the action it's about to take?
@josecoyote6079 4 years ago
Everything about AI is very interesting.
@youcefnafa2267 a year ago
I really did not get the relationship between human survival and interest in science (9:20), since humans survived without any interest in philosophy or AGI for millennia. Having this abstract discussion does not seem like the best thing to do to survive and spread genes, yet many of us do it.
@ioannismourginakis68 3 years ago
1:23:47 LMAO, that look he gives says it all; that comment was totally out of left field.
@adamgolding 4 years ago
29:09 But if the trees die, so do we, in ecological collapse; arguably this individualistic definition of "intelligence" leads us to make ecologically "dumb" choices.
@achunaryan3418 3 months ago
His tie makes him look like a nostalgic Windows 95 salesman.