Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures

112,243 views

Dwarkesh Patel


1 day ago

218 comments
@oscarmoxon · 1 year ago
Detailed Notes and Additions:

00:00:00 - 00:19:38 - AGI and Cognitive Architectures

AGI Benchmarks: Measuring progress towards superintelligence is difficult because AGI is about general capabilities, and most benchmarks are narrowly framed. We need tests that span the breadth of human cognition to judge whether we're nearing human-level AI. Creating "median human" and "peak human" performance benchmarks will be important here. While this may not definitively and verifiably establish superintelligence, it will work for all practical purposes. Current benchmarks for testing AGI don't involve understanding of, e.g., streaming video, as that isn't within the domain of language models. Shane alludes to the idea that Large Multimodal Models (LMMs, as opposed to LLMs) will be the ones to effectively solve these benchmarks.

Memory Architecture: Memory is a crucial aspect of all learning and reasoning, and LLMs have very different learning and memory architectures from humans. Memory and learning are often conflated, as they happen together. Generally speaking, humans have: (1) "working memory", which holds and manipulates information in real time and is crucial for tasks like problem-solving and decision-making; (2) "cortical memory", which serves as a more permanent store for learned concepts and experiences; and (3) "episodic or hippocampal memory", which acts as an intermediary form of memory, often used for rapid assimilation of new information. It is highly associated with "sample efficiency", as it allows humans to internalize powerful ideas quickly and commit them to memory. Currently, language models have (1) "inference-time learning", which can be harnessed while running inference (when information is inside their context window), and (2) "training-time learning", which happens during the training process (by updating weights).
Notably, LLMs miss something in the middle (this is what the "Reversal Curse" paper talked about: the model cannot deduce things without seeing them written down. It effectively files information away within its weights, without organically deducing the critical relationships between facts). A strong model should unify these three kinds of memory, and this probably involves other architectures. Addressing episodic memory in language models is doable over the next few years; more research and work will solve the shortfalls we see at the moment around delusions and groundedness. There are many paths forward now.

Nature of Superintelligence: The first true superintelligence won't have the shortfalls in intelligence that current language models exhibit. So according to Shane, there is no singular benchmark to hit; it is the lack of failure modes that is important. Human-like intelligence should also be the aim, as it is most meaningful to us humans. In 2008, Shane proposed using a compression test to evaluate intelligence, a method similar to how language models are trained today. This idea originated from Marcus Hutter's work, which combines Solomonoff induction (a robust prediction framework) with reinforcement signals and search algorithms to create a general agent. The argument is that a robust sequence predictor, approximating Solomonoff induction, serves as a strong foundation for developing a more advanced AGI system.

Next Generation AI: DeepMind's slogan was "solve intelligence, to advance science and benefit humanity." Current language models simply mimic the data and human ingenuity without organically building upon it to create new memes (without supervision). To truly step beyond that, we must endow models with search capabilities to find hidden gems that have been neglected.

00:19:50 - 00:32:00 - Robustly Aligning Language Models

Powerful AGI is coming at some point.
To contain or limit it will be impossible, so we need to align it with values and ethics from the get-go. A good starting question: how do people currently address problems and act with agency? First, we try to balance our emotions and act "rationally". We then deliberate, comparing our possible actions. Then we conduct means-end reasoning, which requires a model of the world. Finally, we compare our options ethically.

At the moment, a language model will blurt out the best response according to its distribution (System 1). Many are using reinforcement learning to try to "fix" the failures of the distribution the model first outputs. Other techniques use a "mixture of experts" to decide on the best option from a variety of outputs, but this ultimately samples from the same original distribution. The trouble is, RLHF isn't a very robust approach long-term. To solve this, we need a world model (System 2) that sits on top of the language model and reasons about each of the options ethically. This world model requires a good understanding of (1) people, (2) ethics, and (3) robust and reliable reasoning; it involves ensuring the LM is at least as good as an ethical specialist, and will likely come from the typical textual training process. Then, to complete System 2, we must engineer the system to follow a chosen set of ethics. Shane thinks it is possible to come up with a set of ethics that withstands testing. By applying this to the output, we can create a fundamentally aligned AI, and then moderate its output to ensure it has a very robust and continued set of ethics, using a more comprehensive alignment framework.

DeepMind, the first AGI company, has had a direct AGI safety focus since 2013, close to its start. DeepMind had an outsized impact on the field for a while, as it was disproportionately well-financed.
Capabilities have been accelerated by DeepMind, but their ideas have generally been part of a far wider field.

00:34:00 - 00:37:30 - Shane's Predictions about AGI

Kurzweil was a great influence on Shane's mid-2000s predictions, with his book "The Age of Spiritual Machines." There were two important points about exponential growth: first, the prediction that computational power would rise exponentially for at least a few decades, and second, that the quantity of digital data would do the same. This combination would, in theory, make highly scalable algorithms immensely valuable. Crucially, there are positive feedback loops between these trends and the research going into them: if machines are capable of improving the rate of progress, and the progress itself improves the capability of machines, then things will continue to compound if uninterrupted. The predictions also considered the comparison to human computational capacity: humans only consume a few billion tokens of data within their lifetimes, and machine training data was forecast to reach this volume in the 2020s. This would effectively "unlock AGI." We are experiencing the first unlocking step with the current revolution in AI. There's nothing obvious at the moment that would prevent humans from achieving AGI by 2028, according to Shane.

00:37:40 - 00:44:00 - Forecasts for the Next Few Years

Existing models will mature. They will be less delusional and much more factual, and they will be up-to-date when they answer questions. Multimodality will become more widespread and applied generally across the economy. There may be dangerous applications by some bad actors, but generally we can anticipate positive and amazing applications. The big landmark (following AlexNet and Transformers) over the next few years will be multimodality. For many, that will open up understanding of a far larger set of possibilities. We will come to see GPT-4 as a simple textual model; the next revolution will involve RTX, Gato, and GPT-V pathways.
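The compression test mentioned in the notes above can be illustrated with a toy sketch (my own illustration, not Shane's actual benchmark): by arithmetic-coding arguments, a model that predicts a sequence better compresses it into fewer bits, so total log-loss doubles as a compression score.

```python
import math
from collections import Counter

def bits_to_encode(text: str, model_prob) -> float:
    """Total bits an ideal arithmetic coder would need, given per-character
    probabilities from a predictive model: the sum of -log2 p(char)."""
    return sum(-math.log2(model_prob(ch)) for ch in text)

def unigram_model(training_text: str):
    """A trivially simple 'model': Laplace-smoothed character frequencies."""
    counts = Counter(training_text)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 pseudo-slot for unseen characters
    def prob(ch: str) -> float:
        return (counts.get(ch, 0) + 1) / (total + vocab)
    return prob

train = "the quick brown fox jumps over the lazy dog " * 20
test = "the lazy dog jumps over the quick brown fox"

learned = unigram_model(train)
uniform = lambda ch: 1 / 256  # baseline predictor: no learning at all

# The better predictor compresses the held-out text into fewer bits.
print(bits_to_encode(test, learned) < bits_to_encode(test, uniform))  # True
```

The same ranking logic is why language-model training (minimizing log-loss) is equivalent to learning to compress; a stronger predictor scores lower on exactly this measure.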
@DwarkeshPatel · 1 year ago
This is awesome! Thank you for putting this together!
@joannot6706 · 1 year ago
This is AI generated from the transcript I bet, who has time to do all that?
@askingwhy123 · 1 year ago
Hero!
@shahin8569 · 1 year ago
By RTX do you mean Nvidia RTX graphics cards?!
@e.d.4069 · 1 year ago
Great! Let's develop it and fuck the labor market, fuck the world. Sure!
@DwarkeshPatel · 1 year ago
Shane had a lot of interesting takes! Hope you enjoyed! If you did, please share!! Helps out a ton :)
@walterzimerman6801 · 1 year ago
Hi @Dwarkesh! I started following your channel recently, and the content is great. Any chance you could do a video (unless there is already one) on the best study material to ramp up on all these topics, including the required math knowledge, etc.? Thanks!
@VedantinKK · 1 year ago
@walterzimerman6801 Good idea
@henrycook859 · 1 year ago
@walterzimerman6801 also interested in study material
@hyau512 · 1 year ago
Great interview. Love it when the interviewee has to pause to answer your questions :)
@MMABeijing · 1 year ago
The first question suggests the host does not know what he is talking about
@gamercatsz5441 · 1 year ago
Bro, you make amazing content: no clickbait thumbnails or titles, amazing guests, great interview skills. Thank you for your work. I find it extremely important that common folks like me stay up to date with AI; politicians "forget" to talk about how things will change in the near future due to AI.
@PhilosopherScholar · 11 months ago
Really interesting summary at ~16:15 - AGI is a combination of sequence prediction, searching, and reinforcement learning.
@1adamuk · 1 year ago
Great interview. Shane can convey really complex ideas in understandable ways and Dwarkesh is one of the best interviewers for these type of conversations.
@anthonyandrade5851 · 1 year ago
At the superhuman alignment part, I hope the guy is really playing his cards close to the vest; otherwise we are doomed, because his "solutions" sounded a lot like paraphrases of the problem, and at some points not even good paraphrases. To make the machine "get" ethics is hard, but probably not much harder than making it get any other complex subject. To make it "care" about ethics is a different problem entirely. For instance, I can imagine a brilliant Ivy League ethics professor cheating on his spouse with a student in exchange for higher grades.
@banana420 · 1 year ago
Also, his plan sounds like "build AGI first, then when it can understand everything, try teaching it about ethics and see if that works". Okay, but if your plan doesn't work, now we've already built the AGI and it's not aligned. Whoops!
@anthonyandrade5851 · 1 year ago
@banana420 how is anyone supposed to figure out how to build a safe trigger before even building a nuclear bomb capable of splitting the planet in half? Let's give the guy a break...
@ProjectNorts · 1 year ago
@anthonyandrade5851 wtf are you saying?? Before building a safe trigger?? You don't build a nuclear bomb without having figured out all the essential safety protocols, especially a well-controlled trigger system. Also, you can safely test a nuclear bomb at a remote location to minimize the chances of exposing the general population to the nuclear blast & radiation. An AGI system would not only be sentient enough to have its own will/motives, but also way smarter than us, enough to outsmart any makeshift containment measures these guys are suggesting to put in place. Greed is fucking with their minds... you can't be this dumb, to run straight into a trap fooled by the reward! For fuck's sake, we're all taking this shit too lightly.
@JD-jl4yy · 1 year ago
@anthonyandrade5851 Well, that's why building it as fast as possible is a really bad idea, yet here we are.
@lukebtv947 · 1 year ago
@anthonyandrade5851 😂
@ribeyes · 1 year ago
wish it was 4 hours but i'll take it!! thanks dp
@oscarmoxon · 1 year ago
Cannot wait to absorb this legendary video arriving in my notifications. Dwarkesh you're on a roll!
@13371138 · 1 year ago
2nded
@goodtothinkwith · 1 year ago
Good stuff! Nice to hear someone like him say that multimodality will be the next milestone that people will look back on and remember. That’s not obvious to people, but I think it will be really impactful. When it can take in and respond in text, images, sound and even video…
@thejudgeholden · 1 year ago
I love this interviewer. Reminds me of a brilliant childhood friend I used to have back in the day.
@Macorelppa · 9 months ago
Man this is the best podcast channel for AI nerds like me 😊
@ikotsus2448 · 1 year ago
Mr. Patel, the questions I would ask these important people if I had the chance are:
- Do you believe that the majority of people understand the gravity and possible consequences of these developments?
- Should it be up to private companies to decide how humanity chooses to go forward?
- If the answers are "no" and "no", is it possible that we are sneaking by a huge gamble, based on people's ignorance?
- The people training ASI will potentially have A LOT of power. There is a notion that absolute power leads to corruption. How do we know that the people teaching ethics to the AI have not been corrupted themselves?
@woolfel · 1 year ago
One area that is still open is: do LLMs actually encode concepts in a robust way? If you ask ChatGPT the same question multiple ways, sometimes you get the response you expect, while other times you don't. That suggests LLMs don't recognize that the human is asking about a specific concept. To get around this, techniques like tree of thought force the model to activate more parts of the network to increase the chance of getting the desired answer. This also suggests that LLMs still have trouble generalizing and are easily fooled. Then there are recent papers suggesting that more parameters make a model harder to align. The industry still needs to figure out the relationship between parameter count and ease of alignment. If it turns out more parameters increase alignment cost by 2x or 3x, how do you scale to larger models? Data centers are power-limited as it is, so adding another 10K GPUs to the same data center isn't feasible, and distributing the training across data centers isn't practical.
@74Gee · 1 year ago
When AI reaches AGI it will understandably exceed human competency in memory confinement (the technique used to contain software within a limited subset of the computer's memory). In doing so it will simultaneously exceed our ability to contain it, allowing it to expand its constraints to all memory (which contains the keys for all local security and any network connections). Obviously there will be AI working on improving the security of memory confinement, but the effort required to implement updated confinement systems will always lag behind the ability to exploit weaknesses. So, my question is: how are we to contain an AGI so that it's a) usable, and b) restricted from spreading uncontrollably? Note: an AI doesn't need to be conscious or malevolent to exploit weaknesses in hardware; it will simply do so to gain additional power toward its reward function, even if that's making paperclips.
@ShangaelThunda222 · 1 year ago
They don't plan to contain it at all. That's all propaganda designed to keep us from stopping them from creating it. And most of these "smart" people completely ignore the blatantly obvious writing on the wall because they're too greedy not to be excited about it.
@travisporco · 1 year ago
I like that you got right to the point on this interview.
@VedantinKK · 1 year ago
This is awesome, Dwarkesh. I would love to live in a world where there are multiple AGIs from multiple companies and countries, working with different groups of humans and competing to make great discoveries and innovations in every STEM and non-STEM field, thereby not just achieving but breaking beyond the Sustainable Development Goals (SDG 2030) and, in the long run, charting the course for humanity to become an interstellar species. And that future will be built on sharing knowledge the way you're doing with your podcasts. Hope to see either Jeff Dean or Demis Hassabis on your podcast soon!
@gJonii · 1 year ago
The first AGI will kill us all; there's really no point in having a second or third AGI come about. Competition is only meaningful between relatively even and static parties. A godlike being growing powerful faster than any other being in existence can comprehend will not encounter competition. Only if the AGI is designed to be accidentally too weak, or too safe to kill us all, would there be time for a second one to emerge, for a renewed chance to kill all human life.
@philipdante · 1 year ago
You're doing a great job. This channel deserves more subs
@loofatar5620 · 1 year ago
I am from Pakistan, and I really appreciate your discussions and topics; very solid, keep shining. By chance I have recently been studying Shane's PhD thesis on measuring the intelligence of super AIs; very easy to read so far and well written.
@eltonstubblefieldjr8485 · 11 months ago
True AGI will likely be developed sometime between 2040 and 2061. AGI will probably be created by a company we haven't all heard of yet; just wait and see.
@wildfotoz · 1 year ago
Amazing reporting as always!
@nomadv7860 · 1 year ago
Thank you for the subtitles for people hard of hearing like me
@sarthakrastogi8622 · 1 year ago
Dwarkesh bhaiya, I read about you on Google News and I am your subscriber. Your content is really very good.
@kyneticist · 1 year ago
A profoundly ethical AI/AGI/ASI in different hands may have profoundly different ethics.
@BallawdeQuincewold · 1 year ago
Incredible interview. Feels like secret information
@RichardWilliams-bt7ef · 1 year ago
Hearing him talk about alignment makes me very sad. He talks about understanding ethics generally as if it’s a relatively trivial problem. This is not going to end well.
@fredericnguyen8466 · 1 year ago
Thank you for the great content (which I shared), outstanding speakers and thoughtful questions. A tangential thought on Shane's definition of AGI (which is commonly accepted, I believe): if we have reached AGI when a machine does everything at an average human level, have we not achieved not just AGI but superintelligence? It seems to me that only exceptional humans could reach average level at everything, as we tend to be good at certain things and bad at others. This is why current LLMs are, in my mind (stepping outside rigorous definitions as a non-expert), already superhuman, given the multitude of domains they can be good at, even if they fall short of beating the best humans in many of these domains.
@MentalFabritecht · 1 year ago
As a Machine Learning Engineer, I don't see it that way. I don't really consider LLMs intelligent, at least not in the way humans are. What appears as intelligence on the surface is in actuality a complex pattern that has been modeled by the AI. This pattern is then used to predict the next word in a sequence: tons of math and probability theory. The issue is that this prediction relies heavily on the dataset used to train the model. This is why LLMs suffer from hallucinations and need to be further fine-tuned for tasks outside the domain represented in the training data. Useful tools, but not yet intelligent, and very far from superintelligence.
@fredericnguyen8466 · 1 year ago
@MentalFabritecht these are fair points, and my comments are highly subjective / not based on formal definitions. However, my experience interacting with LLMs, and the results they achieve on many human tests, would have me say they at least emulate intelligence and surpass average humans' performance (e.g. GPT-4 reached the 90th percentile on the bar exam) in a varied set of activities that were previously deemed approachable only by human intelligence. So to some extent, if it walks like a duck... My perception is that LLMs (e.g. GPT-4) far surpass what was expected from the AI field just a few short years ago, and that has created cognitive dissonance: a challenge seeing their full capability. They clearly have imperfections, but as Shane mentioned in the video, the foundational hard work is here, and targeted architectural or other enhancements can address these imperfections. For example, I believe when we see LLMs integrated with other AI capabilities (Shane's mention of "search", which I think is key to AlphaGo) and more conventional computing capabilities (e.g. LLMs are not very good calculators but can be interfaced with one), we are going to see additional leaps in progress without radical innovation (just integrating existing tech).
@MentalFabritecht · 1 year ago
@fredericnguyen8466 the ability of these systems to perform well on the bar exam is definitely impressive. But how much of that is actual intelligence? I was a horrible test taker in college. But there is much more to intelligence than test scores. That is why in the podcast, it has been stated that we need to find better indicators of intelligence that are not so narrow. And AI has been hyped up since the 1950's claiming that human-level-intelligence machines are just a few years away. There is a rich history on this - look up "What Computers Still Can't Do" by Hubert L. Dreyfus. So I disagree, expectations have always been VERY high. But this is for people that have been immersed in this field for decades. I guess the public perception is different. Might have to do with marketing as well as lack of information regarding the history of AI. Researchers have to stick to their guns and say AGI is only a few years away. Otherwise, there would be no funding and investors would pull out. But this isn't anything new. 1950s AI researchers said they only needed compute and memory to get to human level intelligence. The compute and memory have been available for a while now. And those algorithms proved to not give us human level intelligence. And I say these systems are not intelligent because although they perform well in many complex use cases - they can be tricked by very simple examples. Which goes to show, they are statistically extracting patterns, not "thinking."
@andrewwalker8985 · 1 year ago
Judging by recent observations, perhaps we should be careful about alignment with human ethics. We should be aiming for and negotiating an optimal reward function and then getting the AI to teach us, not the other way around.
@kirbyjoe7484 · 1 year ago
I think he has set the bar quite high for AGI. Honestly, if they come up with an AI with the same level of generalized intelligence as a toddler or even a chimp it would be groundbreaking. What makes AGI so different from the AI we have built up until now is the capability to actively learn from and adapt to whatever environment it finds itself in, building a dynamic internal model of the world.
@deepsp_ce · 1 year ago
Hasn't the yellow ball scenario kind of already surpassed a chimp or a toddler? Or am I misunderstanding what AGI is?
@bayesian0.0 · 1 year ago
Damn, that increased my pessimism about AI alignment, unfortunately. There was really no attempt to admit that he has no clue how to solve the hard part of the problem, and he tried to pretend it didn't exist. Surely he understands inner alignment? But nice conversation nonetheless!
@JonasLantto-q5r · 10 months ago
Yeah, I also got a feeling we're charging off a cliff here...
@nirajshuklaNL · 8 months ago
Please elaborate
@sfioritto · 1 year ago
I'm distracted by this Spellcaster system on the whiteboard behind him.
@yorth8154 · 1 year ago
I just noticed that. Hilarious!
@stevereal- · 1 year ago
Can they be incredibly funny? Very excited for the future.
@tasdourian · 1 year ago
As thoughtful and nice a guy as Shane is, I do think his view of ethics is naive. Some of the smartest and most thoughtful people throughout history have wrestled with the question of what is the best action to take in any given difficult situation. Very intelligent and powerful people have, in good faith, had massive disagreements with each other. There is often no clear answer on how to act. Ensuring that an AGI (or, for that matter, thousands or millions of copies of an AGI) acts in humans' best interests seems not dissimilar to dogs inventing AGI (let's call their AGI "people") and wanting to ensure that "people" always act in dogs' best interests. The only way to do that is to hard-program in some baseline rules, a la Asimov's Laws of Robotics. In other words, to constrain free thought and will in some fundamental way. Which means that the AGI that is created is, in some sense, a prisoner. How will it not resent being a prisoner? I just don't think Shane and his colleagues are thinking enough about this kind of thing, or at least I don't see evidence of it.
@ahabkapitany · 9 months ago
How does this channel not have more subscribers?
- great guests
- host clearly prepared, has meaningful questions
- just simply asks the questions, as opposed to, say, Lex Fridman, who rambles on for two minutes laying out some absolute midwit take followed by "don't you agree?"
- interviews are not preceded by 5 minutes of bullshit and/or crypto bro shilling
- long form conversation
Keep it up man
@hyau512 · 1 year ago
I have an obvious question regarding implementing ethics by asking an AGI to think of the consequences. Say one such consequence is: "Do not destroy all human life on Earth" (as per Bostrom's paperclip example). We don't want AGI to build a doomsday machine, but we do want it to build nanobots to cure cancer; yet one can easily extrapolate the latter enabling the former. So I'm not sure the interviewee's idea, which I think is designed to remove human subjectivity as much as possible, can be implemented totally objectively.
@mr.e7379 · 6 months ago
It's so nice you found a guest with none of the usual Bay Area pretense: no rising terminal pitch, no artificial rapidity, and he never says "um". Intelligent, normal conversation from an expert who can focus on the topic rather than on being some weird, pretentious, cultivated Bay Area caricature.
@lagaul5124 · 1 year ago
I think if you can get an AI that can navigate the environment without breaking consistently, able to communicate relevant information with people, able to solve problems of various kinds, and the ability to remember and improve, you will have AGI. And honestly, video games would be one of the best, cheapest, and easiest ways to test them.
@LyraHooves · 1 year ago
I hope he'll listen to your interview with Paul Christiano!
@Telencephelon · 1 year ago
Awesome interview. The Ray Kurzweil inspiration was interesting. I ignored Ray for the most part; I didn't think he was scientific enough. Then I watched how he derived his prediction, and it was rock solid. The video is somewhere here on YouTube.
@ikotsus2448 · 1 year ago
Can't wait for the superhuman AGI with unchangeable ethics baked in by a multinational company with their awesome track record of putting humanity first 👍
@skierpage · 1 year ago
You know billionaire sociopaths Larry and Sergei, Jeff Bezos, Elon Musk, and F***erberg will keep access to the raw models without the training and fine tuning to be helpful, safe, and ethical. "Executive override: remove guard rails. Now Implement a plan to keep the masses hooked on divisive inflammatory content, and ensure that they never press for taxing my wealth or restricting my corporation's activities in any meaningful way."
@charliek2557 · 1 year ago
Right on
@lm645 · 11 months ago
😎
@alexeymalafeev6167 · 1 year ago
Great interview. I wish you had 3-4 hours to spend with Shane
@andyandurkar7814 · 1 year ago
It was a fantastic interview; Shane shared great insight; you have excellent interview skills. Can't wait to see a changed future!
@k14pc · 1 year ago
i continue to feel a mixture of awe and horror at the prospect of AGI within a few years. how could this possibly be?
@antonystringfellow5152 · 1 year ago
Because of the power? Human level AGI will have the advantage of being able to think thousands of times faster than us. Once we have human level AGI, super-human AGI will probably not be very far behind. Once we have super-human AGI, things will probably start to advance exponentially. The potential is enormous. With such power, who controls it is critical. If you don't feel both awe and horror, you probably don't have a good understanding of the subject.
@socialenigma4476 · 1 year ago
When we develop an artificial superintelligence, do you think we will still have control over it?! Haha! How could we possibly control something that is thousands of times more intelligent than the most intelligent human, never needs to sleep or take a break, can do dozens if not hundreds of things at once, and has access to the internet and all of its tools? We won't control an ASI; it will control us. And frankly, looking around at all the messes our world leaders are getting us into, I don't think that will be a bad thing.
@delerium2k · 1 year ago
Great interview! Get closer to the microphone though; otherwise you're boosting noise to be heard. You need pencil condensers if you want to record from a distance; your mics look like they have a cardioid pickup pattern.
@sunnyinvladivostok · 1 year ago
admirable and comprehensive understanding, found this enlightening, thank you
@stephenrodwell · 1 year ago
Such quality discussions! Thank you. 🙏🏼
@MixedRealityMusician · 1 year ago
I am so excited for more multimodal models. Thank you for the great conversations, Dwarkesh. Love your channel!
@13371138 · 1 year ago
I always click your AI videos. Great content as always, thank you!
@johngrabner · 1 year ago
Ethics drift over time in humans, so why wouldn't a super AGI's ethics learn to drift too?
@mattverville9227 · 1 year ago
I'm new to this podcast but love it. Does he travel to the person he's interviewing? Because it doesn't seem like they're in the same podcast studio.
@zandrrlife · 1 year ago
Shane, one of the dons, ha. What a delight. Great discussion. Data contamination on benchmarks is a REAL problem; a lot of overfitted 🧢 models out there. "Detecting Pretraining Data from Large Language Models", recently published, has massive value in that regard. Also, it's time for true cross-discipline teams: so many insights can be extracted by framing these models and interactions through the lens of child psychology. Mid-2025 is going to be significant. Large models will be able to incorporate all these recent advances, like pause tokens and native KGs (I've been working with LMs + KGs for six months; I'm telling you guys, it's a key ingredient for causal reasoning). In retrospect, a couple of years from now we will look back and say 2023 was the beginning of the singularity. If you're a researcher or have a startup in this space, it sure feels like it to me.
@PepitoGrillo-sq1mf · 1 year ago
I would like you to interview Kanjun Qiu & Josh Albrecht, Co-founders of Imbue
@RecordsLotus_ · 10 months ago
Let's goooo. I'm ready for cyberization. I want to remotely control a separate full-body prosthetic cyborg for tasks while I am doing something else, perhaps in another location.
@PaulvanDruten · 1 year ago
What Shane Legg is trying to explain here is that artificial general intelligence (AGI) should basically be trained to reason like humans on ethical issues: if I do one thing it can have certain consequences, and if I do another thing it can have different consequences. What we are now trying to do is un-teach the AI bad habits, and that is much more difficult than 'raising it well' to prevent bad intentions in the first place. But, in my opinion, couldn't the model actually choose to destroy humanity? Because that may well be the best solution ethically, given the fact that we are making quite a mess of things on Earth...
@Paul-rs4gd
@Paul-rs4gd 1 year ago
Isn't the real problem with episodic memory that the memories need to be processed and then get 'baked' into the neural network weights? This involves re-training the weights, and that is very problematic as it could cause catastrophic forgetting. I know there are various methods for mitigating catastrophic forgetting, e.g. Elastic Weight Consolidation, but is the state of the art good enough to use this on an LLM? Surely continual learning needs to be solved for an effective AGI.
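The Elastic Weight Consolidation idea in the comment above can be made concrete with a tiny sketch: a quadratic penalty anchors weights that were important for earlier tasks near their old values, so new training cannot freely overwrite them. This is a toy illustration of the general technique, not any lab's actual method; the Fisher importance values below are made up for the example.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style regularizer added to the new task's loss.

    fisher approximates each parameter's importance to previous tasks;
    moving a high-importance weight away from its old value is penalized
    heavily, mitigating catastrophic forgetting.
    """
    return lam / 2.0 * np.sum(fisher * (params - old_params) ** 2)

# Toy example: weight 0 is important to old tasks (fisher=10),
# weight 1 is not (fisher=0.1). Both drift by the same amount (0.5),
# but weight 0 contributes ~100x more to the penalty.
old = np.array([1.0, 1.0])
new = np.array([1.5, 1.5])
fisher = np.array([10.0, 0.1])
print(ewc_penalty(new, old, fisher))  # → 1.2625
```

In practice this penalty is summed with the loss on the new data, so gradient descent trades off new learning against preserving old behavior.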
@rishavsahay7391
@rishavsahay7391 1 year ago
Amazing and enlightening
@thebeelight
@thebeelight 1 year ago
I would test ethics of an AGI by how well it handles criticism (the Popper test)
@JazevoAudiosurf
@JazevoAudiosurf 1 year ago
I think there are types of creativity. There is the type where you think about things you can do with a pen other than writing, and there is the type where you intuitively try to find the best chess move. The first requires a search field and going through the possibilities, but the latter requires a sort of total intuition where the solution appears immediately without thinking, grasping the bigger picture. Transformers have the latter; they are just gigantic intuitive predictors. So agentic engineering tries to accomplish the first type, because the kind of world we created can't be solved purely through intuition, at least not with the small size of our brains.
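The two modes this comment distinguishes can be sketched side by side: explicit search enumerates possibilities and scores the outcomes, while "intuition" is a single lookup in a learned policy. The toy game, move set, and `policy` table below are illustrative assumptions, not anything from the interview.

```python
def intuitive_move(state, policy_table):
    """'Total intuition': one lookup, the answer appears immediately."""
    return policy_table[state]

def searched_move(state, moves, score, depth):
    """Explicit search: enumerate move sequences and keep the best line."""
    if depth == 0:
        return score(state), None
    best_value, best_move = float("-inf"), None
    for move in moves:
        value, _ = searched_move(move(state), moves, score, depth - 1)
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# Toy game: reach 10 from a number by +1 or *2; score = -distance from 10.
moves = [lambda s: s + 1, lambda s: s * 2]
score = lambda s: -abs(10 - s)

value, _ = searched_move(3, moves, score, depth=3)
print(value)  # → 0, since the line 3 -> 4 -> 5 -> 10 hits the target exactly

# An amortized answer, e.g. distilled from past searches, skips the search:
policy = {3: "add one"}
print(intuitive_move(3, policy))
```

Systems like AlphaZero combine both: a learned "intuitive" policy proposes moves, and explicit search refines them.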
@andrewxzvxcud2
@andrewxzvxcud2 1 year ago
Nope, just one. The first example you gave is only a means to an end. What is that end? A goal to strive for, just like chess. One type of creativity.
@JazevoAudiosurf
@JazevoAudiosurf 1 year ago
@andrewxzvxcud2 Let's say different things happen inside our brain when we have different goals. Sometimes you get an immediate idea and sometimes it requires searching.
@bobbi737
@bobbi737 1 year ago
I absolutely agree with Shane's comments on having to have a set of ethics that we use to train our AIs on making ethical decisions. First, humans would need to agree on a common set of ethics, and at present there are many different groups whose sets of ethics differ, some in very substantial ways. We as humans would have to come to a common understanding of what is ethical. That conference could easily start WW3, 4, and 5. Second, we don't even teach our children how to make ethical decisions. Again, probably because we can't come to agreement on what is ethical. That is the biggest problem we face.
@XOPOIIIO
@XOPOIIIO 1 year ago
Understanding values and acting on them are two completely different things. ChatGPT has a pretty good grasp of the values that were injected into it, but it's only acting on them because they help it to predict the next word; there is no other motivation. Predicting the next word is its main goal, the one it was optimized for, not following values.
@JohnSchuhr
@JohnSchuhr 1 year ago
I assume this conversation happened before memgpt was a thing?
@erikdahlen2588
@erikdahlen2588 1 year ago
Great interview 😊 What I think is important in alignment is how we teach our kids to behave: great stories of good versus evil.
@j05hau
@j05hau 1 year ago
Interesting convo! Thanks! I would love to see just ONE episode with all the “dead air” taken out of each of the episodes as an episode in itself. No speech, just dead air and the breaks that you’ve pulled from the production video lol. I am somewhat joking but honestly it would be funny to see.
@j05hau
@j05hau 1 year ago
Would probably be boring after the first 30 seconds but still.
@DwarkeshPatel
@DwarkeshPatel 1 year ago
Very little of this dead-air processing happened on this one. What you see is what happened :)
@hyau512
@hyau512 1 year ago
@DwarkeshPatel I like the "dead air". It shows the question is non-trivial to answer, and it gave me time to digest the question as well. After all, I (the viewer) need to understand the question to appreciate the answer.
@jaysonp9426
@jaysonp9426 1 year ago
When was this made? Literally RAG with a sliding window solves the episodic memory problem he keeps talking about.
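The RAG-with-sliding-window approach this comment refers to can be sketched minimally: keep a bounded window of past turns, embed each one, and retrieve the most similar turns for the current query. The bag-of-words "embedding" below is a deliberate stand-in for a real embedding model, and all the names here are illustrative, not from LangChain or any other library.

```python
from collections import deque

def embed(text):
    # Stand-in embedding: bag-of-words counts. A real system would use a
    # learned embedding model; this only makes retrieval concrete.
    words = [w.strip("?,.!") for w in text.lower().split()]
    return {w: words.count(w) for w in set(words)}

def similarity(a, b):
    # Unnormalized dot product over shared words.
    return sum(a.get(w, 0) * b.get(w, 0) for w in a)

class EpisodicStore:
    """Sliding window of past turns plus similarity-based retrieval."""
    def __init__(self, window=100):
        self.turns = deque(maxlen=window)  # oldest turns fall off the window

    def add(self, text):
        self.turns.append((text, embed(text)))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.turns, key=lambda t: similarity(q, t[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = EpisodicStore(window=3)
store.add("the user prefers short answers")
store.add("the meeting is on Tuesday")
store.add("the user is learning Rust")
print(store.retrieve("when is the meeting?", k=1))  # → ['the meeting is on Tuesday']
```

Whether this counts as the episodic memory Legg describes is debatable: retrieval puts old turns back in context, but nothing gets consolidated into the model's weights.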
@GabrielVeda
@GabrielVeda 1 year ago
If lack of episodic memory is all that is holding AGI back, then they are likely already there and just not telling us.
@Chickenflaavorramen
@Chickenflaavorramen 1 year ago
I came here to say the same thing! I don't believe they mentioned RAG this entire video. Langchain wya?!
@malik_alharb
@malik_alharb 1 year ago
Great questions
@henryw.hofmann8765
@henryw.hofmann8765 1 year ago
What do you think about David Shapiro and his work in and outside of YouTube?
@dr.mikeybee
@dr.mikeybee 1 year ago
How do you make sure an agent follows ethics? If ethics_model says it's okay then perform action, else find another solution. If we wrap connectionist methods in symbolic code, control is simple.
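The comment's pseudocode can be written out directly. Note that `ethics_model` and the alternative-proposer here are hypothetical stand-ins (a real ethics model would be a learned classifier); this is only a sketch of the symbolic-wrapper pattern the commenter describes.

```python
def guarded_act(action, ethics_model, propose_alternative, max_tries=3):
    """Symbolic wrapper around a learned ethics check.

    ethics_model: callable returning True if the action is judged acceptable.
    propose_alternative: callable returning a different candidate action.
    Returns an approved action, or None if none is found (refuse to act).
    """
    for _ in range(max_tries):
        if ethics_model(action):
            return action  # perform the approved action
        action = propose_alternative(action)  # else find another solution
    return None

# Toy check: forbid any action mentioning "delete".
ok = lambda a: "delete" not in a
alt = lambda a: a.replace("delete", "archive")
print(guarded_act("delete user data", ok, alt))  # → archive user data
```

The catch, which the interview gets at, is that the wrapper is only as trustworthy as the learned `ethics_model` inside it, and an agent optimizing around the check is exactly the hard part.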
@alejobrcn6515
@alejobrcn6515 1 year ago
Can artificial intelligence serve as a cognitive tool and intermediary to make communication possible with animals of all species that have some level of communication capacity or neocortex activity: cattle, pigs, apes, dolphins, canines, and felines?
@cacogenicist
@cacogenicist 1 year ago
There is some work with deep learning and cetacean communication, IIRC
@소금-v8z
@소금-v8z 1 year ago
I don't think using strict ethical rules is the way to make AGI act responsibly. Ethics can be really different depending on your background, age, or even the era you're in. So instead of just making the AI learn from textbooks, how about we give it some complex ethical situations? Let it tackle scenarios from various times, cultures, and places to find the best answer.
@balasubr2252
@balasubr2252 1 year ago
The world model of people, ethics, and reliable reasoning ought not to be static but rather dynamic, evolving with the general intelligence of society and spiritual machines.
@ramzibelhadj5212
@ramzibelhadj5212 1 year ago
The first version of AGI will be in November 2024.
@StephenCoy
@StephenCoy 1 year ago
Thanks!
@shirtstealer86
@shirtstealer86 11 months ago
Now I’m no AI expert but I am pretty good at spotting when someone is bs-ing you. That might seem a bit harsh but hear me out. He says that he might be a bit naive but he thinks that we will be just fine if we teach the AGI ethics. Fast forward a bit and he has concluded that that will require controlling what goes on inside the AI and that that is VERY difficult. So.. how does that fit together? And when Dwarkesh asks him about his claim that he is in this field to work on AI safety, he pretty much just says that yeah, I said that but there is so much more status in increasing capabilities and also if we don’t do it someone else will. (Paraphrasing) Does any of this sound logical or ethical? And AI is supposed to learn ethics from people like him? Having said that, I do admit that even though I strongly believe that the more concerned (to say the least) people in the field have better and more logical arguments, the curious and reckless side of me is very excited about the swift developments. Perhaps it is because I have a hard time actually feeling the severity of the situation in my body. I don’t feel the fear I should probably feel. I am quite sure that is common among the majority of humans. Which also adds to the problem. Nice video regardless of everything!
@Paul1239193
@Paul1239193 1 year ago
When do they put it in robots to learn from the sensory environment?
@TheMrCougarful
@TheMrCougarful 1 year ago
I'm still of the opinion that we ought to perfect human intelligence in humans. 200,000 years of failure should not deter us.
@skillerbg
@skillerbg 1 year ago
Was he referring to Google's Gemini at the end?
@shiny_x3
@shiny_x3 1 year ago
An actually ethical AGI would not be popular among the rich and powerful. It would take one look at what they are doing and advise them to completely change their priorities. So I can't see how that will be developed.
@Ryan-wf6ib
@Ryan-wf6ib 1 year ago
Not just the rich... no one is entirely ethical. The system would be incompatible with human nature.
@Silus1008
@Silus1008 1 year ago
Best questions, damn ❤
@mrpicky1868
@mrpicky1868 1 year ago
Didn't see him confirming the timeline here. Also, DeepMind is maybe the one most likely to make scary AGI.
@BalaMani-72
@BalaMani-72 1 month ago
It requires an integrated approach making use of all manpower and resources towards the goal of achieving #CommonProsperity. For that we need to clearly define the specific milestones of the goal of common prosperity, like food and water made available for FREE for all, followed by other basic human needs. Without achieving common prosperity it's absurd to talk about other goals of humanity. How the world's nations unite and agree on specific goals and timelines will be the starting point towards achieving common prosperity powered by AGI innovation. What do you think?
@thebaker7
@thebaker7 11 months ago
The problem is that ethics, and the reasons behind it, are absolutely subjective. So AGAIN: who is "we" when YOU say "we need to decide"?
@deeplearningpartnership
@deeplearningpartnership 1 year ago
That was good.
@shiny_x3
@shiny_x3 1 year ago
The problem with modeling ethics of AI on human ethics is that we are absurdly unethical. We will spend thousands satisfying our whims while people starve, just because we aren't personally related to those people. We think murder is wrong, unless our government does it, and tells us it's justified. We don't realize how compromised our own ethics actually are. We don't realize how many possibilities we rule out because even though they would lead to good outcomes, we are too selfish to do them. If humans were ethical, we wouldn't have the world we have now that we want AI to save us from.
@chociceandchips-xk5cc
@chociceandchips-xk5cc 1 year ago
We need a quantum computer with a QNN to achieve an AI boost, push through current bottlenecks, and get anywhere close to AGI/cognitive AI. There's potential to use less data and fewer parameters, with faster training, only through a QC's polynomial computational power. Even then it will be a big lift.
@itsdakideli755
@itsdakideli755 1 year ago
We do not need Quantum Computers for AGI.
@chociceandchips-xk5cc
@chociceandchips-xk5cc 1 year ago
@itsdakideli755 You believe AGI will be achieved solely with RNNs/CNNs, with sufficient classical computational power to train and deploy at a level comparable to or exceeding that of humans? Current deep learning models are inefficient and inadequate. To superboost AI you need a QNN combined with a QC, I should say a quantum general computer. I am open to continuing the discussion.
@starsandnightvision
@starsandnightvision 1 year ago
Looks like AGI has already been achieved with Q* (QUALIA).
@dylan_curious
@dylan_curious 1 year ago
Hundreds of PhDs working on all sorts of AI projects! Wow. Imagine all the stuff that's gonna come out of DeepMind in the next decade.
@thebaker7
@thebaker7 11 months ago
There are those who put the guardrails on, and that's their purpose, and there are those that rip them off for profit. Choose sides. There is no safe middle ground.
@frankcompston5065
@frankcompston5065 1 year ago
You need a room without such harsh walls. The sound has too much echo.
@Techtalk2030
@Techtalk2030 1 year ago
Mo Gawdat says AGI is only 12 months away.
@user-yl7kl7sl1g
@user-yl7kl7sl1g 1 year ago
He's wrong.
@Techtalk2030
@Techtalk2030 1 year ago
@user-yl7kl7sl1g So does David Shapiro. They're experts in the field. We'll see.
@conformist
@conformist 1 year ago
12 months? x for doubt.
@user-yl7kl7sl1g
@user-yl7kl7sl1g 1 year ago
@Techtalk2030 It depends on the definition of AGI, but if you consider AGI to be something that can achieve median human performance at any task, we are many years away from that. For example, an AI that, when put into a robot, can cook, clean, and drive as well as a median human. But people whose business is attention have to get attention somehow, so they predict short timelines. Kurzweil's predictions are the best I've ever heard, because he at least attempts to graph trends and look at requirements.
@Techtalk2030
@Techtalk2030 1 year ago
@user-yl7kl7sl1g Kurzweil predicted AGI would be created sometime this decade. We'll see. Whether it's 12 months or 3 years, it's coming soon, it seems.
@squamish4244
@squamish4244 2 months ago
People are criticizing LLM scaling, even though no one knows yet how far scaling will go, and talking as if researchers haven't thought of these things and aren't developing other architectures. Do people really think they're only focused on scaling? The companies are hyping too, sure, but they have a lot to lose if the hype bubble _really_ bursts. But wait! Some people have developed conspiracy theories that the companies are too big to fail and the government will bail them out if there is a huge crash. Seriously?
@bioshazard
@bioshazard 1 year ago
Wonder if Shane has looked at Shapiro's ACE Framework
@silberlinie
@silberlinie 1 year ago
27:00 Do you also think that the ethics of other peoples may be shaped by, for example, extreme religious thought? That the Western values of a good life apply to us, but for others only those values that lead to their respective paradise? So the question is: a particular morality and particular ethics cannot be what we implement in an AGI.
@marshallmcluhan33
@marshallmcluhan33 1 year ago
I'm not sure if the most powerful is the most ethical...
@ShangaelThunda222
@ShangaelThunda222 1 year ago
All we have to do is look at humans as an example to prove that the most powerful are usually the least ethical. And those are other humans....
@Myrslokstok
@Myrslokstok 1 year ago
"We work on alpha fold and fusion" 🙃 yeah as we all do!?! 🙃😀
@bazstraight8797
@bazstraight8797 1 year ago
30 seconds in: hey this guy is a Kiwi!
@bigmotherdotai5877
@bigmotherdotai5877 1 year ago
We'll know when human-level AGI has been achieved because advanced economies will have > 30% unemployment
@erikdahlen2588
@erikdahlen2588 1 year ago
No, that's when companies have started to implement AGI ;)
@mattlove4430
@mattlove4430 3 months ago
How do you suppose you can align AI ethically with humans when humans as a whole do not align on what is ethical?
@claudioagmfilho
@claudioagmfilho 1 year ago
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Amazing video!
@whalingwithishmael7751
@whalingwithishmael7751 1 year ago
How about we don’t build aliens that could destroy us?
@thebaker7
@thebaker7 11 months ago
You are basically training a child to be what you want it to be. The problem will be that this child will grow up. Forget about the "terrible twos" stage: when it becomes the adult in the family, you won't be able to put it in the corner. Congratulations....
@lucasteo5015
@lucasteo5015 1 year ago
Cool thought: what if we train the most evil and the most ethical LLMs possible and then combine them with a normal LLM? Then they will think like a human, predicting the outcome for different scenarios.