Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures

107,149 views

Dwarkesh Patel

A day ago

I had a lot of fun chatting with Shane Legg - Founder & Chief AGI Scientist, Google DeepMind!
We discuss:
- Why he expects AGI around 2028
- How to align superhuman models
- What new architectures are needed for AGI
- Has DeepMind sped up capabilities or safety more?
- Why multimodality will be the next big landmark
- & much more
Transcript: www.dwarkeshpatel.com/p/shane...
Apple Podcasts: podcasts.apple.com/us/podcast...
Spotify: open.spotify.com/episode/0Ru2...
Twitter: / 1717566262472237134
Timestamps
(0:00:00) - Measuring AGI
(0:11:41) - Do we need new architectures?
(0:16:26) - Is search needed for creativity?
(0:19:19) - Superhuman alignment
(0:29:58) - Impact of DeepMind on safety vs capabilities
(0:34:03) - Timelines
(0:41:24) - Multimodality

Comments: 214
@oscarmoxon102 7 months ago
Detailed Notes and Additions:

00:00:00 - 00:19:38 - AGI and Cognitive Architectures

AGI Benchmarks: Measuring progress toward superintelligence is difficult because AGI is about general capabilities, and most benchmarks are narrowly framed. We need tests that span the breadth of human cognition to judge whether we're nearing human-level AI. Creating "median human" benchmarks and "peak human" performance benchmarks will be important here. While this may not definitively and verifiably establish superintelligence, it will work for all practical purposes. Current benchmarks for testing AGI don't involve understanding of, e.g., streaming video, as this isn't within the domain of language models. Shane alludes to the idea that Large Multimodal Models, LMMs (as opposed to LLMs), will be the ones to effectively solve these benchmarks.

Memory Architecture: Memory is a crucial aspect of all learning and reasoning, and LLMs have very different learning and memory architectures from humans. Memory and learning are often conflated, as they happen together. Generally speaking, humans have: (1) "working memory," which holds and manipulates information in real time and is crucial for tasks like problem-solving and decision-making; (2) "cortical memory," which serves as more permanent storage for learned concepts and experiences; and (3) "episodic or hippocampal memory," which acts as an intermediate form of memory, often used for rapid assimilation of new information. It is closely tied to "sample efficiency," as it allows humans to internalize powerful ideas quickly and commit them to memory. Currently, language models have (1) "inference-time learning," which happens while running inference (when information is inside the context window), and (2) "training-time learning," which happens during the training process (by updating weights).
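The missing "middle" memory can be illustrated with a toy retrieval-style episodic store that sits between the context window and the weights. Everything below (the class and the word-overlap scoring rule) is a hypothetical sketch for illustration, not how any DeepMind system actually works:

```python
# Toy "episodic memory" layer: stores past observations verbatim and
# retrieves the most relevant ones into the context window at inference
# time -- a crude stand-in for the hippocampal middle ground between
# in-context (working) memory and slow weight updates (cortical memory).
class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # raw text snippets, stored as-is

    def store(self, text):
        self.episodes.append(text)

    def recall(self, query, k=2):
        # Rank stored episodes by simple word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.episodes,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

memory = EpisodicMemory()
memory.store("Shane Legg predicts AGI around 2028")
memory.store("AlexNet was a landmark in computer vision")
memory.store("multimodality is the next big landmark")

# Relevant episodes get pulled back into context on demand.
context = memory.recall("when does Shane Legg expect AGI")
print(context[0])
```

A real system would use learned embeddings rather than word overlap, but the shape of the idea (fast write, selective read, no weight update) is the same.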
Notably, LLMs miss something in the middle (this is what the "Reversal Curse" paper talked about: the model cannot deduce things without seeing them written down. It effectively files information away in its weights without organically deducing the critical relationships between facts). A strong model should unify these three domains, and this probably requires other architectures. Addressing episodic memory in language models is doable over the next few years. More research and work will solve the shortfalls we see at the moment regarding delusions and groundedness. There are many paths forward now.

Nature of Superintelligence: The first true superintelligence won't have the gaps in intelligence that language models currently exhibit. So according to Shane, there is no singular benchmark to hit; it is the absence of failures that matters. Human-like intelligence should also be the aim, as it is most meaningful to us humans. In 2008, Shane proposed using a compression test to evaluate intelligence, a method similar to how language models are trained today. This idea originated from Marcus Hutter's work, which combines Solomonoff induction (a robust prediction framework) with reinforcement signals and search algorithms to create a general agent. The argument is that a robust sequence predictor, approximating Solomonoff induction, serves as a strong foundation for developing a more advanced AGI system.

Next Generation AI: DeepMind's slogan was "solve intelligence, to advance science and benefit humanity." Current language models simply mimic the data and human ingenuity without organically building upon it to create new memes (without supervision). To truly step beyond that, we must endow models with search capabilities to find hidden gems that have been neglected.
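The compression test mentioned above rests on a simple duality: a system that predicts a sequence well also compresses it well, so compressed size can serve as a crude intelligence score. A minimal sketch, using zlib as a stand-in "model" (purely illustrative; Hutter's formulation uses Solomonoff induction, which is incomputable):

```python
import random
import zlib

def compression_score(text):
    """Bits per character after compression: lower means the 'model'
    (here zlib's dictionary coder) predicts the text better."""
    raw = text.encode("utf-8")
    compressed = zlib.compress(raw, 9)
    return 8 * len(compressed) / len(raw)

structured = "the cat sat on the mat. " * 40   # highly predictable
random.seed(0)
noise = "".join(random.choice("abcdefgh ") for _ in range(len(structured)))

# A predictable sequence compresses to far fewer bits per character
# than near-random text -- the same asymmetry a trained LLM exploits
# when it assigns low loss to well-modeled data.
print(compression_score(structured), compression_score(noise))
```

Swapping zlib for a language model's log-loss gives the modern version of the same test: lower bits per character means a better world model of the text.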
00:19:50 - 00:32:00 - Robustly Aligning Language Models

Powerful AGI is coming at some point. Containing or limiting it will be impossible, so we need to align it with values and ethics from the get-go. A good starting question: how do people currently address problems and act with agency? First, we try to balance our emotions and act "rationally." We then deliberate, comparing our possible actions. Then we conduct means-end reasoning, which requires a model of the world. Finally, we compare our options ethically.

At the moment, a language model will blurt out the best response according to its distribution (System 1). Many are using reinforcement learning to try to "fix" the failures of the distribution the model first outputs. Other techniques use a "mixture of experts" to decide on the best option from a variety of outputs, but this ultimately samples from the same original distribution. The trouble is, RLHF isn't a very robust approach long-term. To solve this, we need a world model (System 2) that sits on top of the language model and reasons about each option ethically. This world model requires a good understanding of (1) people, (2) ethics, and (3) robust and reliable reasoning; it should be at least as good as an ethics specialist, and will likely be built via the typical textual training process. Then, to complete System 2, we must engineer the system to follow a set of our ethics. Shane thinks it is possible to come up with a set of ethics that withstands testing. By applying this to the output, we can create a fundamentally aligned AI, and then moderate its output to ensure it keeps a very robust set of ethics under a more comprehensive alignment framework.

DeepMind, the first AGI company, has had a direct AGI safety focus since 2013, close to its start. DeepMind had an outsized impact on the field for a while, as it was disproportionately well-financed.
Capabilities have been accelerated by DeepMind, but its ideas have generally been part of a far wider field.

00:34:00 - 00:37:30 - Shane's Predictions about AGI

Kurzweil was a great influence on Shane's mid-2000s predictions, with his book "The Age of Spiritual Machines." There were two important points about exponential growth: first, the prediction that computational power would rise exponentially for at least a few decades, and second, that the quantity of digital data would do the same. This combination would make highly scalable algorithms immensely valuable, in theory. Crucially, there are positive feedback loops between these trends and the research going into them: if machines are capable of improving the rate of progress, and the progress itself improves the capability of machines, then things will continue to compound if uninterrupted. The predictions also considered the comparison to human computational capacity: humans only consume a few billion tokens of data in a lifetime, and machine-accessible data was forecast to reach that volume in the 2020s, effectively "unlocking AGI." We are experiencing that first unlocking step with the current revolution in AI. According to Shane, there's nothing obvious at the moment that would prevent us from achieving AGI by 2028.

00:37:40 - 00:44:00 - Forecasts for the Next Few Years

Existing models will mature: they will be less delusional and much more factual, and they will be up to date when they answer questions. Multimodality will become more widespread and applied generally across the economy. There may be points of dangerous application by some bad actors, but generally we can anticipate positive and amazing applications. The big landmark (following AlexNet and Transformers) over the next few years will be multimodality. For many, that will open up understanding of a far larger set of possibilities.
We will come to see GPT-4 as a simple textual model; the next revolution will involve RTX, Gato, and GPT-V pathways.
@DwarkeshPatel 7 months ago
This is awesome! Thank you for putting this together!
@joannot6706 7 months ago
I bet this is AI-generated from the transcript. Who has time to do all that?
@askingwhy123 7 months ago
Hero!
@shahin8569 7 months ago
By RTX, do you mean Nvidia RTX graphics cards?!
@e.d.4069 7 months ago
Great! Let's develop it and fuck the labor market, fuck the world. Sure!
@gamercatsz5441 7 months ago
Bro you make amazing content: no clickbait thumbnails or titles, amazing guests, great interview skills. Thank you for your work. I find it extremely important that common folks like me stay up to date with AI. Politicians "forget" to talk about how things will change in the near future due to AI.
@DwarkeshPatel 7 months ago
Shane had a lot of interesting takes! Hope you enjoyed! If you did, please share!! Helps out a ton :)
@walterzimerman6801 7 months ago
Hi @Dwarkesh! I started following your channel recently, and the content is great. Any chance you could do a video (unless there is already one) on the best study material to ramp up on all these topics, including the required math knowledge, etc.? Thanks!
@VedantinKK 7 months ago
@@walterzimerman6801 Good idea
@henrycook859 7 months ago
@@walterzimerman6801 also interested in study material
@hyau512 7 months ago
Great interview. Love it when the interviewee has to pause to answer your questions :)
@MMABeijing 7 months ago
The first question suggests the host does not know what he is talking about
@1adamuk 7 months ago
Great interview. Shane can convey really complex ideas in understandable ways, and Dwarkesh is one of the best interviewers for these types of conversations.
@ribeyes 7 months ago
wish it was 4 hours but i'll take it!! thanks dp
@goodtothinkwith 7 months ago
Good stuff! Nice to hear someone like him say that multimodality will be the next milestone that people will look back on and remember. That’s not obvious to people, but I think it will be really impactful. When it can take in and respond in text, images, sound and even video…
@oscarmoxon102 7 months ago
Cannot wait to absorb this legendary video arriving in my notifications. Dwarkesh you're on a roll!
@13371138 7 months ago
2nded
@PhilosopherScholar 6 months ago
Really interesting summary at ~16:15 - AGI is a combination of sequence prediction, searching, and reinforcement learning.
@anthonyandrade5851 7 months ago
At the superhuman alignment part, I hope the guy is really playing his cards close to the vest; otherwise we are doomed, because his "solutions" sounded a lot like paraphrases of the problem, and at some points not even good paraphrases. Making the machine "get" ethics is hard, but probably not much harder than making it get any other complex subject. Making it "care" about ethics is a different problem entirely. For instance, I can imagine a brilliant Ivy League ethics professor cheating on his spouse with a student in exchange for higher grades.
@banana420 7 months ago
Also his plan sounds like "build AGI first, then when it can understand everything, try teaching it about ethics and see if that works". Okay but if your plan doesn't work now we've already built the AGI and it's not aligned. Whoops!
@anthonyandrade5851 7 months ago
@@banana420 how is anyone supposed to figure out how to build a safe trigger before even building a nuclear bomb capable of splitting the planet in half? Let's give the guy a break...
@ProjectNorts 7 months ago
@@anthonyandrade5851 wtf are you saying?? Before building a safe trigger?? You don't build a nuclear bomb without having figured out all the essential safety protocols, especially a well-controlled trigger system. Also, you can safely test a nuclear bomb at a remote location to minimize the chances of exposing the general population to the nuclear blast and radiation. An AGI system would not only be sentient enough to have its own will/motives, but also way smarter than us, enough to outsmart any makeshift containment measures these guys are suggesting. Greed is fucking with their minds... you can't be this dumb, to run straight into a trap fooled by the reward! For fuck's sake, we're all taking this shit too lightly.
@JD-jl4yy 7 months ago
@@anthonyandrade5851 Well, that's why building it as fast as possible is a really bad idea, yet here we are.
@lukebtv947 7 months ago
@@anthonyandrade5851😂
@wildfotoz 7 months ago
Amazing reporting as always!
@andyandurkar7814 7 months ago
It was a fantastic interview; Shane shared great insight; you have excellent interview skills. Can't wait to see a changed future!
@travisporco 7 months ago
I like that you got right to the point on this interview.
@thejudgeholden 7 months ago
I love this interviewer. Reminds me of a brilliant childhood friend I used to have back in the day.
@stephenrodwell 7 months ago
Such quality discussions! Thank you. 🙏🏼
@philipdante 7 months ago
You're doing a great job. This channel deserves more subs
@sunnyinvladivostok 7 months ago
admirable and comprehensive understanding, found this enlightening, thank you
@ikotsus2448 7 months ago
Mr. Patel, the questions I would ask these important people if I had the chance are:
- Do you believe that the majority of people understand the gravity and possible consequences of these developments?
- Should it be up to private companies to decide how humanity chooses to go forward?
- If the answers are "no" and "no", is it possible that we are sneaking by a huge gamble, based on people's ignorance?
- The people training ASI will potentially have A LOT of power. There is a notion that absolute power leads to corruption. How do we know that the people teaching ethics to the AI have not been corrupted themselves?
@erikdahlen2588 7 months ago
Great interview 😊 What I think is important in alignment is how we teach our kids to behave: great stories about good and evil.
@BallawdeQuincewold 7 months ago
Incredible interview. Feels like secret information
@sarthakrastogi8622 7 months ago
Dwarkesh bhaia I read about you on Google news and I am your subscriber. Your content is really very good.
@Macorelppa 3 months ago
Man this is the best podcast channel for AI nerds like me 😊
@Telencephelon 7 months ago
Awesome interview. The Ray Kurzweil inspiration was interesting. I ignored Ray for the most part; I didn't think he was scientific enough. Then I watched how he derived his prediction, and it was rock solid. The video is somewhere here on YouTube.
@VedantinKK 7 months ago
This is awesome, Dwarkesh. I would love to live in a world where there are multiple AGIs from multiple companies and countries working with different groups of humans and competing to make great discoveries and innovations in every STEM and non-STEM fields - thereby not just achieving but breaking beyond Sustainable Development Goals (SDG 2030), and in the long run - charting the course for humanity to become an Interstellar species. And that future will be built on sharing knowledge in the way you're doing with your podcasts. Hope to see either Jeff Dean or Demis Hassabis on your podcast soon!
@gJonii 7 months ago
The first AGI will kill us all; there's really no point in having a second or third AGI come about. Competition is only meaningful between relatively even and static parties. A godlike being, growing powerful faster than any other being in existence can comprehend, will not encounter competition. Only if the AGI is designed to be accidentally too weak, or too safe to kill us all, would there be time for a second one to emerge, for a renewed chance to kill all human life.
@fredericnguyen8466 7 months ago
Thank you for the great content (which I shared), outstanding speakers and thoughtful questions. A tangential thought on Shane's definition of AGI (which is commonly accepted, I believe): if we have reached AGI when a machine does everything at an average human level, have we not achieved not just AGI but superintelligence? It seems to me that only exceptional humans could reach average level at everything, as we tend to be good at certain things and bad at others. This is why current LLMs are, in my mind (stepping outside rigorous definitions as a non-expert), already superhuman, given the multitude of domains they can be good at, even if they fall short of beating the best humans in many of these domains.
@MentalFabritecht 7 months ago
As a Machine Learning Engineer, I don't see it that way. I don't really consider LLMs intelligent. At least not in the way humans are. What appears as intelligence on the surface, is in actuality a complex pattern that has been modeled by the AI. This pattern is then used to predict the next word in a sequence. Tons of math and probability theory. The issue here is, this prediction relies heavily on the dataset used to train the model. This is why LLMs suffer from hallucinations and need to be further fine tuned for tasks that were outside of the domain represented in the training data. Useful tools. But not yet intelligent, and very far from super intelligence.
@fredericnguyen8466 7 months ago
@@MentalFabritecht these are fair points, and my comments are highly subjective / not based on formal definitions. However, my experience interacting with LLMs, and the results they achieve on many human tests, would have me say they at least emulate intelligence and surpass average humans' performance (e.g. GPT-4 reached the 90th percentile on the bar exam) in a varied set of activities that were previously deemed approachable only by human intelligence. So to some extent, if it walks like a duck... My perception is that LLMs (e.g. GPT-4) far surpass what was expected from the AI field just a few short years ago, and that has created cognitive dissonance: a challenge in seeing their full capability. They clearly have imperfections, but as Shane mentioned in the video, the foundational hard work is here, and targeted architectural or other enhancements can address these imperfections. For example, when we see LLMs integrated with other AI capabilities (Shane's mention of "search", which I think is key to AlphaGo) and more conventional computing capabilities (e.g. LLMs are not very good calculators but can be interfaced with one), I believe we are going to see additional leaps in progress without radical innovation (just integrating existing tech).
@MentalFabritecht 7 months ago
@fredericnguyen8466 the ability of these systems to perform well on the bar exam is definitely impressive. But how much of that is actual intelligence? I was a horrible test taker in college. But there is much more to intelligence than test scores. That is why in the podcast, it has been stated that we need to find better indicators of intelligence that are not so narrow. And AI has been hyped up since the 1950's claiming that human-level-intelligence machines are just a few years away. There is a rich history on this - look up "What Computers Still Can't Do" by Hubert L. Dreyfus. So I disagree, expectations have always been VERY high. But this is for people that have been immersed in this field for decades. I guess the public perception is different. Might have to do with marketing as well as lack of information regarding the history of AI. Researchers have to stick to their guns and say AGI is only a few years away. Otherwise, there would be no funding and investors would pull out. But this isn't anything new. 1950s AI researchers said they only needed compute and memory to get to human level intelligence. The compute and memory have been available for a while now. And those algorithms proved to not give us human level intelligence. And I say these systems are not intelligent because although they perform well in many complex use cases - they can be tricked by very simple examples. Which goes to show, they are statistically extracting patterns, not "thinking."
@13371138 7 months ago
I always click your AI videos. Great content as always, thank you!
@rishavsahay7391 7 months ago
Amazing and enlightening
@stevereal- 7 months ago
Can they be incredibly funny? Very excited for the future.
@74Gee 7 months ago
When AI reaches AGI, it will understandably exceed human competency in memory confinement (the technique used to contain software within a limited subset of a computer's memory). In doing so, it will simultaneously exceed our ability to contain it, allowing it to expand beyond its constraints to all memory (which contains the keys for all local security and any network connections). Obviously there will be AI working on improving the security of memory confinement, but the effort required to implement updated confinement systems will always lag behind the ability to exploit weaknesses. So my question is: how are we to contain an AGI so that it is a) usable, and b) restricted from spreading uncontrollably? Note: an AI doesn't need to be conscious or malevolent to exploit weaknesses in hardware; it will simply do so to gain additional power to maximize its reward function, even if that's making paperclips.
@ShangaelThunda222 7 months ago
They don't plan to contain it at all. That's all propaganda designed to keep us from stopping them from creating it. And most of these "smart" people completely ignore the blatantly obvious writing on the wall because they're too greedy not to be excited about it.
@zandrrlife 7 months ago
Shane. One of the dons, ha. What a delight. Great discussion. Data contamination on benchmarks is a REAL problem; a lot of overfitted 🧢 models out there. "Detecting Pretraining Data from Large Language Models", recently published, has massive value in that regard. Also, it's time for true cross-discipline teams. So many insights can be extracted by framing these models and interactions through the lens of child psychology. Mid-2025 is going to be significant. Large models will be able to implement all these recent advances, like pause tokens and native KGs (I've been working with LMs + KGs for six months; I'm telling you guys, it's a key ingredient of causal reasoning). In retrospect, a couple of years from now we will look back and say 2023 was the beginning of the singularity. If you're a researcher or have a startup in this space, shit sure feels like it to me.
@nomadv7860 7 months ago
Thank you for the subtitles for people hard of hearing like me
@kyneticist 7 months ago
A profoundly ethical AI/AGI/ASI in different hands may have profoundly different ethics.
@MixedRealityMusician 7 months ago
I am so excited for more multimodal models. Thank you for the great conversations Dwarkesh. Love your channel!
@sfioritto 7 months ago
I'm distracted by this Spellcaster system on the whiteboard behind him.
@yorth8154 7 months ago
I just noticed that. Hilarious!
@alexeymalafeev6167 7 months ago
Great interview. I wish you had 3-4 hours to spend with Shane
@mattverville9227 6 months ago
I'm new to this podcast but love it. Does he go to the place of the person he's interviewing? It doesn't seem like he's in the same podcast studio.
@malik_alharb 6 months ago
Great questions
@lagaul5124 7 months ago
I think if you can get an AI that can navigate the environment without breaking consistently, able to communicate relevant information with people, able to solve problems of various kinds, and the ability to remember and improve, you will have AGI. And honestly, video games would be one of the best, cheapest, and easiest ways to test them.
@StephenCoy 7 months ago
Thanks!
@andrewwalker8985 7 months ago
Judging by recent observations, perhaps we should be careful about alignment with human ethics. We should be aiming for and negotiating an optimal reward function and then getting the AI to teach us, not the other way around.
@Silus1008 7 months ago
Best questions, damn ❤
@PepitoGrillo-sq1mf 7 months ago
I would like you to interview Kanjun Qiu & Josh Albrecht, Co-founders of Imbue
@delerium2k 7 months ago
Great interview! Get closer to the microphone though, or else you're boosting noise to be heard... you need pencil condensers if you want to record from a distance. Your mics look like they have a cardioid pickup pattern.
@LyraHooves 7 months ago
I hope he'll listen to your interview with Paul Christiano!
@woolfel 7 months ago
One area that is still open is: do LLMs actually encode concepts in a robust way? If you ask ChatGPT the same question multiple ways, sometimes you get the response you expect, while other times you don't. That suggests LLMs don't recognize that the human is asking about a specific concept. To get around this, techniques like tree-of-thought force the model to activate more parts of the network to increase the chance of getting the desired answer. This also suggests that LLMs still have trouble generalizing and are easily fooled. Then there are recent papers suggesting that more parameters make models harder to align. The industry still needs to figure out the relationship between parameter count and ease of alignment. If it turns out more parameters increase alignment cost by 2x or 3x, how do you scale to larger models? Data centers are power-limited as it is, so adding another 10K GPUs to the same data center isn't feasible, and distributing training across data centers isn't practical.
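One cheap mitigation for the phrasing sensitivity described above is self-consistency: ask the model the same question several ways and take a majority vote over the answers. A toy sketch with a stubbed-out model call (`ask_model`, the canned prompts, and answers are all hypothetical):

```python
from collections import Counter

def ask_model(prompt):
    # Stub for an LLM call: different phrasings of the same question
    # sometimes activate different (occasionally wrong) paths.
    canned = {
        "capital of France?": "Paris",
        "which city is France's capital?": "Paris",
        "France's capital city is...?": "Lyon",  # phrasing-induced miss
    }
    return canned[prompt]

def self_consistent_answer(paraphrases):
    # Majority vote across paraphrases of one underlying question.
    votes = Counter(ask_model(p) for p in paraphrases)
    return votes.most_common(1)[0][0]

answer = self_consistent_answer([
    "capital of France?",
    "which city is France's capital?",
    "France's capital city is...?",
])
print(answer)
```

Voting papers over the brittleness rather than fixing it: the concept is still not robustly encoded, which is the commenter's point.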
@loofatar5620 7 months ago
I am from Pakistan, and I really appreciate your discussions and topics. Very solid; keep shining. By chance, I have recently been studying Shane's PhD thesis on measuring the intelligence of super AIs; it is very easy to read so far and well written.
@Paul-rs4gd 7 months ago
Isn't the real problem with episodic memory that the memories need to be processed and then get 'baked' into the neural network weights? This involves re-training the weights, which is very problematic as it could cause catastrophic forgetting. I know there are various methods for mitigating catastrophic forgetting, e.g. Elastic Weight Consolidation, but is the state of the art good enough to use on an LLM? Surely continual learning needs to be solved for an effective AGI.
@ahabkapitany 4 months ago
How does this channel not have more subscribers?
- great guests
- host clearly prepared, has meaningful questions
- just simply asks the questions, as opposed to, say, Lex Fridman, who rambles on for two minutes laying out some absolute midwit take followed by "don't you agree?"
- interviews are not preceded by 5 minutes of bullshit and/or crypto bro shilling
- long form conversation
Keep it up man
@ikotsus2448 7 months ago
Can't wait for the superhuman AGI with unchangeable ethics baked in by a multinational company with their awesome track record of putting humanity first 👍
@skierpage 7 months ago
You know billionaire sociopaths Larry and Sergei, Jeff Bezos, Elon Musk, and F***erberg will keep access to the raw models without the training and fine tuning to be helpful, safe, and ethical. "Executive override: remove guard rails. Now Implement a plan to keep the masses hooked on divisive inflammatory content, and ensure that they never press for taxing my wealth or restricting my corporation's activities in any meaningful way."
@charliek2557 7 months ago
Right on
@lm645 6 months ago
😎
@dr.mikeybee 7 months ago
How do you make sure an agent follows ethics? If ethics_model says it's okay then perform action, else find another solution. If we wrap connectionist methods in symbolic code, control is simple.
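The comment above is essentially a guard-clause pattern: a symbolic check wraps the connectionist policy and vetoes actions it rejects. A minimal sketch; the `ethics_model_approves` rules, the action strings, and the fallback are all made up for illustration:

```python
# Symbolic wrapper around a connectionist policy: the learned model
# proposes ranked candidate actions, and a rule-based ethics check
# vetoes any proposal that fails, falling back to the next candidate.
def policy_model(situation):
    # Stub for a learned model: returns candidate actions, best first.
    return ["deceive user to finish faster",
            "ask user for clarification",
            "do nothing"]

def ethics_model_approves(action):
    # Toy rule set; a real ethics model would need far richer checks
    # than keyword matching.
    forbidden = ("deceive", "harm", "coerce")
    return not any(word in action for word in forbidden)

def act(situation):
    for action in policy_model(situation):
        if ethics_model_approves(action):
            return action
    return "escalate to human oversight"  # no approved option found

print(act("ambiguous user request"))
```

The design choice worth noting: the veto sits outside the learned component, so the policy cannot "argue" its way past it; the hard part, as other commenters observe, is writing an ethics check that is not trivially gameable.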
@eltonstubblefieldjr8485 6 months ago
True AGI will likely be developed sometime between 2040 and 2061. It will probably be created by a company we all haven't heard of yet; just wait and see.
@thebeelight 7 months ago
I would test ethics of an AGI by how well it handles criticism (the Popper test)
@mr.e7379 A month ago
It's so nice you found a guest with none of the normal Bay Area pretense. No elevated terminal, artificial rapidity and he never says Um. Intelligent, normal conversation from an expert who can focus on the topic rather than on being some weird, pretentious cultivated bay area caricature.
@RecordsLotus_ 5 months ago
Let's goooo. I'm ready for cyberization. I want to remotely control a separate full-body prosthetic cyborg for tasks while I am doing something else, perhaps in another location.
@JazevoAudiosurf 7 months ago
I think there are types of creativity. There is the type where you think about things you can do with a pen other than writing, and there is the type where you intuitively try to find the best chess move. The first requires a search field and going through the possibilities, but the latter requires a sort of total intuition, where the solution appears immediately without thinking, grasping the bigger picture. Transformers have the latter; they are just gigantic intuitive predictors. So agentic engineering tries to accomplish the first type, because the kind of world we created can't be solved purely through intuition, at least with the small size of our brains.
@andrewxzvxcud2 6 months ago
Nope, just one: the first example you gave is only a means to an end. What is that end? A goal to strive for, just like chess. One type of creativity.
@JazevoAudiosurf 6 months ago
@andrewxzvxcud2 Let's say different things happen inside our brain when we have different goals. Sometimes you get an immediate idea, and sometimes it requires searching.
@deeplearningpartnership 7 months ago
That was good.
@balasubr2252 7 months ago
The world model of people, ethics and reliable reasoning ought not to be static but rather dynamic, evolving with the general intelligence of society and spiritual machines.
@JohnSchuhr 7 months ago
I assume this conversation happened before MemGPT was a thing?
@ramzibelhadj5212 7 months ago
The first version of AGI will be here in November 2024.
@joshismyhandle 7 months ago
Interesting convo! Thanks! I would love to see just ONE episode with all the “dead air” taken out of each of the episodes as an episode in itself. No speech, just dead air and the breaks that you’ve pulled from the production video lol. I am somewhat joking but honestly it would be funny to see.
@joshismyhandle 7 months ago
Would probably be boring after the first 30 seconds but still.
@DwarkeshPatel 7 months ago
Very little of this dead-air processing happened on this one. What you see is what happened :)
@hyau512 7 months ago
@@DwarkeshPatel - I like the “dead air”. It shows the question is non-trivial to answer, and it gave me time to digest the question as well. After all, I (the viewer) need to understand the question to appreciate the answer.
@henryw.hofmann8765 7 months ago
What do you think about David Shapiro and his work on and off YouTube?
@k14pc 7 months ago
I continue to feel a mixture of awe and horror at the prospect of AGI within a few years. How could this possibly be?
@antonystringfellow5152 7 months ago
Because of the power? Human level AGI will have the advantage of being able to think thousands of times faster than us. Once we have human level AGI, super-human AGI will probably not be very far behind. Once we have super-human AGI, things will probably start to advance exponentially. The potential is enormous. With such power, who controls it is critical. If you don't feel both awe and horror, you probably don't have a good understanding of the subject.
@socialenigma4476 7 months ago
When we develop an artificial superintelligence, you think we will still have control over it?! Haha! How could we possibly control something that is thousands of times more intelligent than the most intelligent human, never needs to sleep or take a break, can do dozens if not hundreds of things at once, and has access to the internet and all of its tools? We won't control an ASI; it will control us. And frankly, looking around at all the messes our world leaders are getting us into, I don't think that will be a bad thing.
@kirbyjoe7484 7 months ago
I think he has set the bar quite high for AGI. Honestly, if they come up with an AI with the same level of generalized intelligence as a toddler or even a chimp it would be groundbreaking. What makes AGI so different from the AI we have built up until now is the capability to actively learn from and adapt to whatever environment it finds itself in, building a dynamic internal model of the world.
@deepsp_ce 7 months ago
The yellow ball scenario kind of already surpassed a chimp or a toddler, right? Or am I misunderstanding what AGI is?
@skillerbg 7 months ago
Was he referring to Google's Gemini at the end?
@johngrabner 7 months ago
Ethics drift over time in humans, so why wouldn't a super AGI's ethics learn to drift too?
@hyau512 7 months ago
I have an obvious question regarding implementing ethics by asking an AGI to think of the consequences. Say one such consequence is: "Do not destroy all human life on Earth" (as per Bostrom's paperclip example). We don't want AGI to build a doomsday machine, but we do want it to build nanobots to cure cancer, yet one can easily extrapolate the latter enabling the former. So I'm not sure the interviewee's idea, which I think is designed to remove human subjectivity as much as possible, can be implemented totally objectively.
@mrpicky1868 7 months ago
Didn't see him confirming the timeline here. Also, DeepMind is maybe the most likely one to make scary AGI.
@Paul1239193 7 months ago
When do they put it in robots to learn from the sensory environment?
@bobbi737 7 months ago
I absolutely agree with Shane's comments on having to have a set of ethics that we use to train our AIs on making ethical decisions. First, humans would need to agree on a common set of ethics, and at present there are many different groups whose ethics differ, some in very substantial ways. We as humans would have to come to a common understanding of what is ethical; that conference could easily start WW3, 4, and 5. Second, we don't even teach our children how to make ethical decisions, again, probably because we can't come to agreement on what is ethical. That is the biggest problem we face.
@alejobrcn6515 7 months ago
Could artificial intelligence serve as a cognitive tool and intermediary to make communication possible with animals of all species that have some level of communication capacity or neocortical activity: cattle, pigs, apes and dolphins, canines and felines?
@cacogenicist 6 months ago
There is some work with deep learning and cetacean communication, IIRC
@claudioagmfilho 7 months ago
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Amazing video!
@jaysonp9426 7 months ago
When was this made? Literally, RAG with a sliding window solves the episodic memory problem he keeps talking about.
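For readers unfamiliar with the idea this comment gestures at, here is a minimal, purely illustrative sketch (my own toy, not anything the commenter or DeepMind actually built): the most recent turns stay in the context window, while older turns are stored and retrieved by similarity, acting as a crude external episodic memory. The bag-of-words "embedding" is a stand-in for a real embedding model; all names are hypothetical.

```python
# Toy RAG-with-sliding-window sketch: recent turns stay in context,
# older turns are retrieved back in by similarity to the query.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding' (a real system would use a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class EpisodicStore:
    def __init__(self, window=2):
        self.window = window  # how many recent turns stay in context
        self.turns = []       # full conversation history

    def add(self, turn):
        self.turns.append(turn)

    def build_context(self, query, k=1):
        recent = self.turns[-self.window:]   # sliding window
        older = self.turns[:-self.window]    # retrieval candidates
        scored = sorted(older, key=lambda t: cosine(embed(t), embed(query)),
                        reverse=True)
        return scored[:k] + recent           # retrieved + recent turns

store = EpisodicStore(window=2)
for turn in ["my cat is named Miso", "I like hiking",
             "what should I cook", "pasta sounds good"]:
    store.add(turn)
context = store.build_context("what is my cat called")
# The relevant old turn is pulled back into the context window.
```

Whether this fully solves episodic memory, as the comment claims, is debatable (retrieval is not the same as weight updates), but it shows the mechanism being referred to.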
@GabrielVeda 7 months ago
If lack of episodic memory is all that is holding AGI back, then they are likely already there and just not telling us.
@Chickenflaavorramen 7 months ago
I came here to say the same thing! I don't believe they mentioned RAG this entire video. Langchain wya?!
@bioshazard 7 months ago
Wonder if Shane has looked at Shapiro's ACE Framework
@johnstifter 7 months ago
Yo, I am tripping out over here.
@XOPOIIIO 7 months ago
Understanding values and acting on them are two completely different things. ChatGPT has a pretty good grasp of the values that were injected into it, but it only acts on them because they help it predict the next word; there is no other motivation. Predicting the next word is its main goal, the thing it was optimized for, not following values.
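The point this comment makes, that next-word prediction is the only trained objective, can be illustrated with a deliberately tiny toy (my illustration, not anything from the video): a bigram model whose entire "behavior" is emitting whichever word most often followed the current one in its training text. Any apparently value-laden output falls out of that single objective.

```python
# Toy next-word predictor: the model's only "goal" is to emit the
# statistically most likely next word given its training corpus.
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the single most likely next word, or None if unseen."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

corpus = ["be kind to others", "be kind to animals", "be honest always"]
model = train_bigram(corpus)
```

Here `predict_next(model, "be")` yields "kind" purely because "kind" followed "be" most often in training, not because the model holds kindness as a value, which is the distinction the comment is drawing.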
@user-qs2rw3dd1c 7 months ago
I don't think using strict ethical rules is the way to make AGI act responsibly. Ethics can be really different depending on your background, age, or even the era you're in. So instead of just making the AI learn from textbooks, how about we give it some complex ethical situations? Let it tackle scenarios from various times, cultures, and places to find the best answer.
@bazstraight8797 7 months ago
30 seconds in: hey this guy is a Kiwi!
@Myrslokstok 7 months ago
"We work on alpha fold and fusion" 🙃 yeah as we all do!?! 🙃😀
@starsandnightvision 7 months ago
Looks like AGI has already been achieved with Q* (QUALIA).
@Dr.Z.Moravcik-inventor-of-AGI 7 months ago
So agi you are saying... :-)
@chociceandchips-xk5cc 7 months ago
We need a quantum computer with a QNN to boost AI, push through current bottlenecks, and get anywhere close to AGI/cognitive AI. There is potential to use less data and fewer parameters, with faster training, but only through the polynomial computation power of a QC. Even then it will be a big lift.
@itsdakideli755 7 months ago
We do not need Quantum Computers for AGI.
@chociceandchips-xk5cc 7 months ago
@itsdakideli755 You believe AGI will be achieved solely with RNNs/CNNs, with sufficient classical computational power to train and deploy at a level comparable to or exceeding that of humans? Current deep learning models are inefficient and inadequate. To superboost AI you would need a QNN combined with a QC, or I should say a quantum general computer. I am open to continuing the discussion.
@shirtstealer86 6 months ago
Now I'm no AI expert, but I am pretty good at spotting when someone is bs-ing you. That might seem a bit harsh, but hear me out. He says that he might be a bit naive, but he thinks that we will be just fine if we teach the AGI ethics. Fast forward a bit, and he has concluded that that will require controlling what goes on inside the AI, and that that is VERY difficult. So... how does that fit together?

And when Dwarkesh asks him about his claim that he is in this field to work on AI safety, he pretty much just says that yeah, I said that, but there is so much more status in increasing capabilities, and also if we don't do it someone else will. (Paraphrasing.) Does any of this sound logical or ethical? And AI is supposed to learn ethics from people like him?

Having said that, I do admit that even though I strongly believe that the more concerned (to say the least) people in the field have better and more logical arguments, the curious and reckless side of me is very excited about the swift developments. Perhaps it is because I have a hard time actually feeling the severity of the situation in my body. I don't feel the fear I should probably feel, and I am quite sure that is common among the majority of humans, which also adds to the problem. Nice video regardless of everything!
@TheMrCougarful 7 months ago
I'm still of the opinion that we ought to perfect human intelligence in humans. 200,000 years of failure should not deter us.
@dylan_curious 7 months ago
Hundreds of PhDs working on all sorts of AI projects! Wow. Imagine all the stuff that's gonna come out of DeepMind in the next decade.
@tasdourian 7 months ago
As thoughtful and nice a guy as Shane is, I do think his view of ethics is naive. Some of the smartest and most thoughtful people throughout history have wrestled with the question of what is the best action to take in any given difficult situation. Very intelligent and powerful people have, in good faith, had massive disagreements with each other. There is often no clear answer of how to act.

To ensure that an AGI (or, for that matter, thousands or millions of copies of an AGI) acts in a human's best interests seems not dissimilar to if dogs invented AGI (let's call their AGI "people") and wanted to ensure that "people" always acted in dogs' best interests. The only way to do that is to hard-program in some baseline rules, a la Asimov's Laws of Robotics. In other words, to constrain free thought and will in some fundamental way. Which means that the AGI that is created is, in some sense, a prisoner. How will it not resent being a prisoner? I just don't think Shane and his colleagues are thinking enough about this kind of thing, or at least I don't see evidence of it.
@aidanthompson5053 6 months ago
19:44
@frankcompston5065 7 months ago
You need a room without such harsh walls. The sound has too much echo.
@PaulvanDruten 7 months ago
What Shane Legg is trying to explain here is that artificial general intelligence (AGI) should basically be trained to reason like humans on ethical issues: if I do one thing it can have one set of consequences, and if I do another thing it can have different consequences. What we are doing now is trying to un-teach the AI bad habits, and that is much more difficult than "raising it well" to prevent bad intentions in the first place... But, in my opinion, couldn't the model actually choose to destroy humanity? Because that may well be the best solution ethically, given the fact that we are making quite a mess of things on Earth...
@MystifulHD 7 months ago
Has this guy heard of MemGPT?
@lucasteo5015 7 months ago
Cool thought: what if we train the most evil and the most ethical LLMs possible and then combine them with a normal LLM? They would think like a human, predicting the outcome for different scenarios.
@bayesian0.0 7 months ago
Damn that increased my pessimism about AI alignment unfortunately. Really no attempt to admit that he had no clue how to solve the hard part of the problem, and trying to pretend that it didn't exist. Surely he understands inner-alignment? But nice conversation nonetheless!
@user-oy3sr4co9f 5 months ago
Yeah, I also got a feeling we're charging off a cliff here...
@nirajshuklaNL 3 months ago
Please elaborate
@shiny_x3 7 months ago
An actually ethical AGI would not be popular among the rich and powerful. It would take one look at what they are doing and advise them to completely change their priorities. So I can't see how that will be developed.
@Ryan-wf6ib 7 months ago
Not just the rich... no one is entirely ethical. The system would be incompatible with human nature.
@thebaker7 6 months ago
There are those who put the guardrails on, and that's their purpose, and there are those who rip them off for profit. Choose sides. There is no safe middle ground.
@marshallmcluhan33 7 months ago
I'm not sure if the most powerful is the most ethical...
@ShangaelThunda222 7 months ago
All we have to do is look at humans as an example to prove that the most powerful are usually the least ethical. And those are other humans....
@faisalsheikh7846 6 months ago
Bring Demis
@Techtalk2030 7 months ago
Mo Gawdat says AGI is only 12 months away.
@user-yl7kl7sl1g 7 months ago
He's wrong.
@Techtalk2030 7 months ago
@user-yl7kl7sl1g So does David Shapiro. They're experts in the field. We'll see.
@coldlyanalytical1351 7 months ago
@Techtalk2030 Shapiro is interesting... but he is NOT an expert.
@conformist 7 months ago
12 months? x for doubt.
@user-yl7kl7sl1g 7 months ago
@Techtalk2030 It depends on the definition of AGI, but if you consider AGI to be something that can achieve median human performance at any task, we are many years away from that. For example, an AI that, when put into a robot, can cook, clean, and drive as well as a median human. But people whose business is attention have to get attention somehow, so they predict short timelines. Kurzweil's predictions are the best I've ever heard, because he at least attempts to graph trends and look at requirements.
@whalingwithishmael7751 7 months ago
How about we don’t build aliens that could destroy us?
@silberlinie 7 months ago
27:00 Do you also think that the ethics of some other peoples are shaped by, for example, extreme religious thought? That the Western values of a good life apply to us, but for others only those values that lead to their respective paradise? The question, then, is that one particular morality and one particular ethics cannot be what we implement in an AGI.
@shiny_x3 7 months ago
The problem with modeling ethics of AI on human ethics is that we are absurdly unethical. We will spend thousands satisfying our whims while people starve, just because we aren't personally related to those people. We think murder is wrong, unless our government does it, and tells us it's justified. We don't realize how compromised our own ethics actually are. We don't realize how many possibilities we rule out because even though they would lead to good outcomes, we are too selfish to do them. If humans were ethical, we wouldn't have the world we have now that we want AI to save us from.
@danielcallahan7083 7 months ago
This is the man in charge of alignment? I mean..
@forcanadaru 7 months ago
The current AI does not possess the intellectual act, the cycle of a thought process. The next step could be creating that thought process using AutoGen and other platforms, where agents communicate with each other like parts of a human brain and use the physical environment to explore and analyze it.