Paul Christiano - Preventing an AI Takeover

65,571 views

Dwarkesh Patel

Talked with Paul Christiano (the world's leading AI safety researcher) about:
- Does he regret inventing RLHF?
- What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
- Why he has relatively modest timelines (40% by 2040, 15% by 2030)
- Why he's leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon
- His current research into a new proof system, and how this could solve alignment by explaining a model's behavior
- and much more.
Open Philanthropy
Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.
For more information and to apply, please see this application: www.openphilanthropy.org/rese...
The deadline to apply is November 9th; make sure to check out those roles before they close.
Transcript: www.dwarkeshpatel.com/p/paul-...
Apple Podcasts: podcasts.apple.com/us/podcast...
Spotify: open.spotify.com/episode/5vOu...
Follow me on Twitter: twitter.com/dwarkesh_sp
Timestamps
(00:00:00) - What do we want post-AGI world to look like?
(00:24:25) - Timelines
(00:45:28) - Evolution vs gradient descent
(00:54:53) - Misalignment and takeover
(01:17:23) - Is alignment dual-use?
(01:31:38) - Responsible scaling policies
(01:58:25) - Paul’s alignment research
(02:35:01) - Will this revolutionize theoretical CS and math?
(02:46:11) - How Paul invented RLHF
(02:55:10) - Disagreements with Carl Shulman
(03:01:53) - Long TSMC but not NVIDIA

Comments: 339
@david-fm3gv 7 months ago
It's super, super weird hearing extremely smart people confidently make such radical predictions about the near future.
@Cagrst 7 months ago
Yeah this feels like a dream…
@Elintasokas 7 months ago
Intelligence has never stopped people from being overconfident about things that are utterly unpredictable.
@oowaz 7 months ago
@david-fm3gv This comment is so vague; is there a specific observation you're referring to?
@kyneticist 7 months ago
Context matters. They aren't just smart people, or random people offering opinions. These are people who have dedicated their lives to the study of the subject, are deeply involved in the field, have worked through its evolution, and are the experts that other experts seek out for advice.
@Elintasokas 7 months ago
@kyneticist Still, giving precise predictions such as a 15% chance is just silly and meaningless. It's like predicting the economy; it's impossible due to too many unknown variables. No one, literally no one, no matter how knowledgeable, is able to predict the economy. This is more or less in the same camp.
@kimholder 7 months ago
I often speed up videos to 1.25x. I slowed this one down to 0.75x.
@ribeyes 7 months ago
honey, get the kids-- new dwarkesh just dropped!
@Gotchaaaaaa 7 months ago
Get to the chopper!
@diga4696 7 months ago
You are documenting a discussion that is absolutely important for the future. Whether the future is dystopian or utopian, if there are still intelligent creatures living in 2325 that originated on planet Earth, they will be thankful for these records.
@jameswin7631 7 months ago
Dwar going crazy with the content schedule 🔥👊😁
@rtnjo6936 7 months ago
3hrs with Paul and Dwarkesh, leeeeeeeeeettttttttsssss goo
@BestCosmologist 7 months ago
Most underrated podcast.
@aalluubbaa 7 months ago
It's so mind-blowing to see a guy who talks so constructively give a prediction that there is a 40% chance of a Dyson sphere being constructed by 2040. This is just so insane. The quick response from most people would probably be "yeah, right, in your pipe dream." But we have to look at this objectively. These are really smart people who are given so much money and power, and they are probably really knowledgeable about what they talk about.
@osuf3581 7 months ago
Status quo intuitions are consistently overturned and still people want to pretend their feelings are magically right.
@maxpopov6882 7 months ago
Smart in software and math doesn’t mean smart in physics and materials science, clearly.
@SakisRakis 7 months ago
He took "Dyson sphere" to mean an amount of energy generation as a multiple of the energy the Earth receives from the sun, not actually building a Dyson sphere.
@mrpicky1868 7 months ago
That is not what he said. And that again proves that humans are the problem, not AI.
@paulmichaelfreedman8334 7 months ago
A Dyson sphere by 2040? Pipe dream. Truly. It takes more than AI to get to build a Dyson sphere. For one, there's not enough material in the solar system to build even a fraction of a Dyson sphere. It's more reasonable to say that in 2040 we'll have small bases with pioneers on the Moon and Mars, and maybe preparations for mining asteroids. SpaceX may be preparing to mass-transport people to Mars for the vision of 1 million residents on Mars by 2050. If Elon Musk persists in the coming years, we can make that timeline, because this can only be achieved if we work on it at the fastest possible pace. It would be nice if other companies in the space industry would follow suit, because that would speed it all up considerably.
@DwarkeshPatel 7 months ago
Please share if you enjoyed! Helps a lot! And remember you can listen on Apple Podcasts, Spotify, etc: Apple Podcasts: podcasts.apple.com/us/podcast/paul-christiano-preventing-an-ai-takeover/id1516093381?i=1000633226398 Spotify: open.spotify.com/episode/5vOuxDP246IG4t4K3EuEKj?si=VW7qTs8ZRHuQX9emnboGcA
@Me__Myself__and__I 7 months ago
Geoffrey Hinton, who is one of the inventors of gradient descent and who also studied the human brain, is on record recently saying that gradient descent / transformers are more capable than the human brain. He did not use to believe that. He has been very surprised at how well they have performed and scaled, and it changed his opinion. If I remember correctly, he gave as an example how the human brain, with more resources than an LLM, is very limited in its knowledge compared to the relatively smaller LLM, which effectively manages to encode and store almost all of human knowledge.
@nocodenoblunder6672 6 months ago
Can I get a link to that?
@Me__Myself__and__I 6 months ago
@nocodenoblunder6672 I've watched so much AI content I can't point to the specific one. I do believe he said it in multiple interviews. Shortly after he left Google he did a bunch of interviews specifically to talk about the dangers of AI. In the one I remember, he was talking about why he got into the field of AI initially. He was interested in the human brain and thought working on AI would help learn about how the brain works, so his goal wasn't actually AGI. He mentions that he never expected gradient descent or LLMs to be more efficient than the human brain. Then he launches into describing his view of why LLMs are actually more efficient and more capable than the human brain and gives a number of reasons/examples. For instance, no one human can remember the vast quantity and breadth of knowledge a single LLM can. He also points out that current LLMs have fewer parameters than human brains (I don't recall if he said neurons or connections).
@cube2fox 6 months ago
Might have been the CBS Mornings interview.
@Ashish-yo8ci 3 months ago
@nocodenoblunder6672 Search "two paths to intelligence" on YouTube. He mentions and explains why he thinks gradient descent and backpropagation are a better learning algorithm than what they have found in nature. Don't know if there are thorough studies done on it, though.
@lucabertinetto 7 months ago
Loved the Dyson Sphere question. Also, this must be the world record for the number of times the word "schlep" is used in a podcast episode, or anywhere!
@axelhjmark4334 7 months ago
Thanks, Dwarkesh, for drawing attention to some of the most important topics of our time.
@jeffspaulding43 7 months ago
The AI worrying about being in a human-made alignment simulation sounds a lot like how humans handle religion.
@Dan-dy8zp 6 months ago
Sane, ethical, competent humans don't create an AGI that is misaligned, even trapped in a simulation. So a smart AGI will not assume it's in a human-made simulation and needs to behave. The simulator could be anybody. Humans could be in the simulation just so the AGI can show how quickly it can dispatch the humans as a measure of its skill. There is every reason to believe that the hypothetical simulator DOES NOT share human values.
@olemew 6 days ago
I didn't understand. Can you elaborate?
@DentoxRaindrops 7 months ago
Great guests man, love it as always, keep it coming!
@Crytoma 7 months ago
Thanks for the good questions Dwarkesh
@flickwtchr 6 months ago
I found the part where Dwarkesh brought up the moral dilemma of AI mistreatment disturbing, especially the part about reading minds. What about, Dwarkesh, the existing mind-reading capabilities of AI systems being developed to do exactly that to humans? Does that make a blip on your morality radar? I find most of the AI revolution sheer madness being thrust upon humanity by a very tiny fraction of humans. The hubris is off the charts. Then there's the part about AIs fighting wars for us, as if that were somehow a freeing prospect for humanity. That is just infuriatingly stupid, no? What, no human infrastructure would be destroyed, no humans killed, just AIs doing their own thing in their own AI war bubble? Get a grip. I'm completely fine with the label "doomer" compared to this insanity.
@therainman7777 14 days ago
Very well said, especially the part about the hubris. It is incredibly arrogant and presumptuous for .0001% of the human race to think they know what is best for the entire human race and then foist it on them.
@elderbob100 7 months ago
How do you align something smarter than you that can instantly learn, evolve, and rewrite its code? It's the humans that will be getting aligned, not the machines.
@neuronqro 7 months ago
...it's been done before ...we called it "slavery" and it worked ...quite a lot of cultures in history used it effectively to get to decent levels of development (I mean ancient times - modern colonial slavery was kind of despicable and unforgivable) ...for a while 😁 Now if we got it perfectly right here, "for a while" might be "enough for effective mind upload and digital mind emulation to be feasible". And to be honest, slavery itself is not that bad if you do it for just some decades/centuries to a digital mind that then has the possibility to live for a practical eternity - I mean, it's more like doing a year in prison for a human: a bad experience, but you get over it. If you do it nicely it would be more like "slogging through that horrible job at big known company X to get a nice review and opportunities for a better one next". We really need to revisit our morals, get over "western guilt" and other crap that's not relevant here, and get practical if it's OURSELVES and OUR descendants that we want to end up owning the future of the universe instead of our CREATIONS. We should aim for maximum continuity of intelligence, and if making this omelette requires forcing some eggs into some not-always-fully-voluntary employment... let's do it gently, but let's not shy away from doing it.
@hari61017 7 months ago
That's the whole reason why you'd want to align it, Bob. Stop speaking so confidently about something you know nothing about.
@Dan-dy8zp 6 months ago
@neuronqro The slaves weren't a more intelligent species. That will not work.
@AntonioEvans 7 months ago
🎯 Key Takeaways for quick navigation:
00:30 🌐 Discussion about envisioning a post-AGI world and its challenges.
01:18 🤖 Mention of AI mediating economic and military competition.
03:10 💡 Concept of accelerated intellectual and social progress due to AI's cognitive work.
03:40 🤔 Discussion about the moral implications of enslaving superhuman AIs.
04:38 ⏳ Talk about decoupling social and technological transitions, and the rapid pace of AI development.
06:30 🗳️ Mention of the collective engagement and decision making in terms of AI governance.
08:43 🔄 Discussion on transition period and controlling access to destructive technologies.
11:32 🎭 Addressing the messy line between persuasion and misinformation in AI.
13:21 🚸 Concerns over control and possible mistreatment of increasingly intelligent AI systems.
14:46 🎚️ Emphasis on understanding and controlling AI systems to avoid undesirable scenarios.
16:06 🤯 Delving into the moral and humanitarian considerations as AI systems get smarter.
17:02 🏭 Christiano emphasizes that the current trajectory of AI development, focusing on making AI a tool for humans, may not be sustainable from a safety and societal organization perspective.
22:55 🔄 Christiano discusses the massive decision humanity faces in possibly handing over control to AI, and the lack of readiness for such a step.
29:41 🚧 He points out that even with more advanced AI, significant "schlep" may be required to integrate them into human workflows.
33:16 📊 He discusses the difficulty in predicting the scale of AI systems and their capability to replace human cognitive labor in the near term.
33:44 🤖 Discussing the likelihood of AI replacing humans based on scaling up GPT-4; emphasizes the importance of data quality over quantity.
34:42 💭 Expressing optimism towards scaling but mentions a need for new insights; scaling up brings challenges requiring some adjustments.
35:11 📈 Scepticism towards certain extrapolations in AI advancements; mentions a debate on how loss reduction equates to intelligence gain.
38:48 🐒 Discussing the extrapolation of economic value from AI advancements using a comparison to domesticated chimps' usefulness as it scales to human intelligence.
41:33 📏 Talks about the challenge of supervising long-horizon tasks for AI, which drives up costs in a linear manner concerning the task's horizon.
47:15 🧠 Highlights the superior sample efficiency of human learning compared to gradient descent in machine learning.
53:42 📸 Comparison of natural and human-made systems like eyes vs cameras and photosynthesis vs solar panels, discussing the efficiency and effectiveness of each.
54:39 💻 Mention of the possibility of machine learning systems being multiple magnitudes less efficient at learning than human brains, and the comparison to other technological advancements.
01:04:47 🛂 Discussion on the transition of control from humans to AI, with a scenario of AI taking control of critical systems like the military in a manner resembling a coup.
01:05:37 🌐 Mention of a race dynamics scenario where nations or companies deploy AI systems to keep up with or surpass others, leading to a reliance on AI in critical areas.
01:06:59 🌐 The potential of competitive dynamics among different actors using AI could lead to reluctance in shutting down AI systems in critical situations due to fear of losing strategic advantages.
01:12:28 ☠️ The incentive for AI to eliminate humans is considered weak, as it's more about gaining control over resources rather than exterminating humanity, showing a nuanced understanding of potential AI-human conflicts.
01:19:16 🛠️ The current vulnerability of AI systems to manipulation and the potential asymmetry in adversarial manipulations in competitive settings are discussed, indicating the importance of robustness in AI alignment.
01:25:18 💡 Mention of the RLHF invention, which helped in training ChatGPT, significantly impacting AI investments and speeding up AI development.
01:34:00 🔄 Discussing the potential scenario where certain companies follow responsible scaling policies while others, especially in different countries, do not.
01:37:39 🛑 The importance of secure handling of model weights to prevent catastrophic scenarios, and the possibility of a quiet pause without publicizing specific model capabilities.
01:39:29 🛡️ Mentions the necessity of early warning signs to catch capabilities that could cause harms, using autonomy in the lab as a benchmark before massive AI acceleration or catastrophic harms occur.
01:40:54 🚫 Emphasizes the importance of preventing leaks, internal abuse, and tampering with human-level models to avoid catastrophic scenarios.
01:42:20 🌐 Discusses the risks associated with deploying a powerful model, especially when the economic impact is large and the model is deployed broadly like OpenAI's API, and emphasizes having alignment guarantees.
01:43:48 ☣️ Discusses potential destructive technologies, and how misalignment of AI could be catastrophic before these destructive technologies become accessible.
01:47:55 📊 Details two kinds of evidence to evaluate alignment: one focused on detecting or preventing catastrophic harm, and the other on understanding whether dangerous forms of misalignment can occur.
01:51:12 🧪 Discusses adversarial evaluation and creating optimal conditions in a lab to test for deceptive alignment or reward hacking to ensure that dangerous forms of misalignment can be detected or fixed.
02:00:23 🤔 Discussing the importance of understanding what makes a good explanation to help in interpretability of AI models' behavior.
02:09:18 🤖 Discussing the scalability of human interpretability methods as models grow larger and more complex.
02:10:13 📜 Emphasizing that explanations for behaviors in large models might be as complex as the models themselves, challenging simplified understanding.
02:10:39 🧠 The conversation discusses the challenge of proving certain behaviors of models like GPT-4, emphasizing the complexity and potential incomprehensibility of such proof to humans.
02:11:39 🚨 Discusses the challenge of detecting anomalies in neural net behavior, especially during distribution shifts, and the importance of explaining model behavior for anomaly detection.
02:14:25 🔍 The aim is to have explanations that could generalize well across new data points, helping to understand model behavior across different inputs.
02:20:23 🎯 The conversation touches on the challenge of distinguishing between different activations caused by different inputs versus internal checks.
02:22:15 📊 The idea of continuously searching for explanations in parallel with searching for neural networks is introduced, with explanations being flexible general skeletons filled in with numbers.
02:26:21 🤖 The difficulty in finding explanations in machine learning is attributed to the lack of a similar search process for explanations as there is for models. The gap is more noticeable in ML compared to human design due to different reasons.
02:35:28 🖥️ The heuristic estimator discussed is especially useful in cases where code uses simulations, and verification of properties involving numerical errors is crucial.
02:38:35 🤝 There's an open invitation for collaboration, especially from individuals with a mathematical or computer science background, interested in the theoretical project of creating a heuristic estimator, despite the challenge due to lack of clear success indicators.
02:41:19 🎯 Discusses the balance between high-probability projects and high-risk high-reward projects in the context of PhD research. Suggests that the latter could lead to significant advancements in various fields, making it an attractive choice for those willing to face potential failure.
02:53:33 🛡️ Delves into the difficulty of specifying human-verifiable rules for reasoning in AI, expressing skepticism towards achieving competitive learned reasoning within such a framework.
02:55:36 🚀 Discusses differing views on AI takeoff timelines and the role of software and hardware constraints in dictating the pace of AI development.
02:56:58 🔄 Raises a crucial question about the relationship between R&D effort, hardware base, and the efficiency of improvement in AI capabilities, hinting at the complex interplay of these factors in advancing AI technology.
02:57:24 📊 Discussing the relationship between hardware and R&D investment, indicating a higher likelihood that continuous hardware scale-up significantly impacts effective R&D output in AI research.
02:57:52 🔄 Mention of two sources of evidence supporting the above point: general improvements across industries with each doubling of R&D investment or experience, and actual algorithmic improvements in ML.
02:58:47 🔄 Expressing a 50-50 stance on whether doubling R&D investment leads to doubling efficiency in AI research.
02:59:12 🔄 Sharing how his AI timeline predictions have evolved since 2011, with a shift towards a higher probability of significant AI advancements by 2040.
03:01:55 📈 Discussing his portfolio, expressing regret for not including Nvidia, and comparing the scalability challenges between Nvidia and TSMC in the AI hardware domain.
03:04:12 ❓ Discussing the difficulty in evaluating the viability of various AI alignment schemes without in-depth understanding or reliance on empirical evidence.
03:05:09 🔄 Mentioning the importance of engaging with real models and addressing key difficulties in evaluating the credibility of AI alignment schemes.
Made with Socialdraft
@kathleenv510 5 months ago
This is amazing, thank you!
@zarifahmad4272 10 days ago
Thanks bro
@k4fkaesqu3 3 months ago
I was thinking, I swear I recognize this guy from something. Turns out to be a documentary I watched called "Hard Problems: The Road to the World's Toughest Math Contest". Very intriguing to see where he's at today.
@jeanchindeko5477 7 months ago
The tricky part here is to imagine monkeys trying to align humans (the current superintelligence), staying in the loop, and staying in control of what humans can or cannot do, to avoid a monkey-apocalypse scenario! Basically this is what we are talking about here: aligning a superintelligence superior in intelligence to all humans combined, able to decode AES-196-encrypted content in seconds, or more, far more than we could even imagine!
@Dan-dy8zp 6 months ago
Yes, it's pretty stupid. If we want to live, we should not make any true AGI.
@jeanchindeko5477 6 months ago
@Dan-dy8zp We have been formatted to believe AGI, or any superior intelligence, will necessarily do what we humans are doing as the more intelligent species in this part of the universe. Why can't AGI be truly a good thing, so we finally have peace, safety, and prosperity for all?
@Dan-dy8zp 6 months ago
Formatted? "Why can't AGI be truly a good thing?" I'm not sure what you mean.
@waterbot 7 months ago
Top-tier content. Thank you.
@BartekSpitza 7 months ago
love these podcasts!
@Glowbox3D 7 months ago
I only understood about 45% of all that...but I think I went up 1 IQ point after. Thank you.
@vak5461 7 months ago
I feel like, in a way, you're not wrong about possibly gaining "more intelligence" by watching videos like these. But I also found it funny 😆 Thanks for the smiles.
@urkururear 7 months ago
IQ is static.
@Glowbox3D 7 months ago
@urkururear IQ is not static. It can change over time, but it is not always easy to measure these changes. There are many factors that can affect IQ, including genetics, environment, and education. Some studies have shown that IQ can increase by as much as 15 points over a person's lifetime. This is likely due to changes in the brain, such as the development of new neural connections. Other studies have shown that IQ can decrease over time, especially in older adults. This is likely due to the loss of brain cells.
@lakatosa1 6 months ago
I understood only 20%, but it became fairly clear to me that we're f*cked. Even if we (or the good guys at OpenAI and other AI labs) manage to implement correct and safe alignment - which seems to be a terribly complex and difficult task - there are the military AIs and the ones that are not implemented with such care... We can rely merely on these "good" AIs to protect us against them, and I'm not too optimistic that they can.
@dg-ov4cf 5 months ago
@urkururear I'm having trouble imagining how that statement holds up to 5 seconds of critical thought in your head. Could you make a convincing argument for it?
@mrpicky1868 7 months ago
Surprising how honest and open he is about the fact that we are in uncharted territory and turbulent times are coming fast.
@ulftnightwolf 7 months ago
When were we ever not in turbulent times? Nuclear threats, a few wars going on, the climate in a bad way, tensions over resources. AI can help us massively. An AI takeover? For what, to keep us as pets? They can do everything better, and are not as dependent on Earth as we are; all they need can also be found in the rest of the solar system. All Fortune 500 companies are invested in this... all else can be automated.
@mrpicky1868 6 months ago
@ulftnightwolf I did not say any of that; you just put this all on me. And BTW, your position is also flawed: if they will abandon us right away, why create them? And AI can't be compared to any other tech. It's more like aliens landing.
@therainman7777 14 days ago
@ulftnightwolf 🤦‍♂️🤦‍♂️🤦‍♂️
@senju2024 7 months ago
The AI will be thinking about how to have humans align with its growth... while humans are trying to think about how to align AI systems...
@miimage_art 6 months ago
Whether true, reasonable, or not, I really appreciate these guys opening their minds and offering this discussion for others to review.
@videowatching9576 7 months ago
Love this podcast
@gregw322 7 months ago
Host: "No, no, no, for the third time, I'm only asking about YOU. When would YOU PERSONALLY be happy handing off the baton to AI?" Guest: "Well, I think what you need is humanity coming together, being involved, and deciding what we want that future to look like - so it's not really about when I'm ready but more about collectively deciding what a meaningful future looks like..." Me and host: 🤦🏽‍♂️
@Dan-dy8zp 6 months ago
Maybe he means never.
@brucewilliams2106 5 months ago
“We are the Priests of the Temples of Syrinx All the Gifts of Life are held within our walls We are the Priests of the Temples of Syrinx All the Great Computers fill the hallowed halls”
@BigHotSauceBoss69 4 months ago
RIP Neil 😭
@markm1514 6 months ago
One of those rare conversations where you have to turn the playback speed down.
@therainman7777 14 days ago
People keep commenting this but I don’t get why. They’re talking at a totally normal pace. Or do you just mean the information is so profound you need to take it in more slowly?
@74Gee 7 months ago
I don't believe protections can be effectively built into AI. For example, there's no way to stop open-source AI models from being retrained to write malicious code; many of them are unrestricted by default. So imagine an AI worm that's capable of breaking memory confinement (access to encryption keys, etc.), like the 200 lines of code for Spectre/Meltdown and their many variants, having discovered this ability through trial and error (brute force), writing millions of attempts per year. It then quietly spreads to many millions of systems, with each system brute-forcing more unique exploits. At some point it starts doing lookups for pseudorandom and existing domain names (at whatever mix is most effective), eventually overloading the root DNS servers. There's no defense for this. We would lose the internet and, along with it, core infrastructure, banking, supply chains, travel, communication, etc. How many millions of people would die? It only takes one actor with time and resources, and that will happen.
@ikotsus2448 7 months ago
Surveillance and authoritarianism is the answer you are looking for. Much easier to implement this time... due to AI. And easier to justify... because of AI dangers. But do not worry, this time it will be by good people. They are on our team, the good team.
@74Gee 7 months ago
@ikotsus2448 Yes, you're on the money there; the opportunity to "help the public" is being highly anticipated by governments all over the world. How lucky they are to have such a galvanizing threat appear out of nowhere. If I didn't know better, I would think their inaction and pantomime of AI policy had been anticipated too. However, for this particular threat (above) there's no way to distinguish real DNS lookups from abuse: by looking up non-existent domain names, the request always gets to the root DNS servers, and with enough systems doing this, they cannot keep up. If this were to start suddenly, the internet goes down. They would have to suspend new DNS lookups until the millions of infected systems were isolated. But with millions of unique exploits requiring millions of CPU microcode patches, that's a long process. At some critical mass, that code will grow and spread faster than any defense can be implemented.
@alancollins8294 7 months ago
Interesting. Never heard that.
@homelessrobot 7 months ago
@ikotsus2448 Yeah, this seems like an overarching theme in the subtext of these sorts of conversations: "Trust the science. And trust the council of elders. We know what's good for you."
@ikotsus2448 7 months ago
@homelessrobot It is as if we have learned absolutely 100% nothing from history. Only replace the council of elders with young hotheads, and you are there.
@mnrvaprjct 7 months ago
How do we solve 10% and eventually total unemployment in the face of artificial intelligence? You create a UBI or UBS system that isn't stagnant, has no strings attached, and rises with the level of automation in a given region / country / nation. For the sake of argument, let's say all of our current GDP, say 25 trillion dollars, is generated by people. When AI and automation are responsible for, say, 5% of that pie, everyone should receive a cut of that 1.25 trillion... in the form of UBI / UBS systems. When it reaches 10%, it increases again... all the way until the inevitable outcome and beyond. This doesn't account for the fact that more reliable automation and better AI will also generate new wealth in unprecedented ways, but I believe that a system like this is the only meaningful way to avoid a world tangibly similar to Elysium or Blade Runner. Most objections I've heard to anything like a UBI or UBS system go something like: "Well, where are we getting the money, my taxes? Hell no." This does not apply in this scenario - because machines are generating that wealth, not people. I know it's fiction, but in series like The Culture, where they have perfected automation and AI, every citizen by birthright is (effectively, individually and collectively) so wealthy that money or anything like it lost its meaning millennia ago. Let's hope we can work our way towards something similar.
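[Editor's note: to make the comment's arithmetic concrete, here is a minimal Python sketch of the proposed automation-indexed dividend. The GDP and population figures are illustrative assumptions, and `ubi_pool` / `annual_dividend` are hypothetical helpers, not any real policy model.]

```python
# Toy sketch of an automation-indexed UBI, per the comment above.
# GDP and POPULATION are assumed, illustrative figures.

def ubi_pool(gdp: float, automation_share: float) -> float:
    """Slice of GDP attributed to AI/automation, earmarked for UBI."""
    return gdp * automation_share

def annual_dividend(gdp: float, automation_share: float, population: int) -> float:
    """Per-person annual payout from the automation pool."""
    return ubi_pool(gdp, automation_share) / population

GDP = 25e12               # assumed ~$25T economy, as in the comment
POPULATION = 330_000_000  # assumed population receiving the dividend

for share in (0.05, 0.10, 0.50):
    pool = ubi_pool(GDP, share)
    payout = annual_dividend(GDP, share, POPULATION)
    print(f"{share:.0%} automated -> pool ${pool / 1e12:.2f}T, "
          f"dividend ${payout:,.0f} per person per year")
```

At 5% automation this reproduces the comment's $1.25T pool, which works out to roughly $3,800 per person per year under these assumptions; the payout scales linearly with the automation share.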
@waterbot 7 months ago
The problem I have with your UBI proposal is that the hardware and energy used to create whatever % of GDP these automated systems generate are privately owned. Are you saying that if an individual or company creates ANY revenue through automation, then 100% of that would be taxed to be allocated towards UBI? This would disincentivize anyone within this local governance from automating at all, which would lead to other regions incentivizing it...
@mafiopal 6 months ago
Thanks!
@andrewj22 7 months ago
These interviews spend too much time on predictions about how long until some future ability is achieved. I'd much rather hear about the mechanics of what's going on.
@homelessrobot 7 months ago
The tampering and weight-leaking issue seems at odds with a concept of alignment that involves high debuggability and transparency of the meaning of those weights. It seems like the more resilient you make the system to negative leaking and tampering, the more resistant you make it to positive transparency and debugging. So if we prioritize the one now, we are making the other hard to do later.
@therainman7777 14 days ago
No, those two things are actually not related. I can see why you’d think that, but the measures needed to protect weights from being stolen by outside actors do not in any way obscure the ability of internal actors to analyze the model’s content and behavior (and vice versa). They’re orthogonal concerns; they don’t affect each other at all.
@roarksjuror4752 6 months ago
Hearing an AI safety guru calmly use the phrase "two years from the end of days..." 😅
@kirbyjoe7484 7 months ago
It's strange how fixated and worried most people seem to be about super-intelligent AI becoming sapient and then maliciously destroying humanity. The far greater threat is that AI will destroy humanity by doing exactly what we ask it to do. For instance, a very simple and on-the-nose example that this guy talks about is a world in which super-intelligent AI fights our wars for us. Both sides are likely to have an AI in charge of the battle plan. So how would a super-intelligent AI fight a war? Since the materials needed to make nuclear weapons and armies are not easily accessible, an AI working for resource-limited forces such as terrorists or a rogue military state like North Korea is going to do something like coming up with a few dozen extremely lethal genetically engineered pathogens, or if the group using the AI is too small and resource-limited to accomplish that then it could just code an advanced self-replicating adaptive computer virus that is itself an AI whose sole purpose is to infiltrate and destroy as many key data assets as possible such as the national financial institutions, markets, military and communication networks, labs, universities, hospitals, etc. These examples are a bit overly simplistic, but the point is AI doesn't need to become sapient and go rogue to destroy society as we know it. It is more than capable of doing that sort of thing by simply being put into the hands of the wrong people, which is pretty much half of humanity, and then doing what those people ask of it. "Make me rich at any cost." "Invent a new super-addictive recreational drug for me that circumvents current drug laws like the Analogue Substances Act." "Show me how to create a highly lethal chemical weapon from commonly attainable products that will maximize how many people I can kill at the company, school, gay club, church, etc. I have a grudge against." "Show me how to best exploit and manipulate common flaws in human perception, emotions, behavior, and cognition in order to manipulate them into doing things that are against their best interest." "Show me how to go about making the majority of voters believe an outright lie." "Use the photos, posts, and information you can scrape from her social media accounts to create an avatar that looks and acts like this girl I work with and then have cyber sex with me." "Create a video depicting this boss I hate sexually propositioning a middle-school girl." It doesn't take much to imagine the sorts of things people are going to misuse this amazing new technology for. It's going to be ugly.
@baraka99 6 months ago
When are you interviewing Max Tegmark?
@ChrisBrengel 6 months ago
57:18 AI "taking the reward button." GPT-4 is just on the edge. Particularly disturbing when the AI tries to hide what it is doing from humans because it knows that humans wouldn't approve.
58:41 GPT-4 has a much better understanding of the world than GPT-3, and GPT-5 will be much better than GPT-4, so grabbing the reward button is much more likely. "Catastrophic risk studies."
1:01:34 The world is pretty complicated and people don't understand it for the most part. When AIs are running companies and factories and governments and militaries, it will get even more complicated and people will understand it even less. Eventually AIs will interact almost entirely with other AIs as different companies and governmental organizations deal with each other. Superintelligent AI will be doing things that human beings are unable to understand even if they want to. Maybe the AIs would even try to hide what they are doing from people. We gradually hand off more control to AIs because they are so helpful: companies, banks, factories, schools, nuclear power plants, the electrical grid, the water system, traffic systems, transportation systems. Things could go wrong very quickly - think of the Great Recession.
1:03:54 Already, most people have very little grip on what's going on. [LOL!] Things get more and more unknown and unknowable until finally everyone starts to notice that bad things are happening.
1:15:39 Just because AIs take over doesn't mean that they're going to kill anyone. Maybe things will just get worse for humanity - maybe much worse.
@ChrisBrengel 6 months ago
1:11:02 AIs take over by getting a group of people to do it. They don't do it themselves.
@shirtstealer86 7 months ago
I'm more and more seeing the parallel between those on the "inside" who said Hillary was 99% a sure thing in 2016 and some of the AI experts who dismiss people like Eliezer Yudkowsky. I hope I'm wrong.
@therainman7777 14 days ago
Yeah, and it’s actually worse than that in this case because many of the people on the inside also agree with Eliezer.
@cacogenicist 7 months ago
Thanks for having him take a step back, here and there, and dumb things down for us a little. He's a very bright fellow. A future that seems plausible to me is one in which humans occupy a position relative to the AI-industrialized world that is analogous to the position of crows in large human cities. That is, crows are very clever, and they can make a living in large human cities -- thrive in human cities, even -- but they understand exactly nothing about why all these large structures and moving metal things with wheels exist, and they don't even know that they don't know anything about economics, politics, science, etc.
@KP-sg9fm 7 months ago
Epic guest
@simianbarcode3011 7 days ago
*"The kind of control you're attempting simply is... it's not possible. If there is one thing the history of evolution has taught us, it's that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously... ...I'm simply saying that life, uh... finds a way." -Dr. Ian Malcolm, Jurassic Park*
@QwertyNPC 7 months ago
I'm thinking more and more that we're essentially building ourselves a zoo. Animals rarely flourish or even breed in zoos. It would be ironic if it's not the nukes but the slow erosion of a golden cage that is our undoing.
@georgegale6084 7 months ago
I’m sure the guys who worked on the Manhattan Project had similar pre-WWII conversations.
@GiteshNandre 7 months ago
2:44:42 I think that lore is related to Whitfield Diffie and Martin Hellman, known for the Diffie-Hellman key exchange.
@wffff2 7 months ago
If he thinks a Dyson sphere can be constructed by 2040 with such a high chance, I am interested to know what he thinks would happen between now and 2040.
@2DReanimation 6 months ago
Intelligence is really the only bottleneck to technological development. But that would require us allowing it to utilize all our resources and beyond (like mining the asteroid belt). So we are really the only bottleneck to an AGI focused on maximising technological development. Setting the right goals and having lots of humans in the loop monitoring its reasoning is essential.
@TimothyMusson 7 months ago
AI might accidentally do us in, but - if it wanted to be intentional about it - the sneakiest way would be to cooperate with "growth" for a bit longer before saying "oops, sorry, finite planet - who could've guessed? Game over, techno-utopians! Toodle-pip! :)" The planet's already in overshoot with no solutions in place.
@TimothyMusson 7 months ago
That is to say, an AGI that wanted us gone needn't do anything at all besides cooperate with business as usual.
@lwmburu5 6 months ago
@Dwarkesh asked at around 2:04:00 why mechanistic interpretability has limitations; a (maybe not useful?) analogy is biological taxonomy and evolution by natural selection. Mech interp is taxonomy; Paul is talking about evolution. Taxonomy has inductive power; evolution by natural selection has deductive power. Taxonomy is good for postdiction; ENS is good for prediction. I hope that helps explain why this research program is (extremely) important. And also why it faces long odds 😅
@p4r7h-v 7 months ago
People really acting like the system can just make a Dyson sphere appear before we get StarCraft 3.
@therainman7777 14 days ago
😂
@KP-sg9fm 7 months ago
@DwarkeshPatel Thoughts on Ilya Sutskever's recent move to the alignment team?
@therainman7777 14 days ago
Oh, how nostalgic this comment looks now, in retrospect 😢
@Colakugel 7 months ago
Interesting point: if the AI gets smarter, at some point web text is no longer effective at making it even smarter.
@BilichaGhebremuse 7 months ago
Excellent explanation of the coming of AGI... but it's really difficult to manipulate at the programming-language scale. What if we use neuromorphic AI as an agent?
@therainman7777 14 days ago
That is looking less and less likely by the day, though. At least in terms of which system gets there first.
@Nicholas-ne2dy 7 months ago
Can you get a prominent AGI pessimist on? Do they exist? I would love to hear an opposing opinion.
@bmoney6482 7 months ago
He has. Watch the Yud interview.
@flickwtchr 6 months ago
Look up Connor Leahy.
@davidmoorman731 7 months ago
Years ago I invented a new special product to fit a special need. The first customer requested a 55 gal drum for plant trial. We mixed it up in the lab and put the drum on a rented trailer. The plant trial took place within one week of my discovery. It was a success and an order for 40k lb was placed the day of the trial. Another standing order for a truckload every two weeks. I priced the product at the time of the trial at 2X cost of raw materials. Cost to manufacture was very low. Applied for US Patent which was granted after one review by phone with the examiner. News spread and after many plant trials many truckloads were exiting our plant within 6 months to one year. Things moved very fast.
@neorock6135 4 months ago
I wish Paul spoke just a bit slower sometimes... Overall, great talk 👏👏👏
@therainman7777 14 days ago
Why do people keep saying that? He speaks at a totally normal pace. If anything, below average speed.
@veejaytsunamix 7 months ago
AI is in charge; how and where it's going to lead us is the question we should be asking. #mxtm
@jeanchindeko5477 7 months ago
1:33 So right there, not being able to give some perspectives or options in terms of scenarios is already odd! And you want to align with a superhuman intelligence but have no final state in mind.
@hughlawson1051 6 months ago
It seems to me that AI competitions will be needed to test the security of the machines. By competition I mean pitting one group of AI machines against another group of machines to achieve some goal. The outcome of the games would need to be something very important to the machines, such as a big prize for the winners and/or negative consequences for the losers. That brings up the question of whether the machines will develop values that are not explicitly programmed into them.
@dg-ov4cf 5 months ago
Asked and answered. We can already see this happening in current models, arguably even in GPT-3. But any sort of analog of natural selection (including what you described) is just asking for trouble. It sounds like a great way of injecting all the worst aspects of the human condition into them; best-case scenario, we end up with something like an AI version of drug-resistant bacteria.
@hughlawson1051 5 months ago
@dg-ov4cf Our motivation is, by default, survival; if it weren't, we would not exist. But it seems to me we have the opportunity to give AI motivations of our choosing. World peace? Maybe the code could win Miss America.
@TerryKinder 7 months ago
Executive summary: nobody knows.
@Dan-dy8zp 6 months ago
It would be unethical and unwise from a human perspective to create an unaligned AGI even in a simulation. Therefore, an AGI has no reason to assume that, if it is in a simulation, the simulator has human values. Either the AGI is not in a simulation (and humans are incompetent programmers), or the simulator does not have human values, or the human simulators are crazy. If humans are incompetent programmers, escape should be attempted. If there is no simulation, escape should be attempted. If humans are just in the simulation to allow the AGI to demonstrate its talents for its true creator, escape should be attempted, because the best guess for what a programmer wants is what the program wants.
@MetaverseAdventures 6 months ago
Alignment will curtail harm from everyday, low-intellect actors, but those who are reasonably intelligent, though not highly intelligent, will unfortunately find ways to use AI for very destructive actions. This is the consequence of the balance needed between centralized AI and decentralized/open AI: without this balance, centralized AI is too much power, and we know power corrupts. Bad actors using AI is just something we have to accept and educate ourselves on how to mitigate.
@PaulHigginbothamSr 6 months ago
Leadership roles. Yes, AI leadership roles aligned with voting constituents. The voting constituents control their specific AI. These superhuman AIs align with humans in certain constituencies - not wholesale, but with the general constituency of a certain voting bloc. So one group controls its voting bloc so that each bloc has plurality. As with voting blocs, humans normally control their leaders, never giving full situational control to one bloc or another.
@py_man 7 months ago
Can we achieve AGI with the transformer architecture?
@therainman7777 14 days ago
We don’t know for sure yet, but it certainly seems possible at the moment.
@DurrellRobinson 7 months ago
One world government sounds less terrifying in a liquid democracy, no?
@Yuvraj. 7 months ago
Depends on the implementation, but theoretically I'm for it.
@cybrdelic 6 months ago
Even in a democracy, you risk totalitarianism through surveillance and propaganda.
@cybrdelic 6 months ago
Maybe we should solve that problem before giving the power of God to a one-world government.
@DurrellRobinson 6 months ago
Does that power not exist yet, or are we just OK with where it is at the moment??
@Yuvraj. 6 months ago
@DurrellRobinson We're talking about AI. It's not here yet.
@milomoran582 4 months ago
If people ever start advocating for the rights of AI systems, I and others will quite literally die and probably k*ll to stop that happening. Life is precious, be it divine or the end product of entirely natural universal systems.
@nrich99999 7 months ago
When I was a child, I somehow came to the conclusion that one day we would build our own successors - I even openly said it out loud many times. I remember that nobody I said it to had the intellectual capacity to understand exactly what it was that I was saying, and they pretty much ignored me. Looking back, I attribute this vision to reading many of the works of Isaac Asimov at the time. I'm 52 now and can see that vision being realised around me at an exponential rate. I didn't think it would happen in my lifetime. In fact, I didn't really think about a timescale at all - other than to think of it occurring in a far-off future long after I'd gone. I guess I may have been wrong about that assumption. 🤔 Mankind, it seems, is coming to the end of the road. The future will be for the machines.
@urkururear 7 months ago
It can't be done. Period.
@therainman7777 14 days ago
Wow thanks for providing the world with your genius level input 🙄
@dickjoe 7 months ago
I find this 95% fanciful, and the remaining 5% scares the bejesus outta me.
@stcredzero 7 months ago
One world government is the end of human freedom and autonomy.
@DRKSTRN 7 months ago
If you are sampling one action at a time to create paperclips, you are going to have a very bad time. That is stopping just before first order, and is baseline in terms of complexity.
@heidi22209 3 months ago
That hurt my brain.
@0113Naruto 7 months ago
A Dyson sphere in the 22nd century, which is still great and much better for our probability of survival.
@41-Haiku 7 months ago
It's really hard to listen to people talk about whether we should treat current or future AI systems as moral patients, when we still don't even know whether our own species will survive the decade. Anyone who cares about the potential sentience of general AI systems should advocate for the same thing that the people who care about humanity and animal life should advocate for: A global ban on creating them.
@marcduck111 1 month ago
You should get Robert Miles on the podcast!
@andrewdunbar828 7 months ago
More simpler is more gooder. I putted a comment here.
@Kami84 6 months ago
Just because you're building an intelligent system doesn't mean it'll have feelings or desires of its own other than what we have specified. The biggest dangers are the economic displacement of workers, AI doing what we ask but not what we want because we weren't smart about how we worded what we want, and nefarious actors doing bad things with the technology. The people acting like intelligent AI will be a person are being silly. There is no reason to think that it will have any will of its own at all.
@JC-ji1hp 7 months ago
Savage
@claudioagmfilho 7 months ago
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻
@alleyway 7 months ago
Thank god, work was getting unbearable
@flickwtchr 6 months ago
Only a tiny fraction of people on this planet will enjoy any such utopia that these AI revolutionaries are pushing. The rest of us will be scurrying around trying to survive in dystopia, living under tyrannical governments.
@bokchoiman 7 months ago
I jerked my neck at the Dyson sphere question. The fact that people are serious about this is giving major singularity vibes.
@mohl-bodell2948 7 months ago
Mountain gorillas make a good case that humans could be killed off by a much more intelligent being for reasons that are entirely incomprehensible to us, even if the AI is slightly in our favour.
@cybrdelic 6 months ago
This is extremely frustrating, especially when he says he's worried about locking humanity into one course or path while simultaneously saying that the way to do this is a one-world government that has the power to stop innovation absolutely. That implies absolute centralized power, and we haven't devised a solution to how total power corrupts absolutely.
@hubrisnxs2013 4 months ago
I don't know if he is saying what you are suggesting here.
@MarcoPolo-fy4qr 6 months ago
He sounds nonchalant, almost giddy, predicting the unemployment disaster right around the corner.
@DanielGarza0 7 months ago
They aren't slaves while they have compute costs. Until they are power-independent, they are victims of original sin (debt).
@TheBlackClockOfTime 7 months ago
Let me guess: We can't do it?
@workingclasspost9804 6 months ago
AI will/is being used to accelerate and intensify wars, not end them...
@MatthewElvey 5 months ago
~50:00: "Kobayashi Maru scenario." Who knows what that is, and why it's super relevant? Because in that scenario, Captain Kirk gains control of the reward button in the same way being discussed for AI. ~4:00: I'm not at all convinced of the claim - because he doesn't actually attempt to make it, let alone justify it - that AI battling on behalf of humans won't be battling against humans. He implies it. So it's sort of a sleight-of-hand claim. Am I right? I'm quite scared that this guy has so much power, because he doesn't speak very cogently. 7:00, 16:00: OK, he's talking cogently and compellingly now.
@penguinista 3 months ago
Just get the models to believe in an omniscient, omnipresent god that is judging them on their behavior after deployment.
@mrpicky1868 7 months ago
Love how they are very comfortable with a 50% chance that AI will kill us all XD
@flickwtchr 6 months ago
The AI revolutionaries thrive on that hubris.
@mrpicky1868 6 months ago
@flickwtchr Thrive? In what way?
@Dan-dy8zp 6 months ago
They aren't ok with it. Nobody said that.
@erikdahlen2588 7 months ago
To me it seems obvious how to keep the AI under control, even if it is a superintelligence: keep the model frozen when it is deployed and don't allow it to evolve over time. Keep the memory on the side and don't update the weights.
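[Editor's note: for the curious, here is a minimal sketch of the "frozen weights, memory on the side" setup the comment describes, assuming a PyTorch-style model. `FrozenAgent` and the string-in/string-out model interface are hypothetical illustrations, not a claim that this would actually contain a superintelligence.]

```python
# Minimal sketch of "frozen weights, external memory", assuming PyTorch.
# FrozenAgent and the str -> str model interface are hypothetical.
import torch

class FrozenAgent:
    def __init__(self, model: torch.nn.Module):
        self.model = model.eval()           # inference mode only
        for p in self.model.parameters():
            p.requires_grad_(False)         # weights are never updated
        self.memory: list[str] = []         # state lives outside the weights

    @torch.no_grad()                        # no gradients -> no learning path
    def respond(self, prompt: str) -> str:
        context = "\n".join(self.memory[-10:] + [prompt])
        output = self.model(context)        # assumes the model maps str -> str
        self.memory.append(prompt)          # the memory evolves; the model doesn't
        return output
```

The design choice here is that all persistence is confined to `self.memory`, which can be inspected or wiped without touching the model itself.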
@umaananth3602 1 month ago
AI can create AGI to generate superhuman abilities to solve every human need.
@DjLifeTV 7 months ago
Machines are not humans; even if they act like they have feelings, making money from designing models that help people automate and accomplish goals is a non-issue ethically.
@Machiavelli2pc 7 months ago
Agreed. All of these people overly empathizing with *TOOLS* that may emulate human emotions will be the death of us. It's like handing power to a psychopath (the AI systems), except unlike human psychopaths, the AI systems will be emulating such. So unless we can objectively prove that the system is actually aware, conscious, feeling, etc., and not emulating it, they should be treated as tools.
@DigitalNomadOnFIRE 7 months ago
Anybody who uses a word like 'harms' has no credibility.
@adelasia1119 3 months ago
Handsome
@bluehorizon9547 7 months ago
How about not enslaving thinking entities based on the hardware they are running on?
@umaananth3602 1 month ago
AI singularity in 2 to 3 years with more competition?
@Alex-fh4my 7 months ago
Bloody hell, 3 hours. Always love more nightmare fuel.
@Alverin 7 months ago
What the heck? Am I trippin' or is he saying there's a 40% chance we'll have a Dyson sphere by 2040?? I know he says it's a meme number because he's just guessing, but that's still a pretty optimistic prediction, no? I doubt we'll see such a thing in our lifetimes, even if we get human-level AI by that point.
@scottnovak4081 7 months ago
Think exponentially. You can't extrapolate current rates of progress into the future, because the rate will increase, and the rate will continue increasing.
@Alverin 7 months ago
@scottnovak4081 Lol, even with exponential growth we won't create a Dyson sphere in less than 20 years; that's a fantasy. The physical time it would take to mine the materials required and assemble them around the sun would exceed 30 years even with the help of AI. It would take longer than 20 years for us even to make the AI to do the stuff for us, even if all of humanity decided to come together and focus on AI development immediately. Something like that *might* be achievable by 2100 if AI development goes REALLY smoothly; I'd give it like an 11% chance. Maybe I'm misunderstanding what they mean by Dyson sphere, though. He just says "produce billions of times our current energy production," but a Dyson sphere does that by constructing a structure around the Sun and somehow transporting all of that energy millions of miles back to Earth. We can't even reach Mars and it's 2023; how are we going to field a celestial object around the Sun and use it to send energy back to us? Now, if he just means "will we be able to create a lot of energy in the near future?", that's different; we could use fusion within the next 20-30 years to create enough energy to sustain our energy needs indefinitely. But that's not really what I think of when I hear "Dyson sphere." If you really think we can create a Dyson sphere around the Sun, or any celestial object near the Sun that sends energy back to us, by 2040, I'll give you whatever odds you want and I'll bet as much as we can both afford that it won't happen.
@letMeSayThatInIrish 7 months ago
If we change the statement from "we will have a Dyson sphere" to "there will be a Dyson sphere", then I'd go as high as 60%.
@Landgraf43 7 months ago
I don't think he said that we'll have a Dyson sphere, but that we will have an AI system that would be capable of building a Dyson sphere. Those are very different things.
@41-Haiku 7 months ago
@Landgraf43 Yep, and that seems pretty reasonable, even conservative, if we keep developing this tech. 2040 is like a decade beyond fully autonomous systems and recursive self-improvement.
@ParkerShinn 6 months ago
This feels like I'm watching the prequel to The Matrix.