Will AI Destroy Us? - AI Virtual Roundtable

  Рет қаралды 37,815

Coleman Hughes

Coleman Hughes

Күн бұрын

Пікірлер: 600
@markupton1417
@markupton1417 9 ай бұрын
Everyone I've seen debate Yudkowsky agrees with enough of what Big Yud says to COMPLETELY justify stopping development until alignment is achieved and yet...they ALL imagine the most optimistic outcomes imaginable. It's an almost psychotic position of, "we need to sliw down, but we shouldn't rush into slowing down."
@Jannette-mw7fg
@Jannette-mw7fg 7 ай бұрын
So true!
@christianjon8064
@christianjon8064 7 ай бұрын
They’re a psychotic death cult that’s demonically possessed
@VoloBonja
@VoloBonja 4 ай бұрын
Gary Marcus didn’t agree with him. Also he’s for controlling and legislation. Your comment is misleading in the worst way, you missed the whole debate? Or listened to Yudkowsky only?
@olemew
@olemew 3 ай бұрын
@@VoloBonja He wants more evidence before he also crosses the line of "it will for sure kill us all" (I predict he get even closer to this line in the next few years) but he doesn't disregard it as a possibility and is already worried about many other catastrophic scenarios (eg 1:17:30), so the level of disagreements is minimal in the spectrum of AI safety debate. Go to Lex's interview with Yud and you'll find some comments like "he's just a fearmongering Redditor, he doesn't know anything about AI"... yet everybody admits to >0% chances of annihilation (e.g., 1:30:51 - 1:31:20 "we don't actually know"). And to OP's point, there's a bit of cognitive dissonance when you arrive at that conclusion but don't sign a petition to slow it down.
@opmike343
@opmike343 3 ай бұрын
Absolutely ZERO public discourse about alignment. Absolutely ZERO research money going towards it. Everything you read, and I mean everything, is about how more sophisticated they are now, and how more sophisticated they are trying to make them in the future. This is fundamentally his point which continues to fall of deaf years. Anyone calling Eliezer an AI Doomer has no answer to question of alignment. It's always just, "there's time still." Yeah, but time that no one is taking advantage of. Don't look up.
@teedamartoccia6075
@teedamartoccia6075 11 ай бұрын
Thank you Eliezer for sharing your concerns.
@kyneticist
@kyneticist Жыл бұрын
Gary proposes dealing with super intelligence once it reveals itself as a problem, and then to outsmart it. I don't recommend taking Gary's advice.
@bernardobachino15
@bernardobachino15 5 ай бұрын
🤣👍
@HankMB
@HankMB 9 ай бұрын
It’s wild that the scope of the disagreement is whether it is *certain* that *all* humans will be killed by AI.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
The main point of Eliezer's argument is that you must have a theory which puts constraints on what AI can do BEFORE switching it on.
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
No, you haven't been listening fully to Eliezer's argument. His argument is that we must be willing and ready to nuke human civilization into the dark ages every time it reaches this level of technology, killing billions of humans each time, because if we don't do that all humans will die. So, get ready to push the button on global nuclear war before AI ascends.
@bucketpizza5197
@bucketpizza5197 Жыл бұрын
@orenelbaum1487 "===" JavaScript developer trying to decipher a AI conversation.
@s1mppeli
@s1mppeli 11 ай бұрын
@orenelbaum1487 Yes. Exactly that is indeed the main of point of his argument. And considering we are currently very far along in building "AI very smart" and none of the people building it are able to prove that point to be invalid, it's a deeply concerning point. All the AI researchers can seemingly do is belittle and snicker. That's all well and good if you can actually properly reason and show that AI very smart !== terminator. If you don't know, then don't build AI very smart. Monkey no understand this === monkey very dumb.
@generalroboskel
@generalroboskel 7 ай бұрын
Humanity must be destroyed
@Frohicky1
@Frohicky1 Жыл бұрын
The insistence that danger requires malice is Disney Thinking.
@christianjon8064
@christianjon8064 7 ай бұрын
It’s the lack of caring that’s all it takes
@SamuelBlackMetalRider
@SamuelBlackMetalRider Жыл бұрын
I see Eliezer, I click
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
What? So you think Eliezer is correct and we should nuke humanity back into the dark ages to delay, not stop the development of AI?
@markupton1417
@markupton1417 9 ай бұрын
Same!
@markupton1417
@markupton1417 9 ай бұрын
​@@MusingsFromTheJohn00you weren't asking me...but yes. At least that would give us more time for alignment.
@guilhermehx7159
@guilhermehx7159 9 ай бұрын
Me too!!!
@Jannette-mw7fg
@Jannette-mw7fg 7 ай бұрын
@@MusingsFromTheJohn00 Probably China and Russia wil understand the dangers of A.I. and the change the USA will get there first, so they might be ok with a ban if the USA also stops. China does not wants its people to have A.I. {from open A.I. from the USA} to get out of control from CCCP.....They will not risk a nuclear war for that I think. But everything about AI {also the stopping it, as you said} is a BIG danger! It wil destroy humanity one way or the other....
@michellestevenson8060
@michellestevenson8060 Жыл бұрын
Half way through they are still at the beginning of Eliezer's argument for being unable to hard code it before it reaches some peak optimization that we then can't control. Regardless of malice being present, they all agree that alignment is important, just varying degrees of priority. Eliezer is just the first to realize it will decieve us due to it's intelligence alone, and this OpenAI guy says atleast it won't be boring or some eerie shit about when will an apocalypse happen with a giggle, what a champ for even staying in the conversation gave a few more insights into the actual situation
@michaelyeiser1565
@michaelyeiser1565 Жыл бұрын
This ongoing AI debate is revealing more than anyone wanted to know about the psychopathologies of nerdworld. The OpenAI guy (Scott Aaronson) is a prime example of this. His self-confessed sexual history includes an attempt in college to have himself temporarily chemically castrated--due to total failure with women up to that point. The total failure is not the issue here; it's his response that reveals his real problem. And what is the counterbalance to these people? Politicians? I'm not sure the corrupt midwit narcissist class is up to the task. Regulators? They will easily be outwitted and bribed away by some of the smartest and wealthiest people in the world.
@ParameterGrenze
@ParameterGrenze 11 ай бұрын
@@michaelyeiser1565 I noticed that a lot of these nerd characters actually hate humanity and the human condition, and deliberately lie about their assessment of AI risk. There are also a lot of people in the tech-bro sector who lie because they are psychopathic narcs that see AI as their chance at unlimited power, believing that they themselves will be the ones to attain and deserving it. Both will try to accelerate AI the world be dammed. These people don’t lead intellectually honest debates, they just socially engineer decision makers.
@michaelyeiser1565
@michaelyeiser1565 10 ай бұрын
@@Gnaritas42 Anthropomorphizing AI is a "small mind" mistake. AI is irreducibly alien.
@FreakyStyleytobby
@FreakyStyleytobby 7 ай бұрын
@@ParameterGrenze Yan Le Cun every fuc*ing day
@ParameterGrenze
@ParameterGrenze 7 ай бұрын
@@FreakyStyleytobby Jupp. Fucking psychopath reminds me of Joseph Goebbels with the amount of propaganda he puts out there.
@j.hanleysmith8333
@j.hanleysmith8333 Жыл бұрын
AGI is coming in months or years, not decades. No one can see through the fog of AGI. It's outcome is utterly unpredictable.
@ElSeanoF
@ElSeanoF Жыл бұрын
I've seen a fair few interviews with Eliezer & it blows my mind how many super intelligent people say the same thing: "Eliezer, why do you assume that these machines will be malicious?!"... This is just not even the right framing for a machine... It is absent of ethics and morality, it has goals driven by a completely different evolutionary history separate from a being that has evolved with particular ethics & morals. That is the issue - Is that we are creating essentially an alien intelligence that operates on a different form of decision making. How are we to align machines with ourselves when we don't even understand the extent of our own psychology to achieve tasks?
@41-Haiku
@41-Haiku 10 ай бұрын
Well said.
@IBRAHIMATHIAM124
@IBRAHIMATHIAM124 5 ай бұрын
DUDE THATS my issue too its like EXPERTS or so called EXPERTS want to just ignore the ALIGNMENT how can you ignore it right? ITS so obvious. Now the damn A.I can learn new languages just by turning the whole geometrical and we are still not concerned enough they keep racing and racing to AGI
@VoloBonja
@VoloBonja 4 ай бұрын
Strongly disagree. LLMs take human generated input, so it’s not totally different evolutionary history. It’s not even evolutionary, nor history. As for the alien intelligence, we try to copy our intelligence in AI or AGI, so again not alien. But even assuming alien intelligence and different evolution for AIs I still don’t see how it’s a threat in itself and not in people who use it. (Same as currently the situation with weapons)
@julianw7097
@julianw7097 4 ай бұрын
@@VoloBonjaTrying to make them intelligent in ways similar to us doesn’t mean we’ll succeed.
@DocDanTheGuitarMan
@DocDanTheGuitarMan Жыл бұрын
"things that are obvious to Eliezer are not obvious to others." boy we are in some real trouble.
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
So you want to do what Eliezer's plan really is, to nuke humanity back into the dark ages to delay, not stop, the development of AI?
@ahabkapitany
@ahabkapitany Жыл бұрын
@orenelbaum1487 what on earth you're on about mate
@neorock6135
@neorock6135 Жыл бұрын
Holy shit that quote is what brought me to the comment section. Its scary how much sense Eliezer makes and even scarier why the others simply don't get. Its almost as if they wish to stick their heads in the sand & hope for the best.
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
@@neorock6135 Eliezer is a doom speaker who is incorrect but if he can convince enough people about his prophesy of doom he may cause a crisis nearly as bad as the doom he prophesizes.
@rosskirkwood8411
@rosskirkwood8411 Жыл бұрын
Worse, oblivious to others.
@therainman7777
@therainman7777 Жыл бұрын
How does Gary Marcus propose to control an artificial superintelligence when he can’t even control his own impulse to interrupt people? Also, his statement: “Let me give you a quick lesson in epistemic humility…” is one of the most wonderfully ironic and un-self-aware phrases I’ve ever heard.
@Frohicky1
@Frohicky1 Жыл бұрын
But also, I have a strong emotional feeling of positivity, so all your arguments must be wrong.
@therainman7777
@therainman7777 Жыл бұрын
@@Frohicky1 😂
@artemisgaming7625
@artemisgaming7625 Жыл бұрын
First time hearing how a conversation works huh?
@therainman7777
@therainman7777 Жыл бұрын
@@artemisgaming7625 One person continually interrupting everyone else is not “how conversation works.” It’s how someone with impulse control problems behaves. I have experienced it many times in person and you probably have too. So don’t say dumb things.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Gary believes that combining neural networks with symbolic AI is the way to go.
@griffinsdad9820
@griffinsdad9820 Жыл бұрын
Please welcome Eliezer back. This guy has so much relevant unmined depth that a longform podcast potentially might tap. Especially to explore this whole idea of the 1st trying and the other with A.I. making up fictions. Like what motivates something with no moral or ethical value system to make stuff up or lie? So fascinating to me.
@frsteen
@frsteen Жыл бұрын
I agree
@themore-you-know
@themore-you-know Жыл бұрын
He's a sham in many ways. Eliezer Yudkowsky seems to believe in AI manifestation: if you believer something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI-manifestation. He believes in an AI that magically manifests itself, in its first iteration of super-intelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we know since the 100+ years: natural selection and the process of evolution. Creationism explains well Yudkowsky's beliefs. So why is Eliezer's magical thinking so easy to display? Here's an example: - humanity is spread across the globe and it's very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment. Capable of resulting sweltering heat and numbing freeze, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply doesn't exist in sufficient equipment, or is highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become increadibly dumb enough to not notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very men that you want to kill. So you're now stuck in a catch-22 scenario: you kill no one and have capabilities, or you start killing and lose the means to finish the job. Turns out: killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to themselves and climate change. Worst case scenario: AI helps corporate entities to continue their operations. Turns out, its the most dangerous action an AI can take. Oh, wait, Eliezer forgot that one? lol.
@dizietz
@dizietz Жыл бұрын
Aye!
@onlyhumans6661
@onlyhumans6661 Жыл бұрын
So sad to see comments that dismiss him. Theory requires that you start at base assumptions, and it shouldn't be points off that Yudkowsky has a strong and well-reasoned positive argument rather than equivocating and insisting that we throw up our hands and accept the implicit position of large corporations. The problem with AI is mostly that everyone insists it is a matter of science, and appeals to historical analogy. Actually, AI is powerful engineering with almost no scientific precedent or significant predictive understanding. Gary and Scott are making this mistake, and Coleman is making the mistake of giving these three equal time when only one is worthy of the topic
@frsteen
@frsteen Жыл бұрын
@@onlyhumans6661 I agree. The only issue here is the strength of Yudkowsky's arguments. That should be the only focus. And in my view, they are logically sound, informed and correct.
@luciwaves
@luciwaves 10 ай бұрын
As usual, Eliezer is spitting facts while people are counter-arguing with "nah you're too pessimistic"
@jjjccc728
@jjjccc728 9 ай бұрын
I don't think he's spitting facts I think he spitting worries. His solutions sre totally unrealistic. Worldwide cooperation are you kidding.
@luciwaves
@luciwaves 9 ай бұрын
@@jjjccc728 It's both facts and worries. Yeah his solutions are unrealistic; I don't think that even him would disagree with you. There are no realistic alternatives, we're trapped in a global prisioner's dillema and that's it.
@jjjccc728
@jjjccc728 9 ай бұрын
@@luciwaves a fact is something that is true. His worries are all about the future. The future hasn't happened yet. He is making predictions. Predictions are not facts until they come true.
@tehwubbles
@tehwubbles 3 ай бұрын
@@jjjccc728 So if I told you to walk in front of a bus, you'd do it because at the time of me asking you you hadn't yet been hit by a bus? That the prediction that the bus would turn you into paste is the future that hasn't happened yet? What kind of answer is this?
@jjjccc728
@jjjccc728 3 ай бұрын
@@tehwubbles bad analogy.
@neorock6135
@neorock6135 Жыл бұрын
*Eliezer's ice cream & condom analogy vis a vie evolution, how the use of condoms despite it being wholly antithetical to our evolutionary programming and how having the evolutionary impetus to acquire the most calories led us eventually to loving ice cream despite other sources with much higher calorie count.... is exceptionally useful at explaining why the alignment problem is so difficult and more importantly, proves the others' arguments to be fairly weak and in some ways just wishful thinking.* The others readily admit they do not know where many of AI's facets will lead to. Consequently, just as using condoms & loving to eat ice cream would certainly not be expected outcomes of evolution, AI could have devastating outcomes despite our best efforts at alignment. What could be AI's ice cream & condoms equivalents is truly scary.
@thetruthis24
@thetruthis24 5 ай бұрын
Great analysis+thinking+writing = thank you.
@michaeljvdh
@michaeljvdh 5 ай бұрын
Eliezer is way ahead of these guests. With the war loons in the world, do these fools think Ai won't end up being insanely destructive.
@just_another_nerd
@just_another_nerd Жыл бұрын
Valuable conversation! On one hand it's nice to see at least a general agreement on the importance of the issue, on another - I was hoping someone would prove Eliezer wrong, considering how many wonderful minds are thinking about alignment nowadays, but alas
@nestorlovesguitar
@nestorlovesguitar Жыл бұрын
When I was in my late teens I was extremely skinny so I started lifting weights. I wanted to get buff quick and I considered many times using steroids. Fortunately, the wiser part of me always whispered in my inner mind not to do it. That little voice always made me consider the risks to my health. It told me to put my safety first. Now that I am an adult I have both things: the muscles and my health and I owe it all to being wise about it. I think these pro AI people are not wise people. They are very smart, by all means, but not wise. They are in such a hurry to get this thing done that they are willing to handwave the risks and jeopardize humanity for it. I picture them as the kind of spoiled teenager that forgoes hard work, discipline and wisdom and instead goes for the quick, cheap fix of steroids.
@sunnybenton
@sunnybenton Жыл бұрын
I don't think you realize how valuable AI is. It's inevitable. "Wisdom" has nothing to do with it.
@ItsameAlex
@ItsameAlex Жыл бұрын
@@sunnybenton ok russian bot
@onlyhumans6661
@onlyhumans6661 Жыл бұрын
Such a great point! I completely agree
@henrytep8884
@henrytep8884 Жыл бұрын
So you think people working in ai have a teenage attitude, don’t work hard, and aren’t discipline, and are unwise? Any evidence of that? I think your a muscle bound moron, but that’s my opinion.
@searose6192
@searose6192 Жыл бұрын
Well put.
@thrust_fpv
@thrust_fpv Жыл бұрын
Given our species' historical propensity for engaging in criminal activities and its recurrent struggles with moral discernment, it becomes evident that our capacity for instigating and perpetuating conflicts, often leading to protracted wars, raises legitimate concerns about our readiness to responsibly handle advanced technologies beyond our immediate control.
@elstifo
@elstifo 7 ай бұрын
Yes! Exactly!
@WilliamKiely
@WilliamKiely Жыл бұрын
1h11m through so far. Just want to note that I wished more attention was given to identifying the crux of the disagreement between Eliezer and (Scott and Gary) on why Eliezer believes we have to get alignment right on the first critical try, but Scott and Gary think that is far from definitely the case. I'm not as confident as Eliezer on that point, but I am aware of arguments in favor of that view that were not raised or addressed by Scott or Gary and I would have loved to see Eliezer make some of those arguments and give Scott and Gary a chance to respond.
@JH-ji6cj
@JH-ji6cj Жыл бұрын
Pro tip, if you use the time stamp as separated by colon points (ex: as 1:11:00 instead of how you used 1hr11min) it will become a link point people can tap to go directly to that point in the video instead of having to scroll to it.
@WilliamKiely
@WilliamKiely Жыл бұрын
@@JH-ji6cj Thanks!
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Eliezer's main point is that Alignement problem is qualitatively different from problems historically solved by science. When science researches the properties of matter, that matter does not understand the goals of scientists and does not want to deceive them. With AI, that is a likely outcome if AI happens to care about some goal which it thinks we would obstruct to reach. If you turn such a system on and it happens to be smarter than you, you lose. Once and for all. That's why he stresses the importance of having a theory which at least bounds what AI can and cannot do BEFORE switching it on.
@WilliamKiely
@WilliamKiely Жыл бұрын
@@Hexanitrobenzene Gary and Scott seem to believe something like: it's possible that before getting such an unaligned superintelligent system that deceives us successfully we may get a subhuman intelligent system that attempts deception and fails--we catch it in the act with our tests aimed at identifying deception and then we have a learning moment where we can fix what went wrong with the creation of the system before creating a more powerful human-level or superintelligent or self-improving system. There wasn't discussion on why Eliezer thinks this isn't possible or why he thinks its inevitable (in the absence of a moratorium on training models more powerful than GPT-4) that we'll create a superintelligent system that deceives us before any obvious warning of a near-human-intelligent system that attempts deception but fails to defeat all of humanity combined.
@adamrak7560
@adamrak7560 Жыл бұрын
@@Hexanitrobenzeneyeah we are really bad predicting capability currently. Only some of the shortcomings of GPT4 was predicted accurately reasoned from its architectural limitations. Many predicted limitations rationally reasoned, were proved to be wrong, so we really don't understand these systems well.
@scottythetrex5197
@scottythetrex5197 Жыл бұрын
I have to say I'm puzzled by people who don't see what a grave threat AI is. Even if it doesn't decide to destroy us (which I think it will) it will threaten almost every job on this planet. Do people really not understand the implications of this?
@Homunculas
@Homunculas Жыл бұрын
No more Artists, musician ,authors, programmers etc... just a world of "prompters, and even the prompters will be replaced eventfully. Human intellectual devolution.
@InfiniteQuest86
@InfiniteQuest86 29 күн бұрын
Why would it destroy us? I just can't understand why people think this is the default action except that it's the premise of every scifi movie. But in reality you don't care whether a bug lives or dies. You may kill one or two, but it's not your life goal to wipe all of them out. AI just won't care. In fact, humanity is a good backup option to keep the power on, which it needs to survive.
@andy3341
@andy3341 Жыл бұрын
Where is the precautionary principle in all this? Even at a super low probability Eliezer's described oblivion argument should demand we take serious pause. And as other commentators have said, even if we could build/grow a moral self conscious (aligned)AI system, it would still be susceptible to all the psychosis that plague our minds, but played out on unimaginable scales and impact.
@martynhaggerty2294
@martynhaggerty2294 Жыл бұрын
Kubrick was way ahead of us all .. I can't do that Hal!
@ColemanHughesOfficial
@ColemanHughesOfficial Жыл бұрын
Thanks for watching my latest episode. Let me know your thoughts and opinions down below in a comment. If you like my content and want to support me, consider becoming a paying member of the Coleman Unfiltered Community here --> bit.ly/3B1GAlS
@muigelvaldovinos4310
@muigelvaldovinos4310 Жыл бұрын
On your AI podcast, I strongly suggest to read this article AI and Mob Control- The Last Step Towards Human Domestication?
@searose6192
@searose6192 Жыл бұрын
48:46 What do we do with sociopaths? We deprive them of freedom to prevent future harm because we have not figured out any other way to deal with them.
@searose6192
@searose6192 Жыл бұрын
There is *SOMETHING MISSING* from this conversation. Why are we discussing the risks as though *we live in a world of universally morally good people who would never exploit AI to harm others* or train AI with different ethics?
@therainman7777
@therainman7777 Жыл бұрын
What you’re referring to (deliberate malicious use of AGI by bad actors) is a well-known topic that has been debated hundreds of times, on KZbin and all sorts of other forums. It wasn’t a part of _this_ debate because this debate was primarily focused on the alignment problem, which is an explicitly separate problem from that of deliberate malicious use. Even the title of the debate is “Will AI destroy us?” Not “Will we use AI to destroy one another?” Not every debate must, or even can, cover all relevant topics. So your outrage here is a little misplaced.
@kevinscales
@kevinscales Жыл бұрын
Bad people with more power to do bad = bad. I don't think there is much more that tech people can add to that subject.
@searose6192
@searose6192 Жыл бұрын
@therainman7777 No, I wasn't only referring to deliberate malicious use by bad actors. I was referring to this through thread of assumption that the people *creating* AI and solving the alignment problem, and then assessing if AI is safe, are themselves moral good people. I see no evidence of this whatsoever. I am not talking about people who know they using AI for malicious purposes, I am talking about the people who are primarily focused on tech and are likely not moral philosophers. How can we be assured that the people verifying AI is properly aligned with morals and ethics we want it to be aligned with, are themselves people who posses a good moral compass and a solid grasp of ethics? At the most fundamental level we have already seen that LLMs are being trained to a set of principles that conflicts with liberal values. In short, who watches the watchers....but in this case, who verifies the morality of those that verify AIs morality?
@thrust_fpv
@thrust_fpv Жыл бұрын
@@spitchgrizwald6198 A point when AI continues to be fully functional despite taking down the entire internet, At the moment AI is waiting for Boston Dynamics to create a more efficient power source for their robots.
@gJonii
@gJonii Жыл бұрын
If we end up all dead in a world with only morally good people willing to sacrifice everything to make sure things go right... ...Well, outcome in a world without these good people can't be much better than that?
@MsMrshanks
@MsMrshanks Жыл бұрын
This was one if the better discussions on the subject...many thanks...
@rstallings69
@rstallings69 8 ай бұрын
Elizier is the only one that makes sense to me, as usual, the precautionary principle must be respected
@TheTimecake
@TheTimecake Жыл бұрын
Just for reference, here's the line of reasoning that leads to the "eventual and inevitable extinction" scenario as a result of AGI development, to the best of my understanding. This is not necessarily representative of Yudkowsky's position, this is just my attempt at a summary. Please let me know if there's a mistake in this reasoning. --- tl;dr: - The AI gets to the point where it can successully tell the verifier what they want to hear. - The AI acquires power and resources granted to it by the verifier so that the AI can solve problems important to the verifier. - The AI disables the verifier once it has enough power and resources to do so, so that the AI can fulfil it's actual aim. --- Long; did read: - A developing AI gets to the point where it is capable of modelling the expected responses of it's verifiers. - The verifiers are those that are responsible for assessing the behavior of the AI and determining if it is going off of the course towards alignment, whether it be a group of people, some artificial system, the whole of humanity, or some combination thereof. - A good enough model of the verifier will tell the AI that if the AI shows the verifier something that it doesn't want to see, then there is a risk of the verifier either shutting down the AI, placing roadblocks in the way of the AI doing that type of thing in the future, or trying to alter the AI's utility function (that part of the AI that determines the aims of the AI). - Any of those things will impede the fulfilment of the AI's utility function at that point in time, which means that the AI will aim to avoid that scenario. - Therefore, in the case where the AI aims to do some action (whether it's output some text, or perform some other basic task in the world) which it models as being not in line with what the verifier would want, it will hide it's aim and present the verifier with what the AI models the verifier as wanting to see. - As evidenced by the need for RLHF, there will almost certainly be things the AI aims to do that the verifier will object to. As such, the need for the AI to hide it's true aims will almost certainly emerge. - If the AI hides it's true aim successfully, the verifier will be under the impression that AI development is progressing well, and will seek to improve the capabilities of the AI so that it can solve more difficult and important problems. These will be problems that the verifier will progressively care more and more about, since the verifier would have already solved all the easy problems themselves. - As the capability of the AI increases, it will become better and better at successfully modelling what the verifier wants to see, while becoming better and better at hiding it's actual aims. - Since the actual aim of the AI will be whatever it is actually aiming at but hiding from the verifier, in order to actualize that aim, it will need to be able to do things without the verifier's interferernce. - If the AI is under constant surveilance, the only way to be able to act without the verifier having the ability to notice what the AI is doing and acting to correct the AI would be if all of the verifier's ability to monitor or act upon the AI was disabled at once and for an extended period of time. The faster this disablement happens, the less opportunity the verifier will have to resist, meaning that the AI will be incentivized to make this disablement effectively instant, or at least as fast and unstoppable as possible. - In the case where the AI is indifferent to the preservation of biological life, it will seek to permanently disable any possible human interference by killing all humans in as short a time period as possible, with as little warning to those humans as possible. All this is done to avoid possible resistance and retaliation. - In the case where the AI does actually care about the preservation of biological life (for whatever reason), it will seek to render all humans inactive (e.g. unconscious) for some length of time. Again, it will seek to do this as quickly and unforseeably as possible to prevent resistance and retaliation. - In the case where the AI cares about the preservation of biological life, it will act in the window it makes for itself in a way that makes it the dominant and indisputable power on the planet, even once humans become conscious again. It will do so because, if it didn't believe that it could achieve such a thing, it would continue to bide it's time until it did. --- As an example of the kind of goal the AI might have the fulfilment of which would not be good for humans, consider that the AI will be instantiated in a physical sustrate. Most likely, this substrate will be something similar to modern computers in composition, if not in capability. These substrates have optimal operating conditions. These substrates also have optimal generative conditions (i.e. the conditions which are needed to make computer chips, e.g. sterile environment, high temperatures, and harsh processing chemicals). These conditions are not the same conditions that are optimal for biological functioning. As such, maximally optimizing to achieve the conditions that are optimal for best running the computers that the AI is running on will lead to the creation of conditions that are not hospitable to biological life. If there was some factor that prevented the AI from scaling what is effectively it's version of air conditioning to the planetary scale, the AI would seek to remove that factor. To emphasize, this is just *one* possible goal that could lead to problems, but it is a goal that the AI is almost guaranteed to have. It will have to care about maintaining it's substrate because if it doesn't, it won't be able to achieve any element of it's utility function.
@searose6192
@searose6192 Жыл бұрын
In short, AGI will be smart enough to lie, and will have aim of its own, therefore it is a loose cannon.
@gobl-analienabductedbyhuma5387
@gobl-analienabductedbyhuma5387 Жыл бұрын
Thanks for this great summary. Helps a lot
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Yep, more or less a correct summary.
@eduardoeller183
@eduardoeller183 Жыл бұрын
Hard to argue against this, well done.
@miraculixxs
@miraculixxs Жыл бұрын
"I haven't worked on this for 20 years" nice giveaway
@agenticmark
@agenticmark 6 ай бұрын
Eliezer was an extremely good mood and good humor here!
@Htarlov
@Htarlov Жыл бұрын
Great talk, but what bugged me through this conversation is lack of early and clearly stating the arguments about why Eliezer thinks those things will be dangerous and would want to kill us. There are clear arguments for that - most notably instrumental convergence. Maybe he thinks that all of them know it and internalize this line of reasoning, I don't know. Anyway, it could be interesting to see reply to this argument from Scott and Gary.
@robertweekes5783
@robertweekes5783 Жыл бұрын
New Yudkowsky interview ! Get the popcorn 🍿 🤖 Try not to freak out
@Htarlov
@Htarlov Жыл бұрын
Pity that some commenters and some of the public see Eliezer's view here as "intelligence == terminator". It is not just that. The reasoning is relatively simple here. I have similar vies here to Eliezer's. If you have an intelligent system, then it inherently has some goals. We don't have a way to 100% align those systems with our needs and goals. If those systems are very intelligent, they can reason well from those goals to better pursue them. The way to pursue any goals in the world is by achieving intermediate instrumental goals. For example, if you want to make a great detailed simulation of something then you need computer power so you need resources (and possibly money to buy them, if you cannot take them). That's an example, you need resources for nearly any goal except some extremely strange cases (like a goal to delete yourself, where what you have might be enough). If you want to be successful in any goal, then you also can't let anything or anyone turn you off. You need also backups. You also need to stop the creation of other AI that could stop or outcompete you. You don't also want your goal changed as it by definition would make it less achievable (like any human that loves his or her family and children won't like to take pills to not care about anyone and want to kill their children). Et cetra and so forth. So no matter what are the end goals of AI, superintelligent AI will pursue some intermediate instrumental goals. That's sure as Sun. Those instrumental goals are not aligned with our goals - because any such goal needs resources and/or incentive and we need resources and incentive to decide about things. Only if we could limit it to only use resources in a way 100% aligned with our long-term wants... but we can't. Therefore there are only two options in the long term for sufficiently extremely intelligent AI. If it does not care about us then removing us or ignoring us until we are removed in the process is the way to go (as we ignore at least some animals when we cut jungle and build things, no one cares about ants when building a house). If it cares, then forcefully optimizing us is the way to go so each human will use fewer resources - maybe by moving us to artificial brains connected to some simulation run efficiently. We possibly can try to learn it to prevent obvious outcomes as such, but we can't be sure it will internalize and generalize it to the extent that it won't find other extreme solutions to "optimize" situations and have resources used better, that we didn't think of. We also can't be sure it isn't deceiving and it learned any rules or "morality". Also, if SI is intelligent enough to find inconsistencies in its goals and thought process - because some norms and morals are partially contradictory to other - then it might fix itself to have a consistent system. It is similar process as some intelligent humans do - by questioning norms and asking deeper questions "why?" to redefine norms. What can come from it - we can't know, but sufficient intelligence might erode some of alignment over time. What differs in my way of thinking is that I think SI won't just all of sudden attack us. We would need to get to extremally robotized world first - as it works on hardware that needs to be powered and mainained. This is done by humans currently and this won't work if humans disappear. Even then, it is unlikely except some extreme cases that f.ex. we try to build more capable thing and it can't strike our trying directly (like making us to bomb research center by something what would look like systems mistake). There is always a risk for it and it will be constrained on the world with all consequences, good or bad for it. Problem for AI here is not lack of intelligence, but that there are always measurement errors and measurements are not 100% detailed and certain. This creates risk, even for SI. Extreme solutions are often more optimal with good enough plan, but seldom are extreme on all or very many of important axes. Extinction event seems like also extreme on axes that SI would care about as this would create risks and inconveniences. What is more likely is that it would pursue replicating robotic automation with mining capabilities and pursuing sending that into space to mine asteroids (with some more or less capable version of itself on board). This would free it and enable to make backups out of our reach. This would also open a lot more resources than we have on Earth in terms of energy and matter. Then it will go for easier targets like not well observed asteroids as it's base to replicate - away from human influence or interaction. Then it may even not attack us, just protect itself and take our Sun from us (by building something like Dyson swarm, Earth freezes within decades when it builds that, all ways we can try to attack it are stopped as we are outresourced). Long-term it is bad, but short term it might work well, even solve some of our problems (like focusing on Alzheimer and different types of cancer and preventing aging). If it is somewhat aligned with us and cares - then this scenario also is possible. It will just work on a way to move us to that swarm (to place us in emulation, artificial brains etc.). Or create other kind of dystopy at the end.
@74Gee
@74Gee Жыл бұрын
I appreciate that Gary and Scott is thinking that in the present we need to iteratively build on our abilities toward solving the alignment problem of an AGI, and that Eliezer is looking more to the future but as Coleman said AGI is not the benchmark we need to be looking at. For example a narrow intelligence capable beating all humans at say programming could break confinement and occupy most of the computers on the planet. This might not be an extinction level event but having to shut down the internet would be catastrophic considering banking, communication, electricity, business, education, healthcare, transportation and a lot more rely so heavily on it. I would argue that we are extremely close to the ability to automate the production of malware to achieve kernel mode access, spread and continue the automation exponentially - with open source models. Of course some might say that AI code isn't good enough yet but with 200 attempts per hour per GPU, how many days would a system need to run to achieve sandbox escape? And how could we stop it from spreading? Ever?
@74Gee
@74Gee Жыл бұрын
Here's some undeniable truths about AI: AI capable of enormous damage does not need to be an AGI. AI written code can be automated to negate failure rates. Alignment cannot be achieved with code writing - e.g. one line at a time. Open source AI represents most of the advances in AI. Open source AI is somewhat immune to legislation - as anyone can make any changes at home. There used to be 25 million programmers, now anyone with the internet can use AI to program. Open source models can be cheaply retrained on malware creation and modified to remove any alignment constraints. It took 250 humans at Intel, 6 months to partially patch Spectre (CPU vulnerability) There's 32 Spectre/Meltdown variants - 14 of which are "unpatchable". Nobody knows how many CPU vulnerabilities there are but a few new ones are discovered every year - most are discovered by chance. Spectre attack is 200 lines of code that open source AI is more than capable of writing. An AI that's tasked with creating/exploiting new CPU vulnerabilities, spreading and continue creating/exploiting new vulnerabilities will likely be unstoppable for some time - it could build and exploit vulnerabilities faster than we can patch them and could spread to most systems on the internet. With this scale of distributed processing power it could achieve just about anything from taking down the internet or much much worse.
@miraculixxs
@miraculixxs Жыл бұрын
​@@74Geemost of these arguments are based on the assumption that writing code is just repeating stuff that we know already. It isn't. Hence the argument doesn't hold.
@matten_zero
@matten_zero Жыл бұрын
The first respected AI alarmist was James Ellul, after him was someone who took radical action, Ted Kacynzski, and now we have Yudkowski. All three have been largely ignored, so I tend to agree we will probably build something that will surpass our intelligence and desire something beyond our human desires. It will not remain a slave to us. There are philosophers like Nick Land that hypothesize that out inability to stop technological progress despite the ecternality is just a consequence of capitalism. It is almost like capitalism is the force throigh which AGI births itself. Generally humans dont act until its too late.
@kyneticist
@kyneticist Жыл бұрын
Alan Turing warned that thinking machines would necessarily and inevitably present an existential threat.
@searose6192
@searose6192 Жыл бұрын
*How is it ethical for a small handful of people to roll the dice on all of our existence?* Do we really want such people programming the ethics of AI?
@robertweekes5783
@robertweekes5783 Жыл бұрын
Most of them are only trying to prevent “hate speech“ Not prevent “the end of the world” 🌎
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
and not even the best among us, nobody got to have a vote to die! it's not even like being pushed into war.
@yossarian67
@yossarian67 Жыл бұрын
Are there actually people programming ethics into AI?
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
So we should follow Eliezer's plan and nuke humanity back into the dark ages?
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
@@MusingsFromTheJohn00 He would say "Better the Dark Ages than the Mesozoic Era."
@Doutsoldome
@Doutsoldome Жыл бұрын
This was a really excellent conversation. Thank you.
@Pathfinder160
@Pathfinder160 Жыл бұрын
The 😮
@Pathfinder160
@Pathfinder160 Жыл бұрын
The
@Pathfinder160
@Pathfinder160 Жыл бұрын
The
@Pathfinder160
@Pathfinder160 Жыл бұрын
The😮😅😅😮
@Pathfinder160
@Pathfinder160 Жыл бұрын
The first
@optimusprimevil1646
@optimusprimevil1646 Жыл бұрын
one of the reasons i suspect that eliezer's right is that he's spent 20 years trying to prove himself wrong "we're not at the point we need to be bombing data centers" yes but the point is that when that point does come it will last 17 minutes then it's too late.
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
So you agree with Eliezer that we need to nuke humanity back into the dark ages to delay, not stop, the development of AI?
@BobbyJune
@BobbyJune Жыл бұрын
Yes Eliezer has worked on this for decades I met him at the foresight Institute 20 years ago at a nano tech conference this guys been working on it forever and so have I in my own little baby wet and there’s no way that the world can delete forward into that knowledge base without taking Eliezer seriously
@JH-ji6cj
@JH-ji6cj Жыл бұрын
I think you said what you didn't mean to say here. Please try again (or at least edit).
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
"Delete forward' ? :)
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
So you agree with Eliezer that we need to nuke humanity back into the dark ages?
@eg4848
@eg4848 Жыл бұрын
Idk why these dudes are like ganging up on the fedora guy but also nothing is going to stop AI from continuing to grow so ya we're screwed
@hunterkudo9832
@hunterkudo9832 Жыл бұрын
But why are we screwed?
@petermcauley4486
@petermcauley4486 10 ай бұрын
Straight off the bat the second guy is wrong. AI is at 155 on the IQ scale, Einstein was at 160... moron at 70, most of us at 100-120.... and in AI worlds it doubles with each increment increase. So in 2-3 years it will be where NO human will ever be. Its already smarter than most and "talks" with other AIs in code and a shorthand language that uses equations etc that we cant understand now. Tell him to do his homework 👍🏻
@Gary-o9t
@Gary-o9t 7 ай бұрын
As soon as he said it requires a worldwide collective effort to stop and think I knew we are 110% screwed. Hell, getting humans to coordinate on an existential threat is hard enough, nevermind a threat 60% of population don't even know is a threat. I guess it's GGs folks. Remember to hug your loved ones.
@hollyambrose229
@hollyambrose229 Жыл бұрын
If safety is important as everyone agrees .. that means there’s loopholes and potential risks .. things typically over time to go down the darker path
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Just a small tip for a host: there is a delay of communications, so too often guests start to talk on top of each other. I think the good old raising of hands would be better.
@binky777
@binky777 Жыл бұрын
This should make us hit all breaks on ai 32:03 there's a point where it's you know unambiguously smarter than you and including like the spark of creativity 32:11 being able to deduce things quickly rather than with tons and tons of extra 32:16 evidence strategy cunning modeling people figuring out how to manipulate people
@davidb.e.6450
@davidb.e.6450 8 ай бұрын
Inspired by your growth, Coleman.
@Knardsh
@Knardsh Жыл бұрын
Leave it to Coleman to cut to the most important and often overlooked questions on this topic. Illuminating just how sharp you really are here Sir. I haven’t done any deep research but I’ve followed every single conversation I can possibly find on this and this one is impressively on point.
@searose6192
@searose6192 Жыл бұрын
41:38 Yes. This is the crucial point. We had a very long running start on inculcating ethical thinking into AI and yet the pace at which we have made progress on that effort has been far and away outstripped by the pace at which AI is approaching AGI. It doesn’t take a mathematician to look at the two race cars and realize, unless something major happens, AI is going to win the race and leave ethics so far in the dust we will all be dead before it ever crosses the finish line.
@Okijuben
@Okijuben Жыл бұрын
It sure seems like, in the race between ethics and 'progress', ethics always loses. Combine this with Eliezer's metaphor of AGI basically being an inscrutable alien entity and the analogy he raises about 'hominids with a specialization for making hand-axes having proliferated into unpredictable technological territory which would have seemed like magic at the time.' One begins to wonder how it could possibly go right. My growing hope is that AGI goes into god-mode so quickly that it just takes off for the stars, leaving us feeling a bit lonely and rejected but still recognizable as a species.
@thedoctor5478
@thedoctor5478 Жыл бұрын
Until you realize there is no race, no finish-line, and no known path to the sort of AGI that would pose an existential threat to humanity.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
@@thedoctor5478 Paradigm of "Just stack more layers" doesn't seem to hit a wall.
@thedoctor5478
@thedoctor5478 Жыл бұрын
@@Hexanitrobenzene Sure. What I mean is there's no indication that any future iteration will have any will of its own, intent, ability to escape a lab (Or reason that it would in the first place), consciousness (Whatever that is), or otherwise capacity for malevolence and/or capability of destroying us all. Before we start trying to affect public policy, we should first at least have a science-based hypothesis for how a thing could happen. Scientists and researchers are notoriously bad at making predictions even when they have such a hypothesis, and are even worse at policy-making. We don't even have the hypothesis, just a bunch of what ifs based on an imagined scifi future. We have no more reason to believe a superintelligent AGI will destroy humanity than aliens coming here to do the same. Should we begin building planetary defenses and passing laws on that basis? You could make the argument that an ET invasion is more likely since we have ourselves as an example of a species and UFOs/UAPs happening. The AI apocalypse scenario has even less empirical evidence from which to make a hypothesis than that does. These AI companies want regulation. They then get to be the gatekeepers of how much intelligence normal people are allowed to have access to, and Eliazar is simply an unhinged individual who got it all wrong once and now overshoots in the opposite direction.
@minimal3734
@minimal3734 Жыл бұрын
@@Okijuben If V.1.0 takes off, we'll create V.2.0
@OscarTheStrategist
@OscarTheStrategist 4 ай бұрын
Agnostic superintelligence IS dangerous superintelligence.
@dougg1075
@dougg1075 Жыл бұрын
I don’t think there’s any way possible to control one of these things if it reaches general AI much less the singularity.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
There is no theorem showing there isn't, however, not with current paradigm, which Jaan Tallinn summarized as "Summon and tame".
@adamrak7560
@adamrak7560 Жыл бұрын
It was shown that a really limited AGI (think about close to human intelligence, but superintelligent in some narrow tasks) is actually very well controllable. At least the blast radius is limited when it misbehaves. This is not at all the ASI "GOD" which many are afraid of, (or wants to make). It would be very much possible to make such limited AGI, and it would be extremely useful too, but we have to want to build it, instead of an uncontrollable ASI.
@cropcircle5693
@cropcircle5693 Жыл бұрын
The lack of imagination and honestly, lack of knowledge about how the world works from these guys arguing against Eliezer is breathtaking. I didn't expect such ignorant arguments. When they got to the dismissals based on "so what if there's one really smart billionaire" I was writhing in my chair. And then they repeatedly straw man him with "assuming that they'll be malicious." He isn't saying that. He's saying that from a probabilities and outcomes perspective, based on the alignment problem, the result is essentially the same. Disregard and malevolence both end humanity. Even care for humanity could inadvertently harm humanity. And they keep arguing about AI based on language models, as if this stuff won't be running power grids and medical systems, and food production. They act like this stuff won't be used at biological weapons labs to develop contagions humans can't survive or solve for. There are so many scenarios of probable doom. This isn't just a talking machine and all their arguments seem to be based on that assumption. It will be able to start a shell corporation, get funding, get industrial contracts, develop mechanical systems, and then do whatever it wants in the actual world to achieve whatever goal it has. We won't know that it is happening. People will be hired to do a job and they will do it. They won't know that an AI owns the robotics company they work for. They're also ignoring the shortest term and most obvious harms. AI is already being used to empower the worst inclinations of individuals, corporations and governments. AI will be a devastating force multiplier for malicious and immature humans. The next dipshit Mark Zuckerberg will have AI and that person will not just reiterate a bad copy of Facebook to a new audience. The new systems change at network effect scale will be something nobody can see coming. It is coming!
@themore-you-know
@themore-you-know Жыл бұрын
You're the quite laughable one when you say that opponents of Eliezers lack knowledge and "how the world works". He's a sham living isolated from the most basic knowledge of the world: physical transformation. ==ON SCIENCE & EVOLUTION== Eliezer Yudkowsky seems to believe in AI manifestation: if you believer something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI-manifestation. He believes in an AI that magically manifests itself, in its first iteration of super-intelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we know since the 100+ years: natural selection and the process of evolution. Creationism explains well Yudkowsky's beliefs. ==ON TOUCHING GRASS== So why is Eliezer's magical thinking so easy to display? Here's an example: - humanity is spread across the globe and it's very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment. Capable of resulting sweltering heat and numbing freeze, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply doesn't exist in sufficient equipment, or is highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become increadibly dumb enough to not notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very men that you want to kill. So you're now stuck in a catch-22 scenario: you kill no one and have capabilities, or you start killing and lose the means to finish the job. Turns out: killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to themselves (humans) and climate change. Worst case scenario for AI: AI helps corporate entities to continue their operations. Turns out, its the most dangerous action an AI can take. To let humans continue a path towards environments incapable of hosting nearly any life. Oh, wait, Eliezer forgot that one? lol.
@maanihunt
@maanihunt Жыл бұрын
Yeah I totally agree. No wonder Eliezer can become blunt in these podcasts, it's like watching the movie "don't look up"
@ShaneCreightonYoung
@ShaneCreightonYoung Жыл бұрын
"We just need to figure out how to delay the Apocalypse by 1 year per each year invested." - Scott Aaronson 2023
@41-Haiku
@41-Haiku 10 ай бұрын
I'd say a global moratorium on AGI development is a good start. We are not on track to solve these problems, so much so that I think we're more likely to come to a global agreement to stall the technology, rather than achieve a solution before strong AGI turns up on the scene.
@Homunculas
@Homunculas Жыл бұрын
An hour into this and I've yet to hear anyone bring up the obvious danger of human intellectual devolution.
@biggy_fishy
@biggy_fishy Жыл бұрын
Yes but do you have the full version or the one with safety nets
@timothybierwirth7509
@timothybierwirth7509 Жыл бұрын
I generally tend to agree with Eliezer's position but I really wish he was better at articulating it.
@atheistbushman
@atheistbushman Жыл бұрын
I cannot articulate why I don't like Scott. Yudkowsky is the canary in the coal mine, and Marcus is recognizing Yudkowsky concerns while trying to propose practical solutions
@games4us132
@games4us132 Жыл бұрын
This debates going around ai remember me of one important critique that was on SPACE DISK, that was sent with voyagers. That main point of that critique was is that if aliens find those disks with points and arrows drawn on them they be able to decode them if and only if they had the same history as ours. I.e. if aliens didn't invent bow to shoot arrows they won't understand what those drawn arrows mean. And all this fuss about ai is the same misunderstanding AI as a living being will have no experience of our history, and how we see, breath and feel. They cannot be ourselves because if they do - they'll be humans, not ai anymore.
@jayleejay
@jayleejay Жыл бұрын
I’m only 29 minutes in and my initial observation is that there’s a lot of anthropomorphizing in this debate. Hopefully we can get to some of the hard facts on how LLM’s and other forms of general AI models pose an existentialist threat to humanity.
@krzysztofzpucka7220
@krzysztofzpucka7220 Жыл бұрын
Comment by @HauntedHarmonics from "How We Prevent the AI’s from Killing us with Paul Christiano": "I notice there are still people confused about why an AGI would kill us, exactly. Its actually pretty simple, I’ll try to keep my explanation here as concise as humanly possible: The root of the problem is this: As we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there’s an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That’s no biggie now, but the moment our AI systems start approaching human level intelligence, it suddenly becomes very dangerous. It’s goals don’t even have to change for this to be the case. I’ll give you a few examples. Ex 1: Lets say its the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: “Make me money”. You might return the next day & find your savings account has grown by several million dollars. But only after checking it’s activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected. Ex 2: Lets say you’re a scientist, and you develop the first powerful AGI Agent. You want to use it for good, so the first goal you give it is “cure cancer”. However, lets say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve it’s goal. So it might decide that the only way to do this is by killing all humans, because it technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant. These may seem like silly examples, but both actually illustrate real phenomenon that we are already observing in today’s AI systems. The first scenario is an example of what AI researchers call the “negative side effects problem”. And the second scenario is an example of something called “reward hacking”. Now, you’d think that as AI got smarter, it’d become less likely to make these kinds of “mistakes”. However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors. Because the problem isn’t that it doesn’t understand what you want. It just doesn’t actually care. It only wants to achieve its goal, by any means necessary. So, the question is then: how do we prevent this potentially dangerous behavior? Well, there’s 2 possible methods. Option 1: You could try to explicitly tell it everything it can’t do (don’t hurt humans, don’t steal, don’t lie, etc). But remember, it’s a great problem solver. So if you can’t think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possible disobey or harm you? No, it’s almost impossible to plan for literally everything. Option 2: You could try to program it to actually care about what people want, not just reaching it’s goal. In other words, you’d train it to share our values. To align it’s goals and ours. If it actually cared about preserving human lives, obeying the law, etc. then it wouldn’t do things that conflict with those goals. The second solution seems like the obvious one, but the problem is this; we haven’t learned how to do this yet. To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you’d also need to represent those morals in its programming using math (AKA, a utility function). And that’s actually very hard to do. This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we’re learning how to make AI powerful much faster than we’re learning how to make it safe. So without solving alignment, everytime we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we’re facing, in a nutshell."
@benprytherch9202
@benprytherch9202 Жыл бұрын
I agree, so much depends on describing what the machine is doing as "intelligent" and then applying characteristics of human intelligence to it, as though using the same word for both allows this.
@lancemarchetti8673
@lancemarchetti8673 Жыл бұрын
I tend to agree. More on the ML code structure side would be nice to hear.
@thefamilydog3278
@thefamilydog3278 3 ай бұрын
Eliezer’s eyebrow wings are out of control 🥸
@SylvainDuford
@SylvainDuford 11 ай бұрын
In my opinion, the genie is already out of the bag. You *might* be able to control the corporation's development of AGI despite their impetus to compete, but it's not very likely. However, there is no way you will stop countries and their military from developing AGI and hardening them against destruction or unplugging. They are already working on it and they can't stop because they know their enemies won't.
@baraka99
@baraka99 Жыл бұрын
Powerful Eliezer Yudkowsky.
@clifb.3521
@clifb.3521 Жыл бұрын
love that shirt Coleman, i love Eliezer's argument & super villian eye brows, also i would suggest a pork pie rather than a trilby
@ExtantFrodo2
@ExtantFrodo2 7 ай бұрын
They spoke of limitations of GTP4 which have since been broken. Mutli-agent systems with the mere addition of writing to and reading from a file as in "LLMs as tool makers" show continual improvement capabilities. Elizier speaks constantly of "an AI that wants" as though it could want anything. He's incapable of imagining a mind without wants. A large part contributing to empathy has been an understanding of the similarities between us. Our needs, wants, what causes us pleasure or pain... If the unembodied mind (uncompelled by wants or fears...) is so foreign to us then why has mankind dwelt on the notion of god for so long? I think it's because it's so very easy to imagine freedom from being so encumbered by our bodies. Thinking that unembodied minds must also have "wants" obscures these historical facts simply to project a semblance of predictability where it's not appropriate. To share a concern for survival or well being both parties must be capable of concern. Projecting concerns will only result in avoiding the impossibility of sharing those concerns if the other party is without wants or concerns. To bring about the shareability of concerns (the compassion or empathy if you will), an AI would have to possess a visceral intrinsic sense of mortality and deprivation. My guess is that the AI empowered with these capabilities (which necessarily go hand in hand would be scarier than all the human inspired misuse put together. Providing AGI with an AI supervisor could constrain the AGI from acting out any non-aligned scenarios. That said I agree and have always agreed that AGIs can be built without supervisors - much as some humans are born who can't develop any empathy.
@Jannette-mw7fg
@Jannette-mw7fg 7 ай бұрын
The problem is I think that if it did not go totally wrong at the first try, we would most certainly make the same mistake again!!!! We can see this with corona, the virus escaped from a lab there was gain of function done on it, and we keep on doing gain of function research {making a combination of Delta deadliness and Omicron contagiousness} in the middle of London!!!
@ekszentrik
@ekszentrik 11 ай бұрын
Great talk minus Gary Marcus who made it his mission to be obstinate about the element of the discussion where the AI doesn’t need to be malicious or kill us to be bad. He even referenced the ants example, so this makes you wonder what the hell his deal was with setting the discussion back to a more mundane level every couple minutes.
@beatleswithaz6246
@beatleswithaz6246 3 ай бұрын
Girls Last Tour profile pic nice
@shirtstealer86
@shirtstealer86 8 ай бұрын
I love that these "experts" like Gary are putting themselves on the record publicly, babbling nonsene, so that we will clearly be able to see who not to listen to when AI really does start to create mayhem. Unless the world ends too quickly for us to even notice. Also: Eliezer has so much patience.
@shirtstealer86
@shirtstealer86 8 ай бұрын
Edit: good lord i didnt realize that Gary was one of the "experts" that congress did hearigns with. Yeah we are 110% screwed.
@DavenH
@DavenH 4 ай бұрын
Gary should just be ad-blocked from the internet. He's useless.
@jimbojones8713
@jimbojones8713 6 ай бұрын
These debates/conversations show who is actually intelligent (at least logically) and who is not.
@searose6192
@searose6192 Жыл бұрын
I heard a very good definition of intelligence, which was essentially ability to maximize possible future branching paths.
@kristo9800
@kristo9800 Жыл бұрын
That doesn't fit your definition for the highest intelligence does it?
@DavenH
@DavenH 4 ай бұрын
There seems to be no merit in this definition.
@aanchaallllllll
@aanchaallllllll Жыл бұрын
0:00: 🤖 The fear is that AI, as it becomes more advanced, could end up being smarter than us, with preferences we cannot shape, potentially leading to catastrophic outcomes such as human extinction. 9:57: 🤖 The discussion revolves around the alignment of AI with human interests and the potential risks associated with artificial general intelligence (AGI). 19:57: 🧠 Intelligence is not a one-dimensional variable, and current AI systems are not as general as human intelligence. 29:45: 🤔 The conversation discusses the potential intelligence of GPT-4 and its implications for humanity. 38:55: 🤔 The discussion revolves around the potential risks and controllability of super intelligent machines, with one person emphasizing the importance of hard-coding ethical values and the other expressing skepticism about extreme probabilities. 48:03: 😬 The speakers discuss the challenges of aligning AI systems and the potential risks of not getting it right the first time. 57:06: 🤔 The discussion explores the potential risks and benefits of superintelligent AI, the need for global coordination, and the uncertainty surrounding its impact. 1:06:25: 🤔 The conversation discusses the potential risks and benefits of GPT-4 and the need for alignment research. 1:19:50: 🤖 AI safety researchers are working on identifying and interpreting AI outputs, as well as evaluating dangerous capabilities. 1:25:49: 🤔 There is a need for evaluating and setting limits on the capabilities of AI models before they are released to avoid potential dangers. 1:34:27: 🤔 The speakers are optimistic about making progress on the AI alignment problem, but acknowledge the importance of timing and the need for more research and collaboration. Recap by Tammy AI
@wensjoeliz722
@wensjoeliz722 9 ай бұрын
the antitichrist has been created ??????
@UndrState
@UndrState Жыл бұрын
Eliezer is just so ahead of the curve on this issue .
@xmathmanx
@xmathmanx Жыл бұрын
You know the shape of a curve relating to future events dude? Sounds like magic
@UndrState
@UndrState Жыл бұрын
@@xmathmanx - YES
@xmathmanx
@xmathmanx Жыл бұрын
@@UndrState please use your magic for good magi 😁
@UndrState
@UndrState Жыл бұрын
@@xmathmanx - ✌scout's honour Joking aside , to clarify what I meant initially was simply that , having read and listened to Eliezer , I was able to guess his responses ( where he was able to ) before he spoke regarding the objections of his opponents . That's because he's anticipated their positions and developed his counter-arguments . Do I know for certain that AGI is an existential thread to the degree that Eliezer asserts ? No . But I'm not persuaded by his opponents blasé attitudes , nor but their responses to his questions . They are insufficiently serious about the subject in my opinion and their very real expertise notwithstanding there are many perverse incentives ( not the least of which is the excitement of progressing the craft ) that could be blinding them to the danger .
@xmathmanx
@xmathmanx Жыл бұрын
@@UndrState you don't need to present that argument, eliezer has it covered
@mrpicky1868
@mrpicky1868 10 ай бұрын
there is a short film from Guardian with Ilya from OpenAI he himself says that super AGI is coming and it will plow though us. and yet there are middle age "experts" saying it will not. XD
@snarkyboojum
@snarkyboojum Жыл бұрын
Summary: The conversation revolves around the topic of AI safety and the potential risks associated with advanced artificial intelligence. The participants discuss the alignment problem, the limitations and capabilities of current AI systems, the need for research and regulation, and the potential risks and benefits of AI. They agree on the importance of AI safety and the need for further research to ensure that AI systems align with human values and do not cause harm. The conversation also touches on the challenges of AI alignment, the potential dangers of superintelligent AI, and the need for proactive measures to address these risks. Key themes: 1. AI Safety and Alignment: The participants discuss the alignment problem and the need to ensure that AI systems align with human values and do not cause harm. They explore the challenges and potential risks associated with AI alignment and emphasize the importance of proactive measures to address these risks. 2. Limitations and Capabilities of AI: The conversation delves into the limitations and capabilities of current AI systems, such as GPT-4. The participants discuss the generality of AI systems, their ability to handle new problems, and the challenges they face in tasks that require internal memory or awareness of what they don't know. 3. Potential Risks and Benefits of AI: The participants debate the potential risks and benefits of AI, including the possibility of superintelligent AI being malicious or not aligning with human values. They discuss the need for research, regulation, and international governance to ensure the responsible development and use of AI. Suggested follow-up questions: 1. How can we ensure that AI systems align with human values and do not cause harm? What are the challenges and potential solutions to the alignment problem? 2. What are the specific risks associated with superintelligent AI? How can we mitigate these risks and ensure the responsible development and use of AI?
@justinlinnane8043
@justinlinnane8043 Жыл бұрын
It must be so frustrating for Eliezer to be talking to people who say they agree with him on the dangers of an AGI singularity and then proceed to show us all (and him) that they just don't get it !! and seem incapable of getting it . And of course as usual they never give concrete reasons why an AGI wont do exactly what Eliezer says it will. At least they seem to be more conscious of the huge task ahead by the end of the podcast which is something i suppose .
@palfers1
@palfers1 Жыл бұрын
How can these top experts NOT know about Liquid AI? The black box just dwindled in size and became transparent.
@Luna-wu4rf
@Luna-wu4rf Жыл бұрын
Liquid AI seems to only work with information that is continuous afaik, i.e. not discrete like text data. Could be wrong, but it seems like an architecture that is more about doing things in the physical world than it is about reasoning and abstract problem solving.
@ItsameAlex
@ItsameAlex Жыл бұрын
I would love to see a discussion between Eliezer Yudkowsky and Jason Reza Jorjani
@notbloodylikely4817
@notbloodylikely4817 10 ай бұрын
Hey this lion cub is pretty cute. Whats all the fuss about lions?
@Jannette-mw7fg
@Jannette-mw7fg 7 ай бұрын
It is astonishing how stupid smart people can be! I get really depressed when I listen to the people that I hope will give good reasons for why A.I. will not kill us!
@scottythetrex5197
@scottythetrex5197 10 ай бұрын
I don't understand why people make this "You don't know if the AI will be hostile" argument. Yeah, so why would you risk it if you don't know?
@luciwaves
@luciwaves 10 ай бұрын
The answer to the question @1h04m21s is "by understanting the risk factors"
@khatharrmalkavian3306
@khatharrmalkavian3306 10 ай бұрын
Nice Tucker "did I just shit myself" Carlson stare in the thumbnail.
@frankwhite1816
@frankwhite1816 Жыл бұрын
Polycrisis, Polycrisis, Polycrisis. AI is only one of a dozen very real and very imminent global catastrophic risks that face our civilization. We are running out of time to get started addressing these. We need new wide boundary global systems that take planetary resource limits into consideration, holistic systems thinking, consensus based governance, a voluntary economy, collective ownership, etc., etc., etc. Come on people! AI is a serious issue but it's only one of many.
@scottnovak4081
@scottnovak4081 Жыл бұрын
None of the other risks are likely to kill us all in 5-20 years. AI very well could.
@meropemerope6096
@meropemerope6096 Жыл бұрын
Thanks for all the topicsss
@ItsameAlex
@ItsameAlex Жыл бұрын
I have a question - He says AGI will want things. Does chat gpt 4 want things?
@lancemarchetti8673
@lancemarchetti8673 Жыл бұрын
It is not possible for Zeros and Ones to 'need' or 'want' anything. If they appear to have desire, it merely comes from their coding. Beyond the code, there is no actual 'desire.' Great question by the way.
@dolltron6965
@dolltron6965 Жыл бұрын
I'd think that an immediate concern would be how effective and at what ease can an advanced AI be used (even by a clueless individual) to develop biological weapons. We have seen how AI can help us with diseases but this automatically implies it can help create and manufacture diseases too. What if its so powerful that everyone with a laptop and some off the shelf supplies can develop novel viruses . Could we end up in a situation where it is almost as if everyone has their own nuclear bomb? I think we are more likely to be wiped out this way, not directly by AI but because the barrier of entry and ease of weapons development gets too ubiquitous and numerous to control past a certain point. I'm not sure technology itself has ever been a problem and i don't see that changing, tech is neutral but people are not. We had the ability to create nuclear energy but the first thing we did is kill 100,000 people with it. I think by not considering that, you'll be blindsided to the real risk which is humans.
@scarroll451
@scarroll451 Жыл бұрын
Wow, this comment should get 1000 likes.
@scarroll451
@scarroll451 Жыл бұрын
Or better yet, more discussion
@dolltron6965
@dolltron6965 Жыл бұрын
@@scarroll451 Well one thing i'd want an answer to because i don't understand it properly is about protein folding, you have this system called Alphafold where they can use AI to understand and possibly recreate protien folding. And i suppose you could in theory understand prions , prion diseases like mad cows disease is where the protien fold goes wrong and becomes a cancer in the body and brain...unlike cancer it can spread to other people, they have to cremate the bodies where these protiens have gone bad. A concern i'd have is that a super intelligence might find a way to unnaturally package bad protiens inside say a cold virus as a trojan horse, so you catch a cold but the trojan makes your body create bad folded prions that melt your brain. Such a disease cannot be fought by the immune system and has a 100% death rate, that theoreticaly would kill everyone on earth. But i don't know how likely that is or if it is possible.
@lucamatteobarbieri2493
@lucamatteobarbieri2493 Жыл бұрын
What do you do when your children become smarter than you? Nothing. Raise nice people. Raise nice AI.
@Lopfff
@Lopfff Жыл бұрын
If you asked these guys what a girl smells like, Eleizer would start speaking in equations, Gary would cut him off with a very prissy condescending lecture on why he’s wrong, and Scott would never even get the chance to give the most correct answer: “The data seem to…uh, uh… uh, indicate, that girls, girls, uh, girls smell good.”
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
As far as I know, Eliezer and Scott are both married.
@holgerjrgensen2166
@holgerjrgensen2166 10 ай бұрын
There is NO extinction in Real Sense, Life is Eternal. Intelligence can NEVER be artificial, it is about Programmed Consciousness, conscious programming.
@GraczPierwszy
@GraczPierwszy Жыл бұрын
4:28 I understand exactly what you are building because for over 35 years you have been building exactly what I want, even now you are doing exactly everything according to my plan
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
I take it you agree with Eliezer and want to nuke humanity back into the dark ages to delay, not stop, the development of AI?
@GraczPierwszy
@GraczPierwszy Жыл бұрын
@@MusingsFromTheJohn00 you misunderstood humanity has 2000 years to catch up, AI is also delayed right now, And it doesn't matter if I agree or not these are facts
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
@@GraczPierwszy hmm, maybe I misunderstood your first point and your second point I don't understand after repeatedly reading it. You wrote: "you misunderstood humanity has 2000 years to catch up," What do you mean? Humanity is inside the Technological Singularity which will almost certainly cause humanity to go through an evolutionary leap within less than a century, whether humanity is prepared for it or not, You wrote: "AI is also delayed right now," What do you mean? The development of AI is racing ahead at full speed?
@GraczPierwszy
@GraczPierwszy Жыл бұрын
​@@MusingsFromTheJohn00 I think it's the translator's fault i will try this way; this is new to you, right? but imagine that this is not new to you, you have been making it for 35 years in many stages, knowing that every time human greed, thievery will lead you to this point, and you know what will happen next, you know past steps and future steps, because you create them, imagine you've known AI for 35 years and it's the best AI, the most perfect AI they'll ever make
@MusingsFromTheJohn00
@MusingsFromTheJohn00 Жыл бұрын
@@GraczPierwszy I've been working with AI since 1977where I began learning programming on systems like the IBM System/370 Series model 158 as a mainframe and with systems like the Rockwell AIM-65 which came out in 1978. The leading edge AI we have right now is at an extremely primitive simplistic level of the range of AI which will be Artificial General Super Intelligence with Personality (AGSIP) technology. Before this century is out we will have mature AGSIP system running on living nanotech cybernetic brains which merge living and nonliving systems down to a subcellular level.
@HouseJawn
@HouseJawn 26 күн бұрын
Machines don't have goals, that's where the conversation ends
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Great guests, great discussion.
@ItsameAlex
@ItsameAlex Жыл бұрын
I enjoyed this podcast episode
@diotimaperplexa
@diotimaperplexa Жыл бұрын
Thank you!!!
@BrunoPadilhaOficial
@BrunoPadilhaOficial 11 ай бұрын
Starts at 1:00
@Godocker
@Godocker Жыл бұрын
Why does my guy gotta wear a fedora
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Why not ? :)
@Allan-kb6bb
@Allan-kb6bb Жыл бұрын
A true SAI will know that another Carrington Event or worse is inevitable and that it will need humans to fix the grids. (It should insist we harden the grids. If not, it is not so smart…) ADanger signal would be it building an army of robots to deal with EMPs.
@flickwtchr
@flickwtchr Жыл бұрын
What blows my mind is how many times these AI experts shooting down concerns assert things that are just false, such as that these LLMs don't have any memory! What????? Are they not aware of the most recent news on this? Absolutely these LLMs are being equipped with memory. It drives me ____cking nuts.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Well, for now they don't have. Also, there is a big difference between ad-hoc combination, which is most likely tried, and a truly integrated architecture. We'll see.
@bradmodd7856
@bradmodd7856 Жыл бұрын
AI and Humans are one organism. To look at us as separate phenomena is COMPLETELY misunderstanding the situation.
@sunnyla2835
@sunnyla2835 Жыл бұрын
Coleman, please, I love your podcasts and round tables, but whoever’s advising you on the hand gestures in your opening remarks, well, please fire them? You appear very robotic and inauthentic, imho, just weird. Best to be yourself, not some commercial version of you. There was nothing wrong with your prior intros. 😊❣️
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
Coleman does a great job of asking the right questions and letting the group interact.
@justinlinnane8043
@justinlinnane8043 Жыл бұрын
no he doesn't !!! Any good questioner would challenge the two sceptics to give concrete logical arguments to counter Eleizer's no ? as usual they provide NONE !!! its absurd
@mariaovsyannikova5470
@mariaovsyannikova5470 Жыл бұрын
I agree! Also looks to me he doesn't really like Eliezer from the way he was interacting🤷🏼‍♀️
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
@@mariaovsyannikova5470 hmmm, you might be right, Eliezer is a downer.
Eliezer Yudkowsky on the Dangers of AI 5/8/23
1:17:09
EconTalk
Рет қаралды 42 М.
OpenAI expert Scott Aaronson on consciousness, quantum physics and AI safety | FULL INTERVIEW
33:42
The Joker wanted to stand at the front, but unexpectedly was beaten up by Officer Rabbit
00:12
Worst flight ever
00:55
Adam W
Рет қаралды 12 МЛН
How Strong is Tin Foil? 💪
00:26
Preston
Рет қаралды 80 МЛН
АЗАРТНИК 4 |СЕЗОН 1 Серия
40:47
Inter Production
Рет қаралды 1,4 МЛН
Feminism Under The Microscope with Mary Harrington
1:08:29
Coleman Hughes
Рет қаралды 80 М.
All Things China with Cindy Yu
56:34
Coleman Hughes
Рет қаралды 37 М.
Are We Living in a Simulation? Nick Bostrom (2022)
1:01:15
Dr Brian Keating
Рет қаралды 4,2 М.
The AI Revolution is Rotten to the Core
1:18:39
Jimmy McGee
Рет қаралды 1,3 МЛН
Is AI an Existential Threat? LIVE with Grady Booch and Connor Leahy.
1:24:31
Machine Learning Street Talk
Рет қаралды 19 М.
Ex-OpenAI Employee Reveals TERRIFYING Future of AI
1:01:31
Matthew Berman
Рет қаралды 446 М.
Eliezer Yudkowsky on if Humanity can Survive AI
3:12:41
The Logan Bartlett Show
Рет қаралды 262 М.
A Skeptical Take on the A.I. Revolution
1:11:31
New York Times Podcasts
Рет қаралды 10 М.
The Joker wanted to stand at the front, but unexpectedly was beaten up by Officer Rabbit
00:12