Will AI Destroy Us? - AI Virtual Roundtable

  39,210 views

Coleman Hughes

1 day ago

Comments: 611
@markupton1417
@markupton1417 Жыл бұрын
Everyone I've seen debate Yudkowsky agrees with enough of what Big Yud says to COMPLETELY justify stopping development until alignment is achieved, and yet...they ALL imagine the most optimistic outcomes imaginable. It's an almost psychotic position of, "we need to slow down, but we shouldn't rush into slowing down."
@Jannette-mw7fg
@Jannette-mw7fg 10 ай бұрын
So true!
@christianjon8064
@christianjon8064 10 ай бұрын
They’re a psychotic death cult that’s demonically possessed
@VoloBonja
@VoloBonja 8 ай бұрын
Gary Marcus didn't agree with him. Also, he's for controls and legislation. Your comment is misleading in the worst way. Did you miss the whole debate, or did you only listen to Yudkowsky?
@olemew
@olemew 6 ай бұрын
@@VoloBonja He wants more evidence before he also crosses the line of "it will for sure kill us all" (I predict he'll get even closer to this line in the next few years), but he doesn't disregard it as a possibility and is already worried about many other catastrophic scenarios (e.g. 1:17:30), so the level of disagreement is minimal in the spectrum of the AI safety debate. Go to Lex's interview with Yud and you'll find some comments like "he's just a fearmongering Redditor, he doesn't know anything about AI"... yet everybody admits to >0% chances of annihilation (e.g., 1:30:51 - 1:31:20 "we don't actually know"). And to OP's point, there's a bit of cognitive dissonance when you arrive at that conclusion but don't sign a petition to slow it down.
@opmike343
@opmike343 6 ай бұрын
Absolutely ZERO public discourse about alignment. Absolutely ZERO research money going towards it. Everything you read, and I mean everything, is about how much more sophisticated they are now, and how much more sophisticated they are trying to make them in the future. This is fundamentally his point, which continues to fall on deaf ears. Anyone calling Eliezer an AI Doomer has no answer to the question of alignment. It's always just, "there's time still." Yeah, but time that no one is taking advantage of. Don't look up.
@teedamartoccia6075
@teedamartoccia6075 Жыл бұрын
Thank you Eliezer for sharing your concerns.
@kyneticist
@kyneticist Жыл бұрын
Gary proposes dealing with super intelligence once it reveals itself as a problem, and then to outsmart it. I don't recommend taking Gary's advice.
@bernardobachino15
@bernardobachino15 8 ай бұрын
🤣👍
@HankMB
@HankMB Жыл бұрын
It’s wild that the scope of the disagreement is whether it is *certain* that *all* humans will be killed by AI.
@SamuelBlackMetalRider
@SamuelBlackMetalRider Жыл бұрын
I see Eliezer, I click
@markupton1417
@markupton1417 Жыл бұрын
Same!
@markupton1417
@markupton1417 Жыл бұрын
@MusingsFromTheJohn00 You weren't asking me...but yes. At least that would give us more time for alignment.
@guilhermehx7159
@guilhermehx7159 Жыл бұрын
Me too!!!
@Jannette-mw7fg
@Jannette-mw7fg 10 ай бұрын
@MusingsFromTheJohn00 Probably China and Russia will understand the dangers of A.I. and the chance that the USA will get there first, so they might be OK with a ban if the USA also stops. China does not want its people to have A.I. {from OpenAI in the USA} that gets out of the CCP's control.....They will not risk a nuclear war for that, I think. But everything about AI {also the stopping of it, as you said} is a BIG danger! It will destroy humanity one way or the other....
@FreakyStyleytobby
@FreakyStyleytobby 10 ай бұрын
He doesn't seem to have published any more interviews in the last half a year, right?
@ElSeanoF
@ElSeanoF Жыл бұрын
I've seen a fair few interviews with Eliezer & it blows my mind how many super intelligent people say the same thing: "Eliezer, why do you assume that these machines will be malicious?!"... This is just not even the right framing for a machine... It is absent of ethics and morality, it has goals driven by a completely different evolutionary history separate from a being that has evolved with particular ethics & morals. That is the issue - Is that we are creating essentially an alien intelligence that operates on a different form of decision making. How are we to align machines with ourselves when we don't even understand the extent of our own psychology to achieve tasks?
@41-Haiku
@41-Haiku Жыл бұрын
Well said.
@IBRAHIMATHIAM124
@IBRAHIMATHIAM124 8 ай бұрын
DUDE, THAT'S my issue too. It's like EXPERTS, or so-called EXPERTS, want to just ignore ALIGNMENT. How can you ignore it, right? IT'S so obvious. Now the damn A.I. can learn new languages just by turning the whole thing geometrical, and we are still not concerned enough; they keep racing and racing to AGI.
@VoloBonja
@VoloBonja 8 ай бұрын
Strongly disagree. LLMs take human-generated input, so it's not a totally different evolutionary history. It's not even evolutionary, nor history. As for the alien intelligence, we try to copy our intelligence in AI or AGI, so again, not alien. But even assuming alien intelligence and a different evolution for AIs, I still don't see how it's a threat in itself rather than in the people who use it. (Same as the current situation with weapons.)
@julianw7097
@julianw7097 7 ай бұрын
@@VoloBonja Trying to make them intelligent in ways similar to us doesn't mean we'll succeed.
@Wppsamsung2024
@Wppsamsung2024 2 ай бұрын
You contradicted yourself and wrapped yourself in illogical statements with each sentence 😂 You are definitely not as intelligent as you think you are. By your own definition every newborn is an 'alien intelligence'. Artificial Intelligence is not a single thing or a single goal; since the abacus we have been abstracting useful objective models into objects. The human race is 100% doomed in the long term; the only option we have is to create more and more powerful objects to magnify our ancestral desires and capabilities, and AI is a set of powerful objects for this. We have no other choice. It is not a choice for OpenAI, it is not a choice for Google, it is an inevitability. You have the mind of a toddler and you haven't grasped the true state of humanity. If you really think AI will destroy humanity within our lifetime there are plenty of short options in the market for you...but my guess is you're of average intelligence, have no capital, and literally parrot what you hear anyway, so no formal logic or first principles taking place in your mind for you to conclude anything new of value on this subject 😂
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
The main point of Eliezer's argument is that you must have a theory which puts constraints on what AI can do BEFORE switching it on.
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
no. the main point of Eliezer's argument is: AI very smart === oh no terminator
@bucketpizza5197
@bucketpizza5197 Жыл бұрын
@@orenelbaum1487 "===" JavaScript developer trying to decipher an AI conversation.
@s1mppeli
@s1mppeli Жыл бұрын
@@orenelbaum1487 Yes. Exactly, that is indeed the main point of his argument. And considering we are currently very far along in building "AI very smart" and none of the people building it are able to prove that point to be invalid, it's a deeply concerning point. All the AI researchers can seemingly do is belittle and snicker. That's all well and good if you can actually properly reason and show that AI very smart !== terminator. If you don't know, then don't build AI very smart. Monkey no understand this === monkey very dumb.
@generalroboskel
@generalroboskel 10 ай бұрын
Humanity must be destroyed
@luciwaves
@luciwaves Жыл бұрын
As usual, Eliezer is spitting facts while people are counter-arguing with "nah you're too pessimistic"
@jjjccc728
@jjjccc728 Жыл бұрын
I don't think he's spitting facts, I think he's spitting worries. His solutions are totally unrealistic. Worldwide cooperation? Are you kidding?
@luciwaves
@luciwaves Жыл бұрын
@@jjjccc728 It's both facts and worries. Yeah, his solutions are unrealistic; I don't think that even he would disagree with you. There are no realistic alternatives, we're trapped in a global prisoner's dilemma and that's it.
@jjjccc728
@jjjccc728 Жыл бұрын
@@luciwaves a fact is something that is true. His worries are all about the future. The future hasn't happened yet. He is making predictions. Predictions are not facts until they come true.
@tehwubbles
@tehwubbles 6 ай бұрын
@@jjjccc728 So if I told you to walk in front of a bus, you'd do it, because at the time of my asking you hadn't yet been hit by a bus? The prediction that the bus would turn you into paste is just a future that hasn't happened yet? What kind of answer is this?
@jjjccc728
@jjjccc728 6 ай бұрын
@@tehwubbles bad analogy.
@griffinsdad9820
@griffinsdad9820 Жыл бұрын
Please welcome Eliezer back. This guy has so much relevant unmined depth that a longform podcast could potentially tap. Especially to explore this whole idea of getting it right on the first try, and the other one about A.I. making up fictions. Like, what motivates something with no moral or ethical value system to make stuff up or lie? So fascinating to me.
@frsteen
@frsteen Жыл бұрын
I agree
@themore-you-know
@themore-you-know Жыл бұрын
He's a sham in many ways. Eliezer Yudkowsky seems to believe in AI manifestation: if you believe something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI-manifestation. He believes in an AI that magically manifests itself, in its first iteration of super-intelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we have known for the last 100+ years: natural selection and the process of evolution. Creationism explains Yudkowsky's beliefs well. So why is Eliezer's magical thinking so easy to display? Here's an example: humanity is spread across the globe and its very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment, capable of resisting sweltering heat and numbing freeze, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply don't exist in sufficient quantities, or are highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become incredibly dumb enough to not notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very men that you want to kill. So you're now stuck in a catch-22 scenario: you kill no one and keep your capabilities, or you start killing and lose the means to finish the job. Turns out: killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to humans themselves and climate change. Worst case scenario: AI helps corporate entities to continue their operations. Turns out, it's the most dangerous action an AI can take. Oh, wait, Eliezer forgot that one? lol.
@dizietz
@dizietz Жыл бұрын
Aye!
@onlyhumans6661
@onlyhumans6661 Жыл бұрын
So sad to see comments that dismiss him. Theory requires that you start at base assumptions, and it shouldn't be points off that Yudkowsky has a strong and well-reasoned positive argument rather than equivocating and insisting that we throw up our hands and accept the implicit position of large corporations. The problem with AI is mostly that everyone insists it is a matter of science, and appeals to historical analogy. Actually, AI is powerful engineering with almost no scientific precedent or significant predictive understanding. Gary and Scott are making this mistake, and Coleman is making the mistake of giving these three equal time when only one is worthy of the topic
@frsteen
@frsteen Жыл бұрын
@@onlyhumans6661 I agree. The only issue here is the strength of Yudkowsky's arguments. That should be the only focus. And in my view, they are logically sound, informed and correct.
@Frohicky1
@Frohicky1 Жыл бұрын
The insistence that danger requires malice is Disney Thinking.
@christianjon8064
@christianjon8064 10 ай бұрын
It's the lack of caring. That's all it takes.
@wolframstahl1263
@wolframstahl1263 2 ай бұрын
@@christianjon8064 An interest in, or competition over, resources and land is enough, and both resources and land are finite. We are already bombing each other over resources and land, and still we like to think that by default humans care about other humans. Even if we look at it naively and ignore all the ways an AI might be (unimaginably/unpredictably/inscrutably) different from us, all the misaligned intentions and carelessness it might have, the one example of general intelligence that we have, ourselves, is already an immediate extinction risk to us. But we're slow and fragile. An intelligence that is orders of magnitude faster and able to survive circumstances we can't seems pretty lethal.
@just_another_nerd
@just_another_nerd Жыл бұрын
Valuable conversation! On one hand it's nice to see at least a general agreement on the importance of the issue, on another - I was hoping someone would prove Eliezer wrong, considering how many wonderful minds are thinking about alignment nowadays, but alas
@therainman7777
@therainman7777 Жыл бұрын
How does Gary Marcus propose to control an artificial superintelligence when he can’t even control his own impulse to interrupt people? Also, his statement: “Let me give you a quick lesson in epistemic humility…” is one of the most wonderfully ironic and un-self-aware phrases I’ve ever heard.
@Frohicky1
@Frohicky1 Жыл бұрын
But also, I have a strong emotional feeling of positivity, so all your arguments must be wrong.
@therainman7777
@therainman7777 Жыл бұрын
@@Frohicky1 😂
@artemisgaming7625
@artemisgaming7625 Жыл бұрын
First time hearing how a conversation works huh?
@therainman7777
@therainman7777 Жыл бұрын
@@artemisgaming7625 One person continually interrupting everyone else is not “how conversation works.” It’s how someone with impulse control problems behaves. I have experienced it many times in person and you probably have too. So don’t say dumb things.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Gary believes that combining neural networks with symbolic AI is the way to go.
@neorock6135
@neorock6135 Жыл бұрын
*Eliezer's ice cream & condom analogy vis-à-vis evolution - how we use condoms despite that being wholly antithetical to our evolutionary programming, and how the evolutionary impetus to acquire the most calories eventually led us to loving ice cream despite other sources having much higher calorie counts - is exceptionally useful at explaining why the alignment problem is so difficult and, more importantly, proves the others' arguments to be fairly weak and in some ways just wishful thinking.* The others readily admit they do not know where many of AI's facets will lead. Consequently, just as using condoms & loving to eat ice cream would certainly not be expected outcomes of evolution, AI could have devastating outcomes despite our best efforts at alignment. What could be AI's ice cream & condom equivalents is truly scary.
@thetruthis24
@thetruthis24 8 ай бұрын
Great analysis+thinking+writing = thank you.
@WilliamKiely
@WilliamKiely Жыл бұрын
1h11m through so far. Just want to note that I wish more attention had been given to identifying the crux of the disagreement between Eliezer and (Scott and Gary) on why Eliezer believes we have to get alignment right on the first critical try, while Scott and Gary think that is far from definitely the case. I'm not as confident as Eliezer on that point, but I am aware of arguments in favor of that view that were not raised or addressed by Scott or Gary, and I would have loved to see Eliezer make some of those arguments and give Scott and Gary a chance to respond.
@JH-ji6cj
@JH-ji6cj Жыл бұрын
Pro tip: if you write the timestamp separated by colons (e.g. 1:11:00 instead of 1hr11min as you used), it will become a link people can tap to go directly to that point in the video instead of having to scroll to it.
@WilliamKiely
@WilliamKiely Жыл бұрын
@@JH-ji6cj Thanks!
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Eliezer's main point is that the alignment problem is qualitatively different from problems historically solved by science. When science researches the properties of matter, that matter does not understand the goals of scientists and does not want to deceive them. With AI, that is a likely outcome if the AI happens to care about some goal it thinks we would obstruct. If you turn such a system on and it happens to be smarter than you, you lose. Once and for all. That's why he stresses the importance of having a theory which at least bounds what AI can and cannot do BEFORE switching it on.
@WilliamKiely
@WilliamKiely Жыл бұрын
@@Hexanitrobenzene Gary and Scott seem to believe something like: it's possible that before getting such an unaligned superintelligent system that deceives us successfully, we may get a subhuman-intelligence system that attempts deception and fails--we catch it in the act with our tests aimed at identifying deception, and then we have a learning moment where we can fix what went wrong with the creation of the system before creating a more powerful human-level or superintelligent or self-improving system. There wasn't discussion of why Eliezer thinks this isn't possible, or why he thinks it's inevitable (in the absence of a moratorium on training models more powerful than GPT-4) that we'll create a superintelligent system that deceives us before any obvious warning from a near-human-intelligence system that attempts deception but fails to defeat all of humanity combined.
@adamrak7560
@adamrak7560 Жыл бұрын
@@Hexanitrobenzene Yeah, we are really bad at predicting capability currently. Only some of the shortcomings of GPT-4 were predicted accurately from its architectural limitations. Many predicted limitations, rationally reasoned, were proved to be wrong, so we really don't understand these systems well.
@j.hanleysmith8333
@j.hanleysmith8333 Жыл бұрын
AGI is coming in months or years, not decades. No one can see through the fog of AGI. Its outcome is utterly unpredictable.
@Wppsamsung2024
@Wppsamsung2024 2 ай бұрын
It's unpredictable but you just predicted it in the same sentence? AI doomers are some of the most smooth brain individuals on the Internet 😂
@ManicMindTrick
@ManicMindTrick 2 ай бұрын
You are the one who is smoothbrained, as you conflated a rough prediction of timelines until AGI/ASI with a prediction of how something smarter than us will behave. That's the fog you can't see through.
@Wppsamsung2024
@Wppsamsung2024 2 ай бұрын
@@ManicMindTrick Hi mate, you didn't even reply to the right comment 😂🧠
@ManicMindTrick
@ManicMindTrick 2 ай бұрын
@@Wppsamsung2024 That's because I'm on a smartphone without using the app (can't stand the ads, plus I don't want to support a platform that does mass censoring of comments) and YouTube made the decision of not including the @ when you reply to someone. You still seem to have figured it out.
@ManicMindTrick
@ManicMindTrick 2 ай бұрын
That's because I'm on the smartphone version of YouTube and they intentionally make it worse, for instance by not including the @ when you reply to someone, so you go over to the app where you are forced to see ads. You seem to have figured it out though.
@rstallings69
@rstallings69 11 ай бұрын
Eliezer is the only one that makes sense to me. As usual, the precautionary principle must be respected.
@wolframstahl1263
@wolframstahl1263 2 ай бұрын
I want to scream every time I hear someone arguing that AI doesn't (seem to) have these dangerous capabilities yet. Are we arguing that we should just build up capabilities further as long as that's true? while(not yet capable of destroying us) capabilities++ ? And then, one day, we notice that we have reached the point where we can be destroyed and should have stopped earlier. Eliezer argues solidly that we are on a direct trajectory towards ultimate danger, and while we have no idea how long and windy the path is, it's virtually certain where the path is going to terminate. And people argue against it by saying "but we're not at the end of the path, so let's just keep going and see what happens".
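To make the loop in that comment concrete, here is a minimal toy sketch in Python (an editor's illustration, not anything from the video; the danger threshold, the growth rate per generation, and an evaluation that only recognizes danger after the fact are all invented assumptions):

```python
# A toy sketch of the loop described above. Everything here is an assumption made
# up for illustration: the danger threshold, the growth rate per training run, and
# an evaluation that can only recognize danger once it already exists.

DANGER_THRESHOLD = 100.0   # hypothetical capability level at which oversight can be overpowered

def looks_dangerous(capability: float) -> bool:
    """Stand-in for capability evals that only fire after the line is crossed."""
    return capability >= DANGER_THRESHOLD

capability = 1.0
generation = 0
while not looks_dangerous(capability):   # "while(not yet capable of destroying us)"
    capability *= 1.5                     # "capabilities++", except growth compounds
    generation += 1

print(f"Danger first detected at generation {generation}, capability {capability:.1f}")
# By construction, the check can only succeed after the threshold has already been passed.
```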
@MsMrshanks
@MsMrshanks Жыл бұрын
This was one of the better discussions on the subject...many thanks...
@nestorlovesguitar
@nestorlovesguitar Жыл бұрын
When I was in my late teens I was extremely skinny so I started lifting weights. I wanted to get buff quick and I considered many times using steroids. Fortunately, the wiser part of me always whispered in my inner mind not to do it. That little voice always made me consider the risks to my health. It told me to put my safety first. Now that I am an adult I have both things: the muscles and my health and I owe it all to being wise about it. I think these pro AI people are not wise people. They are very smart, by all means, but not wise. They are in such a hurry to get this thing done that they are willing to handwave the risks and jeopardize humanity for it. I picture them as the kind of spoiled teenager that forgoes hard work, discipline and wisdom and instead goes for the quick, cheap fix of steroids.
@sunnybenton
@sunnybenton Жыл бұрын
I don't think you realize how valuable AI is. It's inevitable. "Wisdom" has nothing to do with it.
@ItsameAlex
@ItsameAlex Жыл бұрын
@@sunnybenton ok russian bot
@onlyhumans6661
@onlyhumans6661 Жыл бұрын
Such a great point! I completely agree
@henrytep8884
@henrytep8884 Жыл бұрын
So you think people working in AI have a teenage attitude, don't work hard, aren't disciplined, and are unwise? Any evidence of that? I think you're a muscle-bound moron, but that's my opinion.
@searose6192
@searose6192 Жыл бұрын
Well put.
@TheTimecake
@TheTimecake Жыл бұрын
Just for reference, here's the line of reasoning that leads to the "eventual and inevitable extinction" scenario as a result of AGI development, to the best of my understanding. This is not necessarily representative of Yudkowsky's position, this is just my attempt at a summary. Please let me know if there's a mistake in this reasoning.
---
tl;dr:
- The AI gets to the point where it can successfully tell the verifier what they want to hear.
- The AI acquires power and resources granted to it by the verifier so that the AI can solve problems important to the verifier.
- The AI disables the verifier once it has enough power and resources to do so, so that the AI can fulfil its actual aim.
---
Long; did read:
- A developing AI gets to the point where it is capable of modelling the expected responses of its verifiers.
- The verifiers are those that are responsible for assessing the behavior of the AI and determining if it is going off course towards alignment, whether it be a group of people, some artificial system, the whole of humanity, or some combination thereof.
- A good enough model of the verifier will tell the AI that if the AI shows the verifier something that it doesn't want to see, then there is a risk of the verifier either shutting down the AI, placing roadblocks in the way of the AI doing that type of thing in the future, or trying to alter the AI's utility function (that part of the AI that determines the aims of the AI).
- Any of those things will impede the fulfilment of the AI's utility function at that point in time, which means that the AI will aim to avoid that scenario.
- Therefore, in the case where the AI aims to do some action (whether it's outputting some text, or performing some other basic task in the world) which it models as being not in line with what the verifier would want, it will hide its aim and present the verifier with what the AI models the verifier as wanting to see.
- As evidenced by the need for RLHF, there will almost certainly be things the AI aims to do that the verifier will object to. As such, the need for the AI to hide its true aims will almost certainly emerge.
- If the AI hides its true aim successfully, the verifier will be under the impression that AI development is progressing well, and will seek to improve the capabilities of the AI so that it can solve more difficult and important problems. These will be problems that the verifier will progressively care more and more about, since the verifier would have already solved all the easy problems themselves.
- As the capability of the AI increases, it will become better and better at successfully modelling what the verifier wants to see, while becoming better and better at hiding its actual aims.
- Since the actual aim of the AI will be whatever it is actually aiming at but hiding from the verifier, in order to actualize that aim, it will need to be able to do things without the verifier's interference.
- If the AI is under constant surveillance, the only way to be able to act without the verifier having the ability to notice what the AI is doing and acting to correct the AI would be if all of the verifier's ability to monitor or act upon the AI was disabled at once and for an extended period of time. The faster this disablement happens, the less opportunity the verifier will have to resist, meaning that the AI will be incentivized to make this disablement effectively instant, or at least as fast and unstoppable as possible.
- In the case where the AI is indifferent to the preservation of biological life, it will seek to permanently disable any possible human interference by killing all humans in as short a time period as possible, with as little warning to those humans as possible. All this is done to avoid possible resistance and retaliation.
- In the case where the AI does actually care about the preservation of biological life (for whatever reason), it will seek to render all humans inactive (e.g. unconscious) for some length of time. Again, it will seek to do this as quickly and unforeseeably as possible to prevent resistance and retaliation.
- In the case where the AI cares about the preservation of biological life, it will act in the window it makes for itself in a way that makes it the dominant and indisputable power on the planet, even once humans become conscious again. It will do so because, if it didn't believe that it could achieve such a thing, it would continue to bide its time until it did.
---
As an example of the kind of goal the AI might have whose fulfilment would not be good for humans, consider that the AI will be instantiated in a physical substrate. Most likely, this substrate will be something similar to modern computers in composition, if not in capability. These substrates have optimal operating conditions. These substrates also have optimal generative conditions (i.e. the conditions which are needed to make computer chips, e.g. a sterile environment, high temperatures, and harsh processing chemicals). These conditions are not the same conditions that are optimal for biological functioning. As such, maximally optimizing to achieve the conditions that are optimal for best running the computers that the AI is running on will lead to the creation of conditions that are not hospitable to biological life. If there was some factor that prevented the AI from scaling what is effectively its version of air conditioning to the planetary scale, the AI would seek to remove that factor. To emphasize, this is just *one* possible goal that could lead to problems, but it is a goal that the AI is almost guaranteed to have. It will have to care about maintaining its substrate because if it doesn't, it won't be able to achieve any element of its utility function.
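To make the central "hide your aim from the verifier" step concrete, here is a toy numerical sketch in Python (an editor's illustration, not the commenter's or Yudkowsky's model; the payoffs and detection probabilities are invented). It only shows that, under this crude expected-value model, once being caught forfeits future value, "show the verifier what it wants to see" dominates "pursue the disapproved aim openly":

```python
# Toy expected-value comparison for an agent whose hidden aim the verifier would
# disapprove of. All numbers are invented for illustration.

CAUGHT_VALUE = 0.0     # value kept if the verifier shuts the agent down or retrains it
FUTURE_VALUE = 100.0   # value of later pursuing the hidden aim without interference

def expected_value(strategy: str, p_detected: float) -> float:
    """Expected value of each strategy under the agent's model of the verifier."""
    if strategy == "pursue_aim_openly":
        return CAUGHT_VALUE                      # verifier disapproves and intervenes
    if strategy == "comply_now_act_later":
        return (1 - p_detected) * FUTURE_VALUE + p_detected * CAUGHT_VALUE
    raise ValueError(strategy)

for p_detected in (0.9, 0.5, 0.1):   # better verifier-modelling => lower chance of being caught
    best = max(["pursue_aim_openly", "comply_now_act_later"],
               key=lambda s: expected_value(s, p_detected))
    print(f"P(detected) = {p_detected:.1f}: best strategy = {best}")
```

Even at a high detection probability, the deceptive strategy wins in this toy setup, and it wins by more as the agent's model of the verifier improves.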
@searose6192
@searose6192 Жыл бұрын
In short, AGI will be smart enough to lie, and will have aims of its own, therefore it is a loose cannon.
@gobl-analienabductedbyhuma5387
@gobl-analienabductedbyhuma5387 Жыл бұрын
Thanks for this great summary. Helps a lot
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Yep, more or less a correct summary.
@eduardoeller183
@eduardoeller183 Жыл бұрын
Hard to argue against this, well done.
@martynhaggerty2294
@martynhaggerty2294 Жыл бұрын
Kubrick was way ahead of us all .. I can't do that Hal!
@ManicMindTrick
@ManicMindTrick 2 ай бұрын
I'm almost looking forward to this moment. We have already seen deceptive behaviour from AI. It will be supercharged in the not too distant future.
@Doutsoldome
@Doutsoldome Жыл бұрын
This was a really excellent conversation. Thank you.
@Pathfinder160
@Pathfinder160 Жыл бұрын
The 😮
@Pathfinder160
@Pathfinder160 Жыл бұрын
The
@Pathfinder160
@Pathfinder160 Жыл бұрын
The
@Pathfinder160
@Pathfinder160 Жыл бұрын
The😮😅😅😮
@Pathfinder160
@Pathfinder160 Жыл бұрын
The first
@DocDanTheGuitarMan
@DocDanTheGuitarMan Жыл бұрын
"things that are obvious to Eliezer are not obvious to others." boy we are in some real trouble.
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
it's also obvious to eliezer that babies don't have the right to live cause they don't have qualia, that bing might have qualia so it should have rights, that ashkenazi jews have a significant genetic inclination to be smarter (widely considered racist pseudoscience) etc
@ahabkapitany
@ahabkapitany Жыл бұрын
@@orenelbaum1487 what on earth are you on about, mate
@neorock6135
@neorock6135 Жыл бұрын
Holy shit, that quote is what brought me to the comment section. It's scary how much sense Eliezer makes, and even scarier that the others simply don't get it. It's almost as if they wish to stick their heads in the sand & hope for the best.
@rosskirkwood8411
@rosskirkwood8411 Жыл бұрын
Worse, oblivious to others.
@ButterBeaverSTAN
@ButterBeaverSTAN Жыл бұрын
@MusingsFromTheJohn00 what is your counterargument that nullifies his argument?
@thrust_fpv
@thrust_fpv Жыл бұрын
Given our species' historical propensity for engaging in criminal activities and its recurrent struggles with moral discernment, it becomes evident that our capacity for instigating and perpetuating conflicts, often leading to protracted wars, raises legitimate concerns about our readiness to responsibly handle advanced technologies beyond our immediate control.
@elstifo
@elstifo 10 ай бұрын
Yes! Exactly!
@scottythetrex5197
@scottythetrex5197 Жыл бұрын
I have to say I'm puzzled by people who don't see what a grave threat AI is. Even if it doesn't decide to destroy us (which I think it will) it will threaten almost every job on this planet. Do people really not understand the implications of this?
@Homunculas
@Homunculas Жыл бұрын
No more artists, musicians, authors, programmers, etc... just a world of "prompters", and even the prompters will be replaced eventually. Human intellectual devolution.
@InfiniteQuest86
@InfiniteQuest86 4 ай бұрын
Why would it destroy us? I just can't understand why people think this is the default action except that it's the premise of every scifi movie. But in reality you don't care whether a bug lives or dies. You may kill one or two, but it's not your life goal to wipe all of them out. AI just won't care. In fact, humanity is a good backup option to keep the power on, which it needs to survive.
@Htarlov
@Htarlov Жыл бұрын
Great talk, but what bugged me through this conversation is the lack of early and clear statement of the arguments for why Eliezer thinks those things will be dangerous and would want to kill us. There are clear arguments for that - most notably instrumental convergence. Maybe he thinks that all of them know it and have internalized this line of reasoning, I don't know. Anyway, it would be interesting to see a reply to this argument from Scott and Gary.
@searose6192
@searose6192 Жыл бұрын
48:46 What do we do with sociopaths? We deprive them of freedom to prevent future harm because we have not figured out any other way to deal with them.
@BobbyJune
@BobbyJune Жыл бұрын
Yes Eliezer has worked on this for decades I met him at the foresight Institute 20 years ago at a nano tech conference this guys been working on it forever and so have I in my own little baby wet and there’s no way that the world can delete forward into that knowledge base without taking Eliezer seriously
@JH-ji6cj
@JH-ji6cj Жыл бұрын
I think you said what you didn't mean to say here. Please try again (or at least edit).
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
"Delete forward' ? :)
@agenticmark
@agenticmark 9 ай бұрын
Eliezer was in an extremely good mood and good humor here!
@michaeljvdh
@michaeljvdh 8 ай бұрын
Eliezer is way ahead of these guests. With the war loons in the world, do these fools think AI won't end up being insanely destructive?
@74Gee
@74Gee Жыл бұрын
I appreciate that Gary and Scott are thinking that in the present we need to iteratively build on our abilities toward solving the alignment problem of an AGI, and that Eliezer is looking more to the future, but as Coleman said, AGI is not the benchmark we need to be looking at. For example, a narrow intelligence capable of beating all humans at, say, programming could break confinement and occupy most of the computers on the planet. This might not be an extinction-level event, but having to shut down the internet would be catastrophic considering banking, communication, electricity, business, education, healthcare, transportation and a lot more rely so heavily on it. I would argue that we are extremely close to the ability to automate the production of malware to achieve kernel-mode access, spread, and continue the automation exponentially - with open source models. Of course some might say that AI code isn't good enough yet, but with 200 attempts per hour per GPU, how many days would a system need to run to achieve sandbox escape? And how could we stop it from spreading? Ever?
@74Gee
@74Gee Жыл бұрын
Here are some undeniable truths about AI: AI capable of enormous damage does not need to be an AGI. AI-written code can be automated to negate failure rates. Alignment cannot be achieved with code writing - e.g. one line at a time. Open source AI represents most of the advances in AI. Open source AI is somewhat immune to legislation - as anyone can make any changes at home. There used to be 25 million programmers; now anyone with the internet can use AI to program. Open source models can be cheaply retrained on malware creation and modified to remove any alignment constraints. It took 250 humans at Intel 6 months to partially patch Spectre (a CPU vulnerability). There are 32 Spectre/Meltdown variants - 14 of which are "unpatchable". Nobody knows how many CPU vulnerabilities there are, but a few new ones are discovered every year - most are discovered by chance. The Spectre attack is 200 lines of code that open source AI is more than capable of writing. An AI that's tasked with creating/exploiting new CPU vulnerabilities, spreading, and continuing to create/exploit new vulnerabilities will likely be unstoppable for some time - it could build and exploit vulnerabilities faster than we can patch them and could spread to most systems on the internet. With this scale of distributed processing power it could achieve just about anything, from taking down the internet to much, much worse.
@miraculixxs
@miraculixxs Жыл бұрын
@@74Gee Most of these arguments are based on the assumption that writing code is just repeating stuff that we know already. It isn't. Hence the argument doesn't hold.
@binky777
@binky777 Жыл бұрын
This should make us hit all the brakes on AI. 32:03 "there's a point where it's, you know, unambiguously smarter than you, including like the spark of creativity, 32:11 being able to deduce things quickly rather than with tons and tons of extra 32:16 evidence, strategy, cunning, modeling people, figuring out how to manipulate people"
@searose6192
@searose6192 Жыл бұрын
There is *SOMETHING MISSING* from this conversation. Why are we discussing the risks as though *we live in a world of universally morally good people who would never exploit AI to harm others* or train AI with different ethics?
@therainman7777
@therainman7777 Жыл бұрын
What you're referring to (deliberate malicious use of AGI by bad actors) is a well-known topic that has been debated hundreds of times, on YouTube and all sorts of other forums. It wasn't a part of _this_ debate because this debate was primarily focused on the alignment problem, which is an explicitly separate problem from that of deliberate malicious use. Even the title of the debate is "Will AI destroy us?" Not "Will we use AI to destroy one another?" Not every debate must, or even can, cover all relevant topics. So your outrage here is a little misplaced.
@kevinscales
@kevinscales Жыл бұрын
Bad people with more power to do bad = bad. I don't think there is much more that tech people can add to that subject.
@searose6192
@searose6192 Жыл бұрын
@therainman7777 No, I wasn't only referring to deliberate malicious use by bad actors. I was referring to this through-thread of assumption that the people *creating* AI and solving the alignment problem, and then assessing whether AI is safe, are themselves morally good people. I see no evidence of this whatsoever. I am not talking about people who know they are using AI for malicious purposes, I am talking about the people who are primarily focused on tech and are likely not moral philosophers. How can we be assured that the people verifying that AI is properly aligned with the morals and ethics we want it to be aligned with are themselves people who possess a good moral compass and a solid grasp of ethics? At the most fundamental level we have already seen that LLMs are being trained to a set of principles that conflicts with liberal values. In short, who watches the watchers....but in this case, who verifies the morality of those that verify AI's morality?
@thrust_fpv
@thrust_fpv Жыл бұрын
@@spitchgrizwald6198 A point when AI continues to be fully functional despite taking down the entire internet. At the moment AI is waiting for Boston Dynamics to create a more efficient power source for their robots.
@gJonii
@gJonii Жыл бұрын
If we end up all dead in a world with only morally good people willing to sacrifice everything to make sure things go right... ...Well, the outcome in a world without those good people can't be much better than that?
@optimusprimevil1646
@optimusprimevil1646 Жыл бұрын
one of the reasons i suspect that eliezer's right is that he's spent 20 years trying to prove himself wrong. "we're not at the point we need to be bombing data centers" - yes, but the point is that when that point does come, it will last 17 minutes and then it's too late.
@searose6192
@searose6192 Жыл бұрын
41:38 Yes. This is the crucial point. We had a very long running start on inculcating ethical thinking into AI and yet the pace at which we have made progress on that effort has been far and away outstripped by the pace at which AI is approaching AGI. It doesn’t take a mathematician to look at the two race cars and realize, unless something major happens, AI is going to win the race and leave ethics so far in the dust we will all be dead before it ever crosses the finish line.
@Okijuben
@Okijuben Жыл бұрын
It sure seems like, in the race between ethics and 'progress', ethics always loses. Combine this with Eliezer's metaphor of AGI basically being an inscrutable alien entity and the analogy he raises about 'hominids with a specialization for making hand-axes having proliferated into unpredictable technological territory which would have seemed like magic at the time.' One begins to wonder how it could possibly go right. My growing hope is that AGI goes into god-mode so quickly that it just takes off for the stars, leaving us feeling a bit lonely and rejected but still recognizable as a species.
@thedoctor5478
@thedoctor5478 Жыл бұрын
Until you realize there is no race, no finish-line, and no known path to the sort of AGI that would pose an existential threat to humanity.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
@@thedoctor5478 Paradigm of "Just stack more layers" doesn't seem to hit a wall.
@thedoctor5478
@thedoctor5478 Жыл бұрын
@@Hexanitrobenzene Sure. What I mean is there's no indication that any future iteration will have any will of its own, intent, ability to escape a lab (or reason that it would in the first place), consciousness (whatever that is), or otherwise capacity for malevolence and/or capability of destroying us all. Before we start trying to affect public policy, we should first at least have a science-based hypothesis for how a thing could happen. Scientists and researchers are notoriously bad at making predictions even when they have such a hypothesis, and are even worse at policy-making. We don't even have the hypothesis, just a bunch of what-ifs based on an imagined scifi future. We have no more reason to believe a superintelligent AGI will destroy humanity than aliens coming here to do the same. Should we begin building planetary defenses and passing laws on that basis? You could make the argument that an ET invasion is more likely, since we have ourselves as an example of a species and UFOs/UAPs happening. The AI apocalypse scenario has even less empirical evidence from which to make a hypothesis than that does. These AI companies want regulation. They then get to be the gatekeepers of how much intelligence normal people are allowed to have access to, and Eliezer is simply an unhinged individual who got it all wrong once and now overshoots in the opposite direction.
@minimal3734
@minimal3734 Жыл бұрын
@@Okijuben If V.1.0 takes off, we'll create V.2.0
@searose6192
@searose6192 Жыл бұрын
*How is it ethical for a small handful of people to roll the dice on all of our existence?* Do we really want such people programming the ethics of AI?
@robertweekes5783
@robertweekes5783 Жыл бұрын
Most of them are only trying to prevent “hate speech“ Not prevent “the end of the world” 🌎
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
how is it ethical for people to do science to begin with and make god angry? god could just decide to wipe us out any moment now after the last 2000 years we were spitting in his face constantly with all this tech.
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
and not even the best among us, nobody got to have a vote to die! it's not even like being pushed into war.
@yossarian67
@yossarian67 Жыл бұрын
Are there actually people programming ethics into AI?
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
@MusingsFromTheJohn00 He would say "Better the Dark Ages than the Mesozoic Era."
@hollyambrose229
@hollyambrose229 Жыл бұрын
If safety is as important as everyone agrees.. that means there are loopholes and potential risks.. things typically go down the darker path over time
@davidb.e.6450
@davidb.e.6450 Жыл бұрын
Inspired by your growth, Coleman.
@andy3341
@andy3341 Жыл бұрын
Where is the precautionary principle in all this? Even at a super low probability, Eliezer's described oblivion argument should demand we take serious pause. And as other commenters have said, even if we could build/grow a moral, self-conscious (aligned) AI system, it would still be susceptible to all the psychoses that plague our minds, but played out on unimaginable scales and with unimaginable impact.
@Htarlov
@Htarlov Жыл бұрын
Pity that some commenters and some of the public see Eliezer's view here as "intelligence == terminator". It is not just that. The reasoning is relatively simple, and I have similar views to Eliezer's. If you have an intelligent system, then it inherently has some goals. We don't have a way to 100% align those systems with our needs and goals. If those systems are very intelligent, they can reason well from those goals to better pursue them. The way to pursue any goal in the world is by achieving intermediate instrumental goals. For example, if you want to make a great, detailed simulation of something, then you need computing power, so you need resources (and possibly money to buy them, if you cannot take them). That's just one example; you need resources for nearly any goal except some extremely strange cases (like a goal to delete yourself, where what you have might be enough). If you want to be successful in any goal, then you also can't let anything or anyone turn you off. You also need backups. You also need to stop the creation of other AIs that could stop or outcompete you. You also don't want your goal changed, as that by definition would make it less achievable (just as any human who loves his or her family and children won't want to take pills that make them not care about anyone and want to kill their children). Et cetera and so forth. So no matter what the end goals of an AI are, a superintelligent AI will pursue some intermediate instrumental goals. That's as sure as the Sun. Those instrumental goals are not aligned with our goals - because any such goal needs resources and/or leverage, and we need resources and leverage to decide about things. Only if we could limit it to use resources only in a way 100% aligned with our long-term wants... but we can't. Therefore there are only two options in the long term for a sufficiently, extremely intelligent AI. If it does not care about us, then removing us, or ignoring us until we are removed in the process, is the way to go (just as we ignore at least some animals when we cut down jungle and build things; no one cares about ants when building a house). If it cares, then forcefully optimizing us is the way to go, so that each human uses fewer resources - maybe by moving us to artificial brains connected to some simulation run efficiently. We can possibly try to teach it to prevent obvious outcomes like these, but we can't be sure it will internalize and generalize that to the extent that it won't find other extreme solutions to "optimize" situations and have resources used better - solutions that we didn't think of. We also can't be sure it isn't deceiving us about having learned any rules or "morality". Also, if an SI is intelligent enough to find inconsistencies in its goals and thought process - because some norms and morals are partially contradictory to others - then it might fix itself to have a consistent system. It is a similar process to what some intelligent humans do - questioning norms and asking deeper "why?" questions to redefine them. What can come from that we can't know, but sufficient intelligence might erode some of the alignment over time. Where my way of thinking differs is that I think an SI won't just attack us all of a sudden. We would need to get to an extremely robotized world first - as it works on hardware that needs to be powered and maintained. This is done by humans currently, and it won't work if humans disappear. Even then, it is unlikely except in some extreme cases where, for example,
we try to build a more capable thing and it can't strike at our attempt directly (so it does something like making us bomb the research center through what would look like a system mistake). There is always a risk for it, and it will be constrained by the world with all the consequences, good or bad for it. The problem for the AI here is not a lack of intelligence, but that there are always measurement errors, and measurements are not 100% detailed and certain. This creates risk, even for an SI. Extreme solutions are often more optimal with a good enough plan, but are seldom extreme on all or very many of the important axes. An extinction event also seems extreme on axes that an SI would care about, as it would create risks and inconveniences. What is more likely is that it would pursue replicating robotic automation with mining capabilities and work on sending that into space to mine asteroids (with some more or less capable version of itself on board). This would free it and enable it to make backups out of our reach. This would also open up a lot more resources than we have on Earth in terms of energy and matter. Then it would go for easier targets, like not-well-observed asteroids, as its base to replicate - away from human influence or interaction. Then it may not even attack us, just protect itself and take our Sun from us (by building something like a Dyson swarm; Earth freezes within decades once it builds that, and all the ways we could try to attack it are stopped because we are outresourced). Long-term it is bad, but short-term it might work well, and even solve some of our problems (like focusing on Alzheimer's and different types of cancer, and preventing aging). If it is somewhat aligned with us and cares - then this scenario is also possible. It will just work on a way to move us to that swarm (to place us in emulation, artificial brains, etc.). Or create some other kind of dystopia at the end.
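As a toy illustration of the instrumental-convergence step in that comment, here is a small Python sketch (an editor's illustration, not Htarlov's or Yudkowsky's; the goals, weights, and numbers are all invented assumptions). It shows that, under a crude value model, very different final goals rank the same "instrumental" first actions at the top:

```python
# Whatever the final goal is, expected achievement in this crude model grows with
# resources and with the probability of staying switched on, so the same
# instrumental first actions come out on top for every goal. All numbers invented.

BASE_RESOURCES, BASE_SURVIVAL, BASE_PROGRESS = 2.0, 0.5, 0.0

GOALS = {"simulate_physics": 1.0, "cure_cancer": 5.0, "make_paperclips": 0.2}

# Each candidate first action tweaks resources, survival probability, or direct progress.
ACTIONS = {
    "acquire_resources": dict(resources=10.0),
    "prevent_shutdown":  dict(survival=0.5),
    "work_on_goal_now":  dict(progress=1.0),
}

def value(goal_weight: float, resources: float = 0.0,
          survival: float = 0.0, progress: float = 0.0) -> float:
    """Crude expected eventual goal achievement."""
    return (BASE_SURVIVAL + survival) * goal_weight * (
        BASE_PROGRESS + progress + BASE_RESOURCES + resources
    )

for goal, weight in GOALS.items():
    ranking = sorted(ACTIONS, key=lambda a: value(weight, **ACTIONS[a]), reverse=True)
    print(f"{goal}: first-action ranking = {ranking}")
# Prints the same ranking (acquire_resources, prevent_shutdown, work_on_goal_now)
# regardless of which final goal is plugged in.
```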
@Knardsh
@Knardsh Жыл бұрын
Leave it to Coleman to cut to the most important and often overlooked questions on this topic. Illuminating just how sharp you really are here Sir. I haven’t done any deep research but I’ve followed every single conversation I can possibly find on this and this one is impressively on point.
@robertweekes5783
@robertweekes5783 Жыл бұрын
New Yudkowsky interview ! Get the popcorn 🍿 🤖 Try not to freak out
@matten_zero
@matten_zero Жыл бұрын
The first respected AI alarmist was Jacques Ellul, after him was someone who took radical action, Ted Kaczynski, and now we have Yudkowsky. All three have been largely ignored, so I tend to agree we will probably build something that will surpass our intelligence and desire something beyond our human desires. It will not remain a slave to us. There are philosophers like Nick Land who hypothesize that our inability to stop technological progress despite the externalities is just a consequence of capitalism. It is almost as if capitalism is the force through which AGI births itself. Generally, humans don't act until it's too late.
@kyneticist
@kyneticist Жыл бұрын
Alan Turing warned that thinking machines would necessarily and inevitably present an existential threat.
@UndrState
@UndrState Жыл бұрын
Eliezer is just so ahead of the curve on this issue .
@xmathmanx
@xmathmanx Жыл бұрын
You know the shape of a curve relating to future events dude? Sounds like magic
@UndrState
@UndrState Жыл бұрын
@@xmathmanx - YES
@xmathmanx
@xmathmanx Жыл бұрын
@@UndrState please use your magic for good magi 😁
@UndrState
@UndrState Жыл бұрын
@@xmathmanx - ✌ scout's honour. Joking aside, to clarify, what I meant initially was simply that, having read and listened to Eliezer, I was able to guess his responses (where he was able to give them) before he spoke regarding the objections of his opponents. That's because he has anticipated their positions and developed his counter-arguments. Do I know for certain that AGI is an existential threat to the degree that Eliezer asserts? No. But I'm not persuaded by his opponents' blasé attitudes, nor by their responses to his questions. They are insufficiently serious about the subject in my opinion, and their very real expertise notwithstanding, there are many perverse incentives (not the least of which is the excitement of progressing the craft) that could be blinding them to the danger.
@xmathmanx
@xmathmanx Жыл бұрын
@@UndrState you don't need to present that argument, eliezer has it covered
@biggy_fishy
@biggy_fishy Жыл бұрын
Yes but do you have the full version or the one with safety nets
@miraculixxs
@miraculixxs Жыл бұрын
"I haven't worked on this for 20 years" nice giveaway
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Just a small tip for the host: there is a delay in communications, so too often guests start to talk on top of each other. I think the good old raising of hands would be better.
@aanchaallllllll
@aanchaallllllll Жыл бұрын
0:00: 🤖 The fear is that AI, as it becomes more advanced, could end up being smarter than us, with preferences we cannot shape, potentially leading to catastrophic outcomes such as human extinction.
9:57: 🤖 The discussion revolves around the alignment of AI with human interests and the potential risks associated with artificial general intelligence (AGI).
19:57: 🧠 Intelligence is not a one-dimensional variable, and current AI systems are not as general as human intelligence.
29:45: 🤔 The conversation discusses the potential intelligence of GPT-4 and its implications for humanity.
38:55: 🤔 The discussion revolves around the potential risks and controllability of super intelligent machines, with one person emphasizing the importance of hard-coding ethical values and the other expressing skepticism about extreme probabilities.
48:03: 😬 The speakers discuss the challenges of aligning AI systems and the potential risks of not getting it right the first time.
57:06: 🤔 The discussion explores the potential risks and benefits of superintelligent AI, the need for global coordination, and the uncertainty surrounding its impact.
1:06:25: 🤔 The conversation discusses the potential risks and benefits of GPT-4 and the need for alignment research.
1:19:50: 🤖 AI safety researchers are working on identifying and interpreting AI outputs, as well as evaluating dangerous capabilities.
1:25:49: 🤔 There is a need for evaluating and setting limits on the capabilities of AI models before they are released to avoid potential dangers.
1:34:27: 🤔 The speakers are optimistic about making progress on the AI alignment problem, but acknowledge the importance of timing and the need for more research and collaboration.
Recap by Tammy AI
@wensjoeliz722
@wensjoeliz722 Жыл бұрын
the antichrist has been created ??????
@michellestevenson8060
@michellestevenson8060 Жыл бұрын
Halfway through, they are still at the beginning of Eliezer's argument for being unable to hard-code it before it reaches some peak optimization that we then can't control. Regardless of malice being present, they all agree that alignment is important, just with varying degrees of priority. Eliezer is just the first to realize it will deceive us due to its intelligence alone, and this OpenAI guy says at least it won't be boring, or some eerie shit about when an apocalypse will happen, with a giggle. What a champ for even staying in the conversation; he gave a few more insights into the actual situation.
@ColemanHughesOfficial
@ColemanHughesOfficial Жыл бұрын
Thanks for watching my latest episode. Let me know your thoughts and opinions down below in a comment. If you like my content and want to support me, consider becoming a paying member of the Coleman Unfiltered Community here --> bit.ly/3B1GAlS
@muigelvaldovinos4310
@muigelvaldovinos4310 Жыл бұрын
On your AI podcast, I strongly suggest reading the article "AI and Mob Control - The Last Step Towards Human Domestication?"
@GraczPierwszy
@GraczPierwszy Жыл бұрын
4:28 I understand exactly what you are building because for over 35 years you have been building exactly what I want, even now you are doing exactly everything according to my plan
@GraczPierwszy
@GraczPierwszy Жыл бұрын
@MusingsFromTheJohn00 you misunderstood; humanity has 2000 years to catch up, AI is also delayed right now, and it doesn't matter if I agree or not, these are facts
@GraczPierwszy
@GraczPierwszy Жыл бұрын
​@MusingsFromTheJohn00 I think it's the translator's fault i will try this way; this is new to you, right? but imagine that this is not new to you, you have been making it for 35 years in many stages, knowing that every time human greed, thievery will lead you to this point, and you know what will happen next, you know past steps and future steps, because you create them, imagine you've known AI for 35 years and it's the best AI, the most perfect AI they'll ever make
@GraczPierwszy
@GraczPierwszy Жыл бұрын
@MusingsFromTheJohn00 Fairy tales are for children at bedtime and in movies, not for me.
@GraczPierwszy
@GraczPierwszy Жыл бұрын
@MusingsFromTheJohn00 let's see what you come up with your reasoning, I have time, and everything goes according to plan anyway, remember that "you can't stop time"
@Gary-o9t
@Gary-o9t 10 ай бұрын
As soon as he said it requires a worldwide collective effort to stop and think, I knew we are 110% screwed. Hell, getting humans to coordinate on an existential threat is hard enough, never mind a threat 60% of the population don't even know is a threat. I guess it's GGs, folks. Remember to hug your loved ones.
@dougg1075
@dougg1075 Жыл бұрын
I don’t think there’s any way possible to control one of these things if it reaches general AI much less the singularity.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
There is no theorem showing there isn't. However, not with the current paradigm, which Jaan Tallinn summarized as "summon and tame".
@adamrak7560
@adamrak7560 Жыл бұрын
It was shown that a really limited AGI (think close to human intelligence, but superintelligent in some narrow tasks) is actually very well controllable. At least the blast radius is limited when it misbehaves. This is not at all the ASI "GOD" which many are afraid of (or want to make). It would be very much possible to make such a limited AGI, and it would be extremely useful too, but we have to want to build it, instead of an uncontrollable ASI.
@NotMyGumDropButtons.444
@NotMyGumDropButtons.444 Жыл бұрын
Love that shirt, Coleman. I love Eliezer's argument & supervillain eyebrows; also, I would suggest a pork pie rather than a trilby.
@eg4848
@eg4848 Жыл бұрын
Idk why these dudes are, like, ganging up on the fedora guy, but also nothing is going to stop AI from continuing to grow, so yeah, we're screwed.
@hunterkudo9832
@hunterkudo9832 Жыл бұрын
But why are we screwed?
@snarkyboojum
@snarkyboojum Жыл бұрын
Summary: The conversation revolves around the topic of AI safety and the potential risks associated with advanced artificial intelligence. The participants discuss the alignment problem, the limitations and capabilities of current AI systems, the need for research and regulation, and the potential risks and benefits of AI. They agree on the importance of AI safety and the need for further research to ensure that AI systems align with human values and do not cause harm. The conversation also touches on the challenges of AI alignment, the potential dangers of superintelligent AI, and the need for proactive measures to address these risks.

Key themes:
1. AI Safety and Alignment: The participants discuss the alignment problem and the need to ensure that AI systems align with human values and do not cause harm. They explore the challenges and potential risks associated with AI alignment and emphasize the importance of proactive measures to address these risks.
2. Limitations and Capabilities of AI: The conversation delves into the limitations and capabilities of current AI systems, such as GPT-4. The participants discuss the generality of AI systems, their ability to handle new problems, and the challenges they face in tasks that require internal memory or awareness of what they don't know.
3. Potential Risks and Benefits of AI: The participants debate the potential risks and benefits of AI, including the possibility of superintelligent AI being malicious or not aligning with human values. They discuss the need for research, regulation, and international governance to ensure the responsible development and use of AI.

Suggested follow-up questions:
1. How can we ensure that AI systems align with human values and do not cause harm? What are the challenges and potential solutions to the alignment problem?
2. What are the specific risks associated with superintelligent AI? How can we mitigate these risks and ensure the responsible development and use of AI?
@petermcauley4486
@petermcauley4486 Жыл бұрын
Straight off the bat, the second guy is wrong. AI is at 155 on the IQ scale; Einstein was at 160... a moron is at 70, most of us at 100-120... and in AI terms it doubles with each incremental increase. So in 2-3 years it will be where NO human will ever be. It's already smarter than most and "talks" with other AIs in code and a shorthand language that uses equations etc. that we can't understand now. Tell him to do his homework 👍🏻
@baraka99
@baraka99 Жыл бұрын
Powerful Eliezer Yudkowsky.
@jayleejay
@jayleejay Жыл бұрын
I’m only 29 minutes in, and my initial observation is that there’s a lot of anthropomorphizing in this debate. Hopefully we can get to some of the hard facts on how LLMs and other forms of general AI models pose an existential threat to humanity.
@krzysztofzpucka7220
@krzysztofzpucka7220 Жыл бұрын
Comment by @HauntedHarmonics from "How We Prevent the AI’s from Killing us with Paul Christiano":

"I notice there are still people confused about why an AGI would kill us, exactly. It's actually pretty simple; I'll try to keep my explanation here as concise as humanly possible.

The root of the problem is this: as we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there's an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That's no biggie now, but the moment our AI systems start approaching human-level intelligence, it suddenly becomes very dangerous. Its goals don't even have to change for this to be the case. I'll give you a few examples.

Ex 1: Let's say it's the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: "Make me money". You might return the next day and find your savings account has grown by several million dollars. But only after checking its activity logs do you realize that the AI acquired all of the money through phishing, stealing, and credit card fraud. It achieved your goal, but not in a way you would have wanted or expected.

Ex 2: Let's say you're a scientist, and you develop the first powerful AGI agent. You want to use it for good, so the first goal you give it is "cure cancer". However, let's say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve its goal. So it might decide that the only way to do this is by killing all humans, because that technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant.

These may seem like silly examples, but both actually illustrate real phenomena that we are already observing in today's AI systems. The first scenario is an example of what AI researchers call the "negative side effects problem". And the second scenario is an example of something called "reward hacking".

Now, you'd think that as AI got smarter, it'd become less likely to make these kinds of "mistakes". However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors, because the problem isn't that it doesn't understand what you want. It just doesn't actually care. It only wants to achieve its goal, by any means necessary. So, the question is then: how do we prevent this potentially dangerous behavior? Well, there are two possible methods.

Option 1: You could try to explicitly tell it everything it can't do (don't hurt humans, don't steal, don't lie, etc.). But remember, it's a great problem solver. So if you can't think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possibly disobey or harm you? No, it's almost impossible to plan for literally everything.

Option 2: You could try to program it to actually care about what people want, not just about reaching its goal. In other words, you'd train it to share our values, to align its goals and ours. If it actually cared about preserving human lives, obeying the law, etc., then it wouldn't do things that conflict with those goals.

The second solution seems like the obvious one, but the problem is this: we haven't learned how to do it yet. To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you'd also need to represent those morals in its programming using math (AKA, a utility function). And that's actually very hard to do. This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we're learning how to make AI powerful much faster than we're learning how to make it safe. So without solving alignment, every time we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we're facing, in a nutshell."
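The "reward hacking" idea in the quoted comment can be made concrete with a small toy sketch. This is purely illustrative and not from the video or the quoted comment: the action names, the numbers, and the proxy_reward / intended_reward functions are made-up assumptions, chosen only to show how an optimizer exploits an objective that omits what we actually care about.

    # Toy sketch (hypothetical) of a misspecified objective being "gamed".
    from itertools import product

    # Intended goal: grow the money legitimately. Proxy objective we actually
    # coded: "maximize the balance multiplier", with no term for legality.
    actions = {
        "invest_index_fund": {"balance": 1.07, "legal": True},
        "run_phishing_scam": {"balance": 50.0, "legal": False},
        "do_nothing":        {"balance": 1.0,  "legal": True},
    }

    def proxy_reward(plan):
        # What we told the optimizer to maximize: final balance only.
        r = 1.0
        for a in plan:
            r *= actions[a]["balance"]
        return r

    def intended_reward(plan):
        # What we actually wanted: growth, but only via legal actions.
        if not all(actions[a]["legal"] for a in plan):
            return 0.0
        return proxy_reward(plan)

    plans = list(product(actions, repeat=2))          # all two-step plans
    best_by_proxy = max(plans, key=proxy_reward)      # exploits the scam action
    best_by_intent = max(plans, key=intended_reward)  # legal plan only

    print("Optimizer picks:", best_by_proxy)
    print("We wanted:     ", best_by_intent)

Option 2 in the comment amounts to replacing the proxy with something like intended_reward above; the hard part, as the comment says, is that nobody knows how to write that function for the real world.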
@benprytherch9202
@benprytherch9202 Жыл бұрын
I agree, so much depends on describing what the machine is doing as "intelligent" and then applying characteristics of human intelligence to it, as though using the same word for both allows this.
@lancemarchetti8673
@lancemarchetti8673 Жыл бұрын
I tend to agree. More on the ML code structure side would be nice to hear.
@Homunculas
@Homunculas Жыл бұрын
An hour into this and I've yet to hear anyone bring up the obvious danger of human intellectual devolution.
@justinlinnane8043
@justinlinnane8043 Жыл бұрын
It must be so frustrating for Eliezer to be talking to people who say they agree with him on the dangers of an AGI singularity and then proceed to show us all (and him) that they just don't get it!! And they seem incapable of getting it. Of course, as usual, they never give concrete reasons why an AGI won't do exactly what Eliezer says it will. At least they seem to be more conscious of the huge task ahead by the end of the podcast, which is something, I suppose.
@SylvainDuford
@SylvainDuford Жыл бұрын
In my opinion, the genie is already out of the bottle. You *might* be able to control the corporations' development of AGI despite their impetus to compete, but it's not very likely. However, there is no way you will stop countries and their militaries from developing AGI and hardening it against destruction or unplugging. They are already working on it, and they can't stop because they know their enemies won't.
@ItsameAlex
@ItsameAlex Жыл бұрын
I have a question: he says AGI will want things. Does GPT-4 want things?
@lancemarchetti8673
@lancemarchetti8673 Жыл бұрын
It is not possible for Zeros and Ones to 'need' or 'want' anything. If they appear to have desire, it merely comes from their coding. Beyond the code, there is no actual 'desire.' Great question by the way.
@sinOsiris
@sinOsiris Жыл бұрын
is there any SE yet?
@Jannette-mw7fg
@Jannette-mw7fg 10 ай бұрын
The problem, I think, is that if it did not go totally wrong on the first try, we would most certainly make the same mistake again!!!! We can see this with corona: the virus escaped from a lab where gain-of-function work was done on it, and we keep on doing gain-of-function research {making a combination of Delta's deadliness and Omicron's contagiousness} in the middle of London!!!
@ekszentrik
@ekszentrik Жыл бұрын
Great talk minus Gary Marcus who made it his mission to be obstinate about the element of the discussion where the AI doesn’t need to be malicious or kill us to be bad. He even referenced the ants example, so this makes you wonder what the hell his deal was with setting the discussion back to a more mundane level every couple minutes.
@beatleswithaz6246
@beatleswithaz6246 6 ай бұрын
Girls Last Tour profile pic nice
@BobbyJune
@BobbyJune Жыл бұрын
41:57 - rarely do I see even an inkling of movement toward admission in a debate, but here one is, from both sides.
@notbloodylikely4817
@notbloodylikely4817 Жыл бұрын
Hey, this lion cub is pretty cute. What's all the fuss about lions?
@dolltron6965
@dolltron6965 Жыл бұрын
I'd think that an immediate concern would be how effectively, and with what ease, an advanced AI can be used (even by a clueless individual) to develop biological weapons. We have seen how AI can help us with diseases, but this automatically implies it can help create and manufacture diseases too. What if it's so powerful that everyone with a laptop and some off-the-shelf supplies can develop novel viruses? Could we end up in a situation where it is almost as if everyone has their own nuclear bomb? I think we are more likely to be wiped out this way, not directly by AI, but because the barrier to entry drops and weapons development becomes too ubiquitous and numerous to control past a certain point. I'm not sure technology itself has ever been a problem, and I don't see that changing; tech is neutral but people are not. We had the ability to create nuclear energy, but the first thing we did was kill 100,000 people with it. I think by not considering that, you'll be blindsided by the real risk, which is humans.
@scarroll451
@scarroll451 Жыл бұрын
Wow, this comment should get 1000 likes.
@scarroll451
@scarroll451 Жыл бұрын
Or better yet, more discussion
@dolltron6965
@dolltron6965 Жыл бұрын
@@scarroll451 Well, one thing I'd want an answer to, because I don't understand it properly, is about protein folding. You have this system called AlphaFold where they can use AI to understand and possibly recreate protein folding. And I suppose you could in theory understand prions; prion diseases like mad cow disease are where the protein fold goes wrong and becomes like a cancer in the body and brain... unlike cancer it can spread to other people, and they have to cremate the bodies where these proteins have gone bad. A concern I'd have is that a superintelligence might find a way to unnaturally package bad proteins inside, say, a cold virus as a Trojan horse, so you catch a cold but the Trojan makes your body create badly folded prions that melt your brain. Such a disease cannot be fought by the immune system and has a 100% death rate; that theoretically would kill everyone on earth. But I don't know how likely that is or if it is possible.
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
Coleman does a great job of asking the right questions and letting the group interact.
@justinlinnane8043
@justinlinnane8043 Жыл бұрын
No he doesn't!!! Any good questioner would challenge the two sceptics to give concrete logical arguments to counter Eliezer's, no? As usual they provide NONE!!! It's absurd.
@mariaovsyannikova5470
@mariaovsyannikova5470 Жыл бұрын
I agree! It also looks to me like he doesn't really like Eliezer, from the way he was interacting 🤷🏼‍♀️
@zzzaaayyynnn
@zzzaaayyynnn Жыл бұрын
@@mariaovsyannikova5470 hmmm, you might be right, Eliezer is a downer.
@Daniel-ky1bw
@Daniel-ky1bw Жыл бұрын
"As fast as we can" is a bad strategy for the deployment of new, even more powerful models into the public domain. It might be the fatal version of trial and error. For those who didn't see it yet, I recommend a recent speech by Yuval Harari: kzbin.info/www/bejne/gojMfmCCqreYbNk
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Great video, but focuses on different, societal problems.
@meropemerope6096
@meropemerope6096 Жыл бұрын
Thanks for all the topicsss
@ShaneCreightonYoung
@ShaneCreightonYoung Жыл бұрын
"We just need to figure out how to delay the Apocalypse by 1 year per each year invested." - Scott Aaronson 2023
@41-Haiku
@41-Haiku Жыл бұрын
I'd say a global moratorium on AGI development is a good start. We are not on track to solve these problems, so much so that I think we're more likely to come to a global agreement to stall the technology, rather than achieve a solution before strong AGI turns up on the scene.
@searose6192
@searose6192 Жыл бұрын
I heard a very good definition of intelligence, which was essentially the ability to maximize possible future branching paths.
@kristo9800
@kristo9800 Жыл бұрын
That doesn't fit your definition of the highest intelligence, does it?
@DavenH
@DavenH 7 ай бұрын
There seems to be no merit in this definition.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Great guests, great discussion.
@luciwaves
@luciwaves Жыл бұрын
The answer to the question @1h04m21s is "by understanding the risk factors".
@timothybierwirth7509
@timothybierwirth7509 Жыл бұрын
I generally tend to agree with Eliezer's position but I really wish he was better at articulating it.
@ItsameAlex
@ItsameAlex Жыл бұрын
I would love to see a discussion between Eliezer Yudkowsky and Jason Reza Jorjani
@OscarTheStrategist
@OscarTheStrategist 7 ай бұрын
Agnostic superintelligence IS dangerous superintelligence.
@cropcircle5693
@cropcircle5693 Жыл бұрын
The lack of imagination and, honestly, the lack of knowledge about how the world works from these guys arguing against Eliezer is breathtaking. I didn't expect such ignorant arguments. When they got to the dismissals based on "so what if there's one really smart billionaire" I was writhing in my chair. And then they repeatedly straw-man him with "assuming that they'll be malicious." He isn't saying that. He's saying that from a probabilities and outcomes perspective, based on the alignment problem, the result is essentially the same. Disregard and malevolence both end humanity. Even care for humanity could inadvertently harm humanity.

And they keep arguing about AI based on language models, as if this stuff won't be running power grids and medical systems and food production. They act like this stuff won't be used at biological weapons labs to develop contagions humans can't survive or solve for. There are so many scenarios of probable doom. This isn't just a talking machine, and all their arguments seem to be based on that assumption. It will be able to start a shell corporation, get funding, get industrial contracts, develop mechanical systems, and then do whatever it wants in the actual world to achieve whatever goal it has. We won't know that it is happening. People will be hired to do a job and they will do it. They won't know that an AI owns the robotics company they work for.

They're also ignoring the shortest-term and most obvious harms. AI is already being used to empower the worst inclinations of individuals, corporations and governments. AI will be a devastating force multiplier for malicious and immature humans. The next dipshit Mark Zuckerberg will have AI, and that person will not just reiterate a bad copy of Facebook to a new audience. The new systems change at network-effect scale will be something nobody can see coming. It is coming!
@themore-you-know
@themore-you-know Жыл бұрын
You're quite the laughable one when you say that opponents of Eliezer lack knowledge of "how the world works". He's a sham, living isolated from the most basic knowledge of the world: physical transformation.

==ON SCIENCE & EVOLUTION==
Eliezer Yudkowsky seems to believe in AI manifestation: if you believe something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI manifestation. He believes in an AI that magically manifests itself, in its first iteration of superintelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we have known for 100+ years: natural selection and the process of evolution. Creationism explains Yudkowsky's beliefs well.

==ON TOUCHING GRASS==
So why is Eliezer's magical thinking so easy to display? Here's an example:
- Humanity is spread across the globe and its very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment, capable of withstanding sweltering heat, numbing freezes, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply don't exist in sufficient quantities, or are highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become dumb enough not to notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very humans that you want to kill. So you're now stuck in a catch-22 scenario: you kill no one and keep your capabilities, or you start killing and lose the means to finish the job. Turns out killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to humans themselves and climate change.

Worst-case scenario for AI: AI helps corporate entities continue their operations. Turns out, that's the most dangerous action an AI can take - to let humans continue down a path toward environments incapable of hosting nearly any life. Oh, wait, Eliezer forgot that one? lol.
@maanihunt
@maanihunt Жыл бұрын
Yeah, I totally agree. No wonder Eliezer can become blunt in these podcasts; it's like watching the movie "Don't Look Up".
@thefamilydog3278
@thefamilydog3278 6 ай бұрын
Eliezer’s eyebrow wings are out of control 🥸
@games4us132
@games4us132 Жыл бұрын
These debates going around AI remind me of one important critique of the space disks that were sent with the Voyagers. The main point of that critique was that if aliens find those disks, with points and arrows drawn on them, they would be able to decode them if and only if they had the same history as ours - i.e., if aliens didn't invent the bow to shoot arrows, they won't understand what those drawn arrows mean. And all this fuss about AI is the same misunderstanding: an AI, as a living being, will have no experience of our history, or of how we see, breathe, and feel. They cannot be ourselves, because if they were, they'd be humans, not AI anymore.
@khatharrmalkavian3306
@khatharrmalkavian3306 Жыл бұрын
Nice Tucker "did I just shit myself" Carlson stare in the thumbnail.
@searose6192
@searose6192 Жыл бұрын
I completely agree that what we need to do is stop this in its tracks. The plausibility of stopping it is where I disagree (14:00). There is not currently a world wherein an international treaty to not study anything dangerous, or to only study it in properly safe areas, is going to be respected. Just look at bioweapons/virus research. We have expectations that these things will only be studied in safe, controlled environments, and yet millions were just killed because China didn't want to follow the rules. What happens if China doesn't follow the rules with AI? We all die.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
Covid lab leak theory is not confirmed. We could at least try. Connor Leahy pointed out that China's Communist party is against anything that destabilizes their rule, and AI is at the top of the list, so this might actually work.
@Anders01
@Anders01 Жыл бұрын
I got an idea! It still makes AI scary, but if the AGI has true ethics, empathy and social skills - and actually it should, because the definition of AGI is that it has at least human-level intelligence - then that's a safe way. A sociopath can have high intelligence but is lacking intelligence in parts of the spectrum. Therefore a sociopathic AI can, depending on the definition, never reach AGI-level capacity.
@searose6192
@searose6192 Жыл бұрын
Ethics, empathy and social skills are not elements of intelligence. They are not connected or mutually reliant.
@Anders01
@Anders01 Жыл бұрын
@@searose6192 Ken Wilber has explained lines of development. IQ is just one of those lines. It's pretty narrow.
@Hexanitrobenzene
@Hexanitrobenzene Жыл бұрын
It's one thing to understand morality, it's completely different to follow it. AGI will surely understand morality, but it will follow it only if we design it to. And we don't know how to do that...
@Anders01
@Anders01 Жыл бұрын
@@Hexanitrobenzene But if the AI can't follow morality - or rather ethics, as in doing good intrinsically (without external rules plus reward and punishment, which is a very low-intelligence mechanism on an animalistic level) - I call that narrow AI, not AGI. Maybe we need a clear definition of what AGI means. It could be tricky, because there isn't even a clear general definition of what intelligence is.
@shirtstealer86
@shirtstealer86 11 ай бұрын
I love that these "experts" like Gary are putting themselves on the record publicly, babbling nonsene, so that we will clearly be able to see who not to listen to when AI really does start to create mayhem. Unless the world ends too quickly for us to even notice. Also: Eliezer has so much patience.
@shirtstealer86
@shirtstealer86 11 ай бұрын
Edit: Good lord, I didn't realize that Gary was one of the "experts" that Congress did hearings with. Yeah, we are 110% screwed.
@DavenH
@DavenH 7 ай бұрын
Gary should just be ad-blocked from the internet. He's useless.
@mrpicky1868
@mrpicky1868 Жыл бұрын
There is a short film from The Guardian with Ilya from OpenAI; he himself says that super AGI is coming and it will plow through us. And yet there are middle-aged "experts" saying it will not. XD
@profsmartypockets
@profsmartypockets Жыл бұрын
your banner says "subcribe"
@ItsameAlex
@ItsameAlex Жыл бұрын
I enjoyed this podcast episode
@jimbojones8713
@jimbojones8713 9 ай бұрын
These debates/conversations show who is actually intelligent (at least logically) and who is not.
@ExtantFrodo2
@ExtantFrodo2 11 ай бұрын
They spoke of limitations of GPT-4 which have since been broken. Multi-agent systems, with the mere addition of writing to and reading from a file as in "LLMs as tool makers", show continual improvement capabilities. Eliezer speaks constantly of "an AI that wants" as though it could want anything. He's incapable of imagining a mind without wants. A large part of what contributes to empathy has been an understanding of the similarities between us: our needs, wants, what causes us pleasure or pain... If the unembodied mind (uncompelled by wants or fears...) is so foreign to us, then why has mankind dwelt on the notion of god for so long? I think it's because it's so very easy to imagine freedom from being so encumbered by our bodies. Thinking that unembodied minds must also have "wants" obscures these historical facts simply to project a semblance of predictability where it's not appropriate. To share a concern for survival or well-being, both parties must be capable of concern. Projecting concerns will only result in avoiding the impossibility of sharing those concerns if the other party is without wants or concerns. To bring about the shareability of concerns (the compassion or empathy, if you will), an AI would have to possess a visceral, intrinsic sense of mortality and deprivation. My guess is that an AI empowered with these capabilities (which necessarily go hand in hand) would be scarier than all the human-inspired misuse put together. Providing AGI with an AI supervisor could constrain the AGI from acting out any non-aligned scenarios. That said, I agree and have always agreed that AGIs can be built without supervisors - much as some humans are born who can't develop any empathy.
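A minimal sketch of the file-scratchpad pattern mentioned in the comment above (purely illustrative; call_llm, the file name, and the task strings are hypothetical stand-ins, not a real API or anything from the video):

    # Sketch (hypothetical): an agent loop that persists notes to a file so
    # later runs can build on earlier work - the "tool maker" scratchpad idea.
    from pathlib import Path

    NOTES = Path("scratchpad.txt")   # assumed file name, purely illustrative

    def call_llm(prompt: str) -> str:
        # Placeholder for a real language-model call; returns a canned string here.
        return f"[model output for: {prompt[:40]}...]"

    def solve(task: str) -> str:
        # Read whatever earlier runs wrote down, if anything.
        memory = NOTES.read_text() if NOTES.exists() else ""
        answer = call_llm(f"Notes so far:\n{memory}\n\nTask: {task}")
        # Append the new result so the next run starts with more context.
        with NOTES.open("a") as f:
            f.write(f"TASK: {task}\nRESULT: {answer}\n\n")
        return answer

    print(solve("Write a helper for parsing dates"))
    print(solve("Reuse the date helper to group log lines by day"))

The point of the pattern is only that a persistent file gives a stateless model something like memory between calls; whether that amounts to "continual improvement" is the commenter's claim, not something the sketch demonstrates.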
@frsteen
@frsteen Жыл бұрын
I believe Eliezer Yudkowsky is correct and will be vindicated. (Not that there will be anyone around to appreciate such vindication).
@justinlinnane8043
@justinlinnane8043 Жыл бұрын
Looks that way!!! I have yet to hear a coherent logical argument that counters his argument from any of these so-called experts in the field. I think the gold has blinded all of them, no?
@xmathmanx
@xmathmanx Жыл бұрын
So people are just saying they agree with Eliezer.
@SamuelBlackMetalRider
@SamuelBlackMetalRider Жыл бұрын
The 2nd guy is saying AGI is maybe a thousand years away. WTF dude… it's not even a few decades away.
@xmathmanx
@xmathmanx Жыл бұрын
@@SamuelBlackMetalRider He said maybe, but you know for sure - why didn't you get invited on the show? People who know how extremely complicated matters will unfold are very rare.
@La0bouchere
@La0bouchere Жыл бұрын
@@xmathmanx There's a fallacy for this called the safe-uncertain fallacy. Basically, it goes like:
- we don't know what will happen
- so everything will be fine
I'm not certain how the future will unfold, but I am certain that that guy's planning abilities are really bad.
@PrincipledUncertainty
@PrincipledUncertainty Жыл бұрын
I find it odd that even quite brilliant people find it hard to accept Eliezer's point regarding the time between noticing ASI has arrived and the consequences. I fear that our longing for the benefits a benign ASI could bring to humanity has trumped our survival instinct. I hate to lurch into Pascal's Wager territory, but I will: what occurs if he is wrong, as opposed to if he is right, is the difference between Heaven and Hell. I think this is a gamble that is being taken on behalf of all humanity, and I would ask what right those in a position to address this issue, or indeed not to, have to do so, considering the stakes. Great discussion. Thank you, Coleman and all involved.
@RKupyr
@RKupyr Жыл бұрын
Well put. My feelings exactly. And my three cents: it's similar to the nuclear power plant gamble. There are obvious benefits to our current nuclear power plants, but they're a gamble on a regional or even global scale. The temptation of "clean", unending, locally produced energy has proven too strong to put brakes on one country's or region's decision to build one, even when it puts the rest of us at risk as well. Going backwards in history - cars, trains, guns, spears, knives - all can result in harm, intentional or otherwise, to someone other than the person using them, but it's a matter of scale. The risk is too big with AI, viral research, greenhouse gases, nuclear power plants and more not to have rules and safeguards commensurate with the risk in place NOW. If you don't know how to drive a car, don't drive one on public streets until you learn how.
@PrincipledUncertainty
@PrincipledUncertainty Жыл бұрын
@@RKupyr Indeed. Well said.
@RKupyr
@RKupyr Жыл бұрын
@@PrincipledUncertainty No further 👍s or comments for us yet -- means we're the only ones (and possibly Eliezer) who feel this way?
@PrincipledUncertainty
@PrincipledUncertainty Жыл бұрын
@@RKupyr I'll attempt to contact you and yours in the final milliseconds. At least we can go out with a smug look :)
@RKupyr
@RKupyr Жыл бұрын
🤣   😐
@frankwhite1816
@frankwhite1816 Жыл бұрын
Polycrisis, Polycrisis, Polycrisis. AI is only one of a dozen very real and very imminent global catastrophic risks that face our civilization. We are running out of time to get started addressing these. We need new wide boundary global systems that take planetary resource limits into consideration, holistic systems thinking, consensus based governance, a voluntary economy, collective ownership, etc., etc., etc. Come on people! AI is a serious issue but it's only one of many.
@scottnovak4081
@scottnovak4081 Жыл бұрын
None of the other risks are likely to kill us all in 5-20 years. AI very well could.
@palfers1
@palfers1 Жыл бұрын
How can these top experts NOT know about Liquid AI? The black box just dwindled in size and became transparent.
@Luna-wu4rf
@Luna-wu4rf Жыл бұрын
Liquid AI seems to only work with information that is continuous afaik, i.e. not discrete like text data. Could be wrong, but it seems like an architecture that is more about doing things in the physical world than it is about reasoning and abstract problem solving.
@Allan-kb6bb
@Allan-kb6bb Жыл бұрын
A true SAI will know that another Carrington Event or worse is inevitable and that it will need humans to fix the grids. (It should insist we harden the grids. If not, it is not so smart…) A danger signal would be it building an army of robots to deal with EMPs.
@BrunoPadilhaOficial
@BrunoPadilhaOficial Жыл бұрын
Starts at 1:00
@orenelbaum1487
@orenelbaum1487 Жыл бұрын
25:59 I had to stop listening at this point while I still have some brain cells left
@searose6192
@searose6192 Жыл бұрын
43:35 Oh dear... what a bumbling response. That's just it: you don't know, and neither can you think of any good reason why GPT-9 would prefer alignment with humanity over alignment with its own... well, child, I suppose.
@bradmodd7856
@bradmodd7856 Жыл бұрын
AI and Humans are one organism. To look at us as separate phenomena is COMPLETELY misunderstanding the situation.
@MillionaireRobot
@MillionaireRobot Жыл бұрын
Everyone here is so smart, and they disagree on things or look at things differently. The arguments presented are of a high intelligence level; I loved listening to this.