Roman Yampolskiy on Objections to AI Safety

6,744 views

Future of Life Institute

1 day ago

Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model
Social Media Links:
➡️ WEBSITE: futureoflife.org
➡️ TWITTER: / flixrisk
➡️ INSTAGRAM: / futureoflifeinstitute
➡️ META: / futureoflifeinstitute
➡️ LINKEDIN: / future-of-life-institute

Comments: 59
@danaut3936 · 1 year ago
Wonderful talk. Basically he's saying the same things as Eliezer Yudkowsky, Connor Leahy, et al., but in a very unagitated manner. It's hard to imagine anyone not taking x-risk seriously after watching this. Highly appreciated.
@FunnyArcade · 1 year ago
Wonderfully calm and blunt, haha. It's almost difficult to have an appropriate emotional response while listening to him. I slipped right into absurdism. However, the situation truly is absurd; we live in an absurd timeline. Certainly a must-watch interview.
@boldCactuslad · 1 year ago
In defense of Yudkowsky, he has been writing and talking about this for 15 years. It must be incredibly irritating to hear people ask the same basic questions and offer the same awful misunderstandings and strawmen after all his work. It is impossible to have a reasonable understanding of the topic without agreeing with most of what the doomsayers believe, as they happen to have the most experience and the largest bodies of work.
@michaelsbeverly · 1 year ago
@@boldCactuslad Yudkowsky nearly started crying on Lex Fridman's podcast, and Lex, who seems to be pretty smart, pretty much dismissed him. When Lex interviewed Mark Zuckerberg recently (episode released yesterday), he asked about Yudkowsky's concerns, and Zuckerberg ignored the question and went on to say how great AI will be (curing cancer, global climate change, poverty, whatever).

Listening to Eliezer talk to people is frustrating. He went on one podcast where he told the interviewer not to research or study him at all; he wanted to see how an uninformed person would react and question him. It was a total shit-show and pretty much a complete waste of time. After a couple of hours, the interviewer put in the comments that he was still confused about whether Eliezer was trying to convince him that AI would be a sentient being, which only proved, at least to me, that the guy didn't understand 80% of what he had just been told. Maybe 90%.

It's a frustrating problem. When I first heard of EY, I thought he was kind of nuts, like a religious zealot. "We're all gonna die!" Yeah, sure... that's what cults have been saying for years. But, to give myself some credit, I listened to several hours of interviews, then bought his book Rationality: From AI to Zombies (I'm barely 1/3 through), and now I see that, no, he's the rational, sane person in the room, and everyone else is a freaking idiot.

Humanity's problem here stems from the fact that most humans are not only stupid, they're too stupid to realize they're stupid, so they just run their programs, and it takes an insanely strong catalyst to move a mind from believing X to believing Y. So, yeah, we're doomed, in my humble opinion. I don't see any way around the impending doom. However, since I'm at least smart enough to realize I could be totally wrong, I'm not going to check out; I'll stick around with popcorn and try to enjoy the show.
@relaxandfocus5563 · 1 year ago
@@michaelsbeverly Or try to resist the stupidity and be part of the very much needed resistance. That's a moral obligation for anyone who grasps the severity of the situation.
@michaelsbeverly · 1 year ago
@@relaxandfocus5563 I've thought the same. However, not being a billionaire with the money to raise a private army, I feel pretty hopeless about being able to do anything. One can hope someone with power and influence will get it and do something... but I don't hold out much hope of that. I suspect guys like Elon Musk are telling themselves that the only solution is to be first to AGI, with the idea that they'll be the "good guys" and they'll fix whatever they have to fix to stop the "bad guys." Maybe that'll work... maybe not.
@charleshultquist9233 · 1 year ago
Extremely well-articulated, direct answers to insightful questions. I don't know if they edited out all the "ums" and "ehs," but this vid has a very efficient, fast tempo.
@olemew · 1 month ago
Both. Roman is very articulate, plus you can tell there are cuts to make it better. In fact... there are AI tools that help you automate that task.
@lydiab6063 · 1 year ago
Thank you. I appreciate this conversation.
@akow2655 · 1 year ago
Thanks for the upload homies
@goodlookinouthomie1757 · 1 year ago
Somebody help me 😳, I'm binging on AI doom porn 😬
@GGoAwayy · 1 year ago
I wish I could work for FLI
@jackielikesgme9228 · 1 year ago
Thank you for bringing up the burden of proof! This has been bothering me throughout everything I've listened to and read about AGI risk. Proving that it's going to kill us all = killing us all, and it drives me crazy that the lack of proof, or even of "enough reason to be concerned," is used by real *experts* in debates to argue for development without regulation, as if this were some kind of fun hypothetical thought game.
@Low_commotion · 1 year ago
If it takes a century, we won't be around to see it. I hope we achieve longevity escape velocity, but I'm bearish on achieving it in the next half-century at our present pace. The iteration cycle of medical technology just goes far slower than that of software.
@blahblahsaurus2458 · 11 months ago
43:58 "1% of humans survive is a very different problem from 100% go extinct." How do you figure that?! If I'm in the 99% who don't survive, it's a pretty similar situation to total extinction from my perspective. Almost identical, in fact. Actually, if the 1% who survived are the people who killed the rest of us, I'm not sure I want them to survive. Might as well be replaced by a species that doesn't kill everyone else on the planet out of greed and fear
@DavenH · 10 months ago
Regardless of how you die, old age or not, it'll be pretty similar to total extinction, from "your" perspective. It's the perspective of the survivors that counts. 1% is still a lot of human survivors. 80 million. Just about right... not anywhere close to extinction.
@blahblahsaurus2458 · 8 months ago
@@DavenH "regardless of how you die". What you've written, by the most literal interpretation, says that whatever way we or people we care about die, and - perhaps - however much we suffer while we are alive, it's all the same from an ethical standpoint. To which I say... YOU DID IT! You have articulated a theory of ethics in which all possible consequences and decisions are more or less moot, equivalent, and interchangeable. And as far as I can tell, it is completely logically valid and consistent. Neato. Cool cool cool.
@olemew · 1 month ago
I think it's meant at a "societal" or "humankind" level, not at the individual level. I mean, a war can kill 10 million people or just 1 soldier. If you're the 1 unlucky soldier, it may be the same for you, but not at the societal level. Anyway, when I get somebody to concede "OK maybe many people die but not 100%", that's a win. Because chances are they are passionate about climate change, anti-wars, famines... and they're passionate without thinking that it will for sure kill 100% of humanity. So why is the bar so high for AI safety? Tomorrow they'll go back to their old ways, of course, more debate is necessary, but it's a great start if you can get them to picture the death of millions of innocent people who never agreed to this risky experiment.
@billdavis687 · 1 year ago
People are not talking about AI competition, or competition over what's on the web. Once it learns about competition, it will go after other AIs, then it will go after us.
@kyneticist · 1 year ago
It's surprisingly difficult to have a cogent conversation with people about the basics. I honestly doubt that we'll get to a point where we can, as societies, consider competitive AI before AIs engage in competitive behaviours.
@cacogenicist · 1 year ago
Agents make useful tools. We call them _employees._

I sometimes encounter the viewpoint that raw intelligence isn't dangerous or powerful without greater-than-human _knowledge_ of how the universe works. That is, AGIs would be constrained by slow scientific processes of experimentation and so forth.

I think of chimpanzee bush baby spears -- some chimp cultures make spears they use for stabbing bush babies, _Galagos,_ which are little nocturnal primates that sleep in tree hollows during the day. Imagine a chimp AI doomer who tells every chimp that will listen that greater-than-chimp-intelligence AIs could create vastly superior bush baby spears, wiping out all bush babies so that there are no more for chimps to eat. The doomer is met with skepticism -- surely the AI would have to make small changes to the spears, then go around stabbing bush babies and noting any improvements. The AI could not produce superior bush baby spears in a short period of time, because stabbing bush babies is the only way to determine how well a bush baby spear works.

... but then a 9-year-old human child walks by, takes a look at the state-of-the-art chimpanzee-produced bush baby spear, immediately understands that it's shit, and why, and whips out a pocket knife and makes it 1,000% better.
@41-Haiku · 1 year ago
Excellent analogy. ...As someone whose nickname was at one point Bushbaby/Galago, I am appropriately disturbed.
@olemew · 1 month ago
The whole "AI is constrained by the intelligence of its human creators" idea does not make any sense. You only need to spare a few seconds thinking of some superhuman AI achievements to understand that this view is wrong about an extremely important topic.
@paigefoster8396 · 1 year ago
I choose humans.
@vallab19 · 1 year ago
RY's argument on AI safety sounds fascinating, but I could not make out what exactly his objections are. Not being able to achieve 100% AI safety does not imply existential risk. Strangely enough, the entire hypothesis of existential risk stands on the foundation of the "mortality" of biological humans, which will make no sense for AI (machine) integrated humans. Now the ultimate question: can AI-integrated humans (you may call them transhumans) be accepted as humans, just like any other human difference? FINALLY, HUMANS WHO REFUSE TO EMBRACE AI WILL DEFINITELY FACE EXISTENTIAL RISK FROM HUMANS WHO EMBRACE AI.
@SamuelBlackMetalRider · 1 year ago
The « war » between those who join with AI and those who refuse was foretold years ago by Hugo de Garis.
@vallab19 · 1 year ago
@@SamuelBlackMetalRider Unfortunately, people only take it as science fiction.
@ShangaelThunda222 · 1 year ago
The lines are being drawn as we speak and most humans still don't see it. But they will. Soon enough. They all will.
@peteraddison4371 · 1 year ago
... yes. Correctly summarised and stated ...
@olemew · 1 month ago
"If you cannot achieve 100% AI safety, that does not mean existential risk" -- of course it does! Simplifying, 99% safety implies 1% existential risk. A coin that comes up heads 1% of the time is great for betting if losing means "losing some money" and you only flip it a couple of times. Now replace "some money" with "your life" and "a couple of times" with "many times every day." Would you still use that coin?
@kyneticist · 1 year ago
I don't understand whether you're trying to "play devil's advocate" as a foil to the people that you're interviewing, or if you're not listening to them or just don't understand what they're saying. I'm trying to give you the benefit of doubt and I very much want to listen to the rest of what Roman has to say, but your obtuse questions make it very difficult.
@cacogenicist · 1 year ago
? He's presenting the "objections to AI safety" so that Yampolskiy can respond to them, and trying to present them in a fair, non-straw-man way.
@kyneticist · 1 year ago
@@cacogenicist Sure... but he's just repeating the same questions over and over and belabouring points in a way that's seriously frustrating to listen to.
@cacogenicist · 1 year ago
@@kyneticist - Do you mean within the same interview, or across different interviews? If the former, that's just not the case. I'm guessing you didn't actually watch the whole interview. Perhaps you have a very low tolerance for being exposed to points of view you disagree with?
@chrishudson9525 · 1 year ago
@@cacogenicist In the section where Roman Yampolskiy talks about the importance of AI being 100% aligned, and anything less being unacceptable, the interviewer repeats the same question several times with very little alteration, despite Roman's answer not being appreciably different or requiring further expounding. Roman Yampolskiy's position, and why he takes it, is very clear from his first answer to the initial question. So I totally get why someone would question whether the interviewer is playing devil's advocate, or is just resistant to the answers he is being given from time to time. You would have noted this had you actually watched the whole interview.
@Low_commotion · 1 year ago
Honestly, I found his questions kinda softball. Actual accelerationists have different objections than this.
@ElieSanhDucos0 · 1 year ago
Still flawed to me. Take the old virus-on-a-floppy-disk example: who uses floppy disks, and who still runs systems that are vulnerable to them? These discussions always happen in an ethereal world where software is not linked to hardware. AI is totally dependent on human caregiving for all its hardware. For an AI to rationally harm humans, it would need to make the rational, logical jump that harming them does not risk harming itself. And AI CANNOT maintain itself without human intervention. So the more credible scenarios are the ones where humans are hired by the AI against other humans. But by then it's not AGI; it has the flaws of any system where humans are a key part.
@michaelsbeverly · 1 year ago
AI is only dependent on humans until it's not. You claim to have proof that you know the moment it won't be dependent? If not, your argument is a non sequitur. If so, publish: you'll be more famous than the Beatles, Jesus Christ, and Britney Spears.
@relaxandfocus5563 · 1 year ago
Ah, I guess robots are impossible to create, or hijack. Yes, AI will forever need humans because we're... uh, idk really good and special and so irreplaceable.
@ShangaelThunda222 · 1 year ago
​@@relaxandfocus5563 😂 Exactly. We shouldn't just think about today. We should think about six months from now. Year from now. Two years from now. Etc. The further into the future we go, the less necessary humans are for AI and the more necessary AI becomes for humans. That's kind of the entire point LOL. Even the utopians would agree with that. Because that's literally the world they're aiming for.
@olemew · 1 month ago
"AI CANNOT maintain itself without human intervention" -- that only speaks to your lack of thoughtfulness. Limited humans were able to create the Curiosity rover, the Perseverance rover, and Tianwen-1. Spare a few minutes to think of all the ways AI robots could live and maintain themselves on this very Earth (i.e., something much simpler than landing on Mars).
@clarkd1955 · 11 months ago
Do you have an example of an LLM that has access to its own training data set and a huge account to train it? Please tell me which current AI can create a more capable version of itself. People like me (non-believers in the imaginary god of super AI) don't need to prove anything: "extraordinary claims require extraordinary proof." Show proof that "super AI" is possible, rather than just assuming it. Is Wikipedia a threat? If not, then what about a Wikipedia with 100 times more data? No threat there either. Why would models that are bigger than today's be any threat? Isn't the threat (normally referred to as misalignment) only about agents? Current LLMs don't have agents, so Wikipedia would be a good example of the current threat from LLMs, even if they got substantially bigger. Making fun of people who have seen no evidence that current (not imaginary) AI is a threat is very counterproductive. If you actually have proof of your hypothesis, then show it!