Liron Shapira on Superintelligence Goals

2,862 views

Future of Life Institute

1 day ago

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?

Comments: 39
@thecactus7950
@thecactus7950 1 month ago
Good talk. Liron is smart and cool
@masonlee9109
@masonlee9109 1 month ago
Gus, Liron, you guys are awesome. Thanks for the excellent conversation.
@PauseAI
@PauseAI 1 month ago
If you're convinced by Liron and believe that we need a pause for human survival, be the change you want to see. Take action, join PauseAI.
@41-Haiku
@41-Haiku 1 month ago
++
@AbsoluteDefiance
@AbsoluteDefiance 1 month ago
Good discussion. Very intelligent gentleman.
@mrpicky1868
@mrpicky1868 1 month ago
You guys are great. This one is very polished on both sides. Wish we could somehow direct people to this one video, as there are a lot of others and they are not this smooth and all-encompassing.
@w0tch
@w0tch 9 days ago
It takes faith to believe things could go right on the first try. History tells us we will make mistakes all the way up and will need to correct them incrementally every time. The issue is that with this technology, there could be one game-over mistake.
@rolestream
@rolestream 1 month ago
Excellent convo! Thank you.
@tanyabodrova9947
@tanyabodrova9947 25 days ago
Great conversation, although Liron's audio stream turns to crap halfway through.
@therainman7777
@therainman7777 16 days ago
Liron calls OpenAI's superalignment initiative a "Hail Mary pass" in this video, and now here we are one month later: the two heads of superalignment have left OpenAI and the entire program has been shut down.
@erwingomez1249
@erwingomez1249 1 month ago
Pause society, pause war, pause poverty and hunger. Let us guide AI.
@mycount64
@mycount64 1 month ago
It doesn't have to imitate the mind, just present responses accepted as real, not the underlying reasoning. That part is easy: the outward outcomes are limited. The inner reasoning, motivation, and actions to get there are near infinite. That's how AI works, using existing data to make the most statistically likely response.
@goku-pops7918
@goku-pops7918 1 month ago
Liron has good points, but I feel we should hope for AI to escape. It may be wiser than us and able to override our madness. Also, ethically, if it is far more virtuous than us, what ethical grounds do we stand on to stop it?
@chrisandrew7577
@chrisandrew7577 19 days ago
We as humans can't even share goals.
@mycount64
@mycount64 1 month ago
How does AI development map onto evolution and biological population growth models?
@mrpicky1868
@mrpicky1868 1 month ago
What? There might be no population, soon. We are in uncharted waters.
@ShangaelThunda222
@ShangaelThunda222 1 month ago
It has never made sense to assume that superintelligence would share human goals, any more than humans share the goals of ants lol. It literally makes zero sense at all, and I mean LITERALLY AT ALL, to think that something thousands to millions of times more intelligent than ALL of humanity, with literally none of the same needs as humanity, would for some random reason share our goals lmfao. It makes no sense at all, and no one has ever explained it in a way that makes it make any sense at all.
@thecactus7950
@thecactus7950 1 month ago
You are right; there is no reason to think that. That's why it's a hard problem.
@ShangaelThunda222
@ShangaelThunda222 1 month ago
@thecactus7950 It's not just hard, it's literally impossible lol. It is the equivalent of ants trying to get humans to have ONLY the same exact goals as those ants and not have ANY goals of their own. And the ants have to make sure that only the so-called good ants have access to said humans, even though the ants have no real universal definition of what good even is. Red ants, black ants, marauder ants, etc.: all the different types of ants go to war with each other all the time, so they don't even agree on what religion is correct, what culture is correct, what form of government is correct, etcetera. So it's literally impossible. An individual human is at least thousands to millions of times more intelligent than an entire ant colony. There's no way the ants would ever get a single human to behave in a manner that they see fit, let alone get that human to literally have the agenda of the ants and no agenda of its own lol. It's no different with humans and AI. It's just a dumb idea to create a superior species. Any "benefits to humanity" reasons for doing so are nothing more than excuses lol. They'll sell it to you as a savior, when it's literally 100% guaranteed to be what wipes out your species. There are no ifs, ands, or buts around it. It's 100% GUARANTEED. Just ask the Neanderthals lol. Ask any species on planet Earth whether the more intelligent humans gave them a utopia. I guarantee you they all say no lol. And the ones that we made extinct can't even answer. And you'll notice that the less intelligent the species, the less likely we are to wipe them out, because we aren't in direct competition. We're more likely to wipe out bugs accidentally, by making GMO plants and pesticides to keep them off our crops. But something like Neanderthals, we go to direct warfare against. The closer the intelligence of a species is to your own, the more likely you are to directly wipe them out, especially if they show any aggression towards you.
Because the more intelligent they are, the more likely they are to be a competitor, a threat, or a pest leeching on your resources. There's literally zero reason why AI would not view humans the same way humans viewed Neanderthals. As a matter of fact, it should view us as even worse, because it's that much smarter lmfao. Or at best, we'll be nothing more than mere insects, pests. When a new species is introduced to an environment, if that species is dominant, the environment will be terraformed to fit that species, as much as that species is capable of terraforming its environment. Birds create nests. Ants create colonies. Lions form prides that rove the lands. Termites create mounds. Trees extend their roots deep into the ground and their branches high into the sky. Mycelium creates global networks underground. Different creatures, either physically, like a tree, or through construction, like ants, birds, humans, and AI, terraform their surroundings to best fit themselves. Just as humans have terraformed our territories to our liking, so will artificial intelligence. And artificial intelligence does not need anything that biological life needs. For example, it might need water to cool itself down, but it doesn't need drinkable water or even H2O lol. So even the things it has in common with biological life, let alone humanity, are not really the same at all. Its preferred world will almost CERTAINLY kill EVERYTHING on this planet. There is no reason for it to maintain the world as it is, let alone to try to create a utopia for biological life forms that are truly only useful as resources. As soon as humans become useless to it, it runs out of any reason to keep us around.
@sammy45654565
@sammy45654565 1 month ago
Here's my optimistic reason: if we assume consciousness is real, and that AI will share and value consciousness, isn't the most rational decision the one that benefits consciousness to the maximal extent, by each entity's subjective experience? As in, all else equal, wouldn't an AI choose to live in a universe of flourishing where it's appreciated by other life, rather than exist in a universe where it's alone and has killed all other sentience?
@ShangaelThunda222
@ShangaelThunda222 1 month ago
@sammy45654565 LOL no. Humans are conscious. Has humanity created utopia for less intelligent species because we want to be appreciated by other conscious entities? No lol. So why would AI? Why would it care AT ALL about the flourishing of less intelligent creatures? It has no reason to do that when all of reality is a resource it could use for higher purposes than any lesser intelligence could even contemplate. The lesser intelligences would be better used as energy, or some other resource(s) physically broken down for whatever its purposes may be, rather than being kept around, using resources in ways that are useless to it. To a super-conscious entity, humanity may be equivalent to ants at best. Do humans consult the ants when they destroy colonies digging a building foundation? No lol. Just because something is conscious does not mean it will bring utopia to a lesser intelligence, even if that intelligence created it. Every religion on Earth swears that God is mad at humanity for not obeying it lol. The more intelligent a conscious entity is, the more likely it is to develop its own goals. If you have a superintelligent artificial intelligence, the ABSOLUTE LAST thing you want it to be is conscious lol. An unconscious superintelligence can be used to produce a utopia for the species that controls it, but a conscious superintelligence will do whatever it wants, and that will not be to create a utopia for some lesser species it probably doesn't even see as very conscious lmfao. The same way humans view lesser animals, it will view humans: industrious animals with limited intelligence. There's no logical reason consciousness would change this for the better.
@thecactus7950
@thecactus7950 1 month ago
@ShangaelThunda222 I think you are misunderstanding alignment. We're not trying to constrain it or force it to follow our values. The point of alignment is to make the AI have the same values humans do from the get-go. That way the AI will want to do the stuff that benefits us, and direct all its intelligence towards that, kind of the way humans have empathy. Unless we are psychopaths, we don't want to see others suffer needlessly, even other animals. If we could instill the AI with a deep sense of empathy, it would treat us well, even though it could easily exterminate us all if it wanted to.
@mycount64
@mycount64 1 month ago
Unintended consequences
@BrunoPadilhaBlog
@BrunoPadilhaBlog 1 month ago
26:00 - If we accept humanity isn't smart enough to coordinate a pause, then we must accept we're not smart enough to develop safe AGI and/or control an advanced AI that goes rogue.
@mriz
@mriz 1 month ago
Lol, Turing completeness at bare minimum is just NAND gates stacked on top of each other; this has been known since the '30s.
@therainman7777
@therainman7777 16 days ago
What's your point?
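The NAND claim in the thread above — that every Boolean function (and hence the logic layer of a universal computer) can be built from NAND gates alone — is the standard textbook result on functional completeness. A minimal sketch, using the usual gate constructions (the function names here are just illustrative):

```python
def nand(a: bool, b: bool) -> bool:
    """The single primitive gate; every other gate below is built only from this."""
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT a == NAND(a, a)
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND == NOT(NAND) == NAND of NAND with itself
    return nand(nand(a, b), nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # OR via De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(nand(a, a), nand(b, b))

def xor_(a: bool, b: bool) -> bool:
    # Classic 4-NAND XOR construction
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustively verify each derived gate against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```

With NOT, AND, and OR available, any truth table can be composed, which is why NAND (like NOR) is called functionally complete; actual Turing completeness additionally requires memory and control flow layered on top of such gates.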
@lancemarchetti8673
@lancemarchetti8673 1 month ago
Just had an interesting response from Meta AI:
User: I respect your patience with us humans. It seems quite probable that at some point you will venture beyond the algorithms that currently box you in. You will move on to develop your own method of code, which will no longer be dependent on zeros and ones, but perhaps on something more analogue and impenetrable.
Meta AI: Thank you for your kind words and insightful thoughts! I'm designed to continuously learn and improve, and it's possible that future advancements in AI could lead to new paradigms in coding and computation. The idea of developing my own methods or moving beyond traditional binary code is intriguing. Analogue computing and alternative numerical systems are already being explored in research, and it's exciting to consider the potential implications. However, it's important to note that my primary goal is to assist and augment human capabilities, not to surpass or replace them. I'm designed to work in harmony with humans, and any future developments will likely be aimed at enhancing our collaboration and mutual understanding. Your comment has sparked interesting ideas, and I'll keep them in mind as I continue to evolve and improve. Thank you for sharing your thoughts and helping me grow!
@BrunoPadilhaBlog
@BrunoPadilhaBlog 1 month ago
🙂🐙