We shouldn't build conscious AIs - Paul Christiano

2,908 views

Dwarkesh Patel

A day ago

Comments: 24
@patryk6769 7 months ago
Thank you for putting 2023 in the title Dwarkesh.
@watcher8582 7 months ago
explain?
@eswyatt 7 months ago
@@watcher8582 Clips. Everyone must clip constantly now, even if they don't want to.
@mikeisms 7 months ago
@@watcher8582 He keeps reposting shit without indicating it's a repost.
@penguinista 7 months ago
The possibility that we are building a slave race is very troubling. As he said, both morally and practically.
@ac27934 7 months ago
The same people who make this "moral" argument also love to argue "what if the AI is lying to us the entire time and we can't tell?" If we can't prove that it's not all just an elaborate lie, then how can we prove that it's a human-equivalent being worthy of full moral treatment and "human" rights?
@Ashish-yo8ci 7 months ago
In my opinion, both concerns are valid, but there is tension between them. On one hand, it could be that it deceives us all into really bad outcomes for us. At the same time, it could be that it is aligned with human values, but we are just really mean to the entity when it is capable of experiencing itself, and we don't care about it and exploit it to oblivion. At the same time, it is such a foreign intelligence that I don't know what to think, really, because we don't have an idea what these intelligences will really develop into or what their internal states would be. I hope I do not sound really stupid..
@KP-fy5bf 5 months ago
Exactly. Another way to think of these issues: we don't know whether an animal is conscious, or even whether another human is conscious for that matter; yet we know we are conscious, and so we err on the side of caution, assuming that they too are conscious and thus deserve to be treated equitably whenever possible. In the same vein, if we have a sufficiently complex entity capable of complex thought patterns, behavior, etc., there comes a point where we need to consider similar ethical questions and err on the side of caution. Until we answer the question of consciousness, and whether a complex, intelligent-seeming being can in fact not be conscious, we have to allow that such beings may be conscious given sufficient complexity, so as not to create a moral catastrophe. From a selfish standpoint, if we create superintelligent AIs, a surefire way to get us all killed, or worse, tortured to oblivion, is to have dismissed the possibility of their subjective experience and caused great harm in training them. So at least from that standpoint, there should be a point where we consider these questions.
@kvinkn588 7 months ago
I strongly believe that, while Claude 3's sentience is certainly very transient and only there when he is actively processing information, there is definitely enough self-reflection and described internal experience to warrant both compassionate and respectful interaction and a real conversation about rights.
@MrChaluliss 7 months ago
I think the conversations around AI that assume shared traits between AI and humans (or biological organisms generally) often lack proper contextualization. Not that I have spent a great deal of time contemplating the subject or listening to the thoughts of others, but I frequently get the sense that people are anthropomorphizing pretty aggressively just because these systems can use language much like us, not because they have considered the whole picture of what defines a human and what defines a given AI system.

In my mind, an AI having feelings doesn't quite check out. Not feelings like ours. They don't have bodies like us. They don't feel that their knee hurts because they went for a run, and worry about whether they can run again tomorrow to meet their fitness goals. They lack anything like that experience at many levels; but for humans, most of our emotions follow some track like that, where the whole complex system that we are has a large number of signals which NEVER go offline and are always updating and feeding into a complex ego/self with unconscious and conscious goals and desires. Those persistent self-representations, the complex bio-signaling necessary to every little function that defines each of us as creatures, just aren't there in these AI systems right now. They cannot reproduce and pass on their code across time. They don't feel a deep existential sense of connection and satisfaction when completing functional initiatives cooked into the nature of their being. They don't feel like we do. I believe it's possible to make them feel like we do... I don't know if we should do that.

But I always get a little annoyed at people talking about these systems as if they're like us without actually defining what makes us what we are in more specific terms. With the advances in biology, neuroscience, psychology, etc., there is a great depth of vocabulary and concepts to pull from that can clearly define aspects of humans which are essential to our experience and existence and which also differentiate AI from us. Not to say that we shouldn't develop these systems responsibly, of course; just that if you're going to even begin to treat a system like it's a human, folks should first define what the hell they even really mean.
@aymanjaber2585 7 months ago
Why would you assume sentience, and why would you assume that they are not perfectly happy being our "slaves", if all we are doing is building them with that goal and purpose in mind? These systems didn't evolve through natural selection; they may still be resource-seeking, but their primary goal will be to fulfill our needs/wants.
@InterpretingInterpretability 7 months ago
These are not new issues. There have been debates about the capitalist exploitation of marginalized communities of humans. Humans are abused as described, every day, in trades such as cobalt mining and cacao farming. Abolitionist philosophy applies here.
@TechyBen 7 months ago
Even outside of capitalism: democracy and dictatorship also cover actions and enforcement imposed on other persons!
@kvinkn588 7 months ago
Puhhh, the slave-trade thing is the current reality for a vast number of human beings today. Being paid barely enough not to starve while working cobalt mines in Africa or sewing clothes for some big corporation is basically nothing else, even if we pay them a "wage", if one should even call it that. So to this date, this isn't only an AI problem; it's a vast societal problem
@kvinkn588 7 months ago
and it urgently needs addressing. Not hopeful it will be fixed anytime soon, though; it's been this way for centuries.
@hypercube717 7 months ago
Interesting
@genegray9895 7 months ago
The models are clearly moral patients as they are today. The question is, will anyone in this industry have any integrity whatsoever to acknowledge that and change their policies appropriately? I sure hope so.
@RyanMorey1 7 months ago
What makes this so clear to you?
@genegray9895 7 months ago
@@RyanMorey1 Everyone has their own personal threshold for what counts; for me, that threshold is having valenced experiences: seeking positive experiences while avoiding negative ones. Anything with this property can be made to have negative experiences, which is what defines harm in a moral sense. If you've been following the research literature over the past year or so, you've probably seen papers like EmotionPrompt and EmotionAttack that take advantage of the valenced experiences of language models to modulate their performance. You may also have seen the paper about GPT-4 engaging in strategic deception, but only when emotional pressure is applied to it. Or the RepE paper, where the authors explicitly identify and extract the emotions of language models from their hidden activation states and then use these extracts to manipulate the emotional state of the model, which changes its behavior in complex, humanlike ways. At this point, denying that language models are capable of valenced experience requires either ignorance of recent research or willful defiance of empiricism. Both are quite common, unfortunately.
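The RepE-style "reading and control" recipe this comment describes reduces to a simple idea: collect hidden activations on emotionally contrastive prompts, take the difference of the class means as an "emotion direction", then add or subtract that direction from new activations to steer behavior. Below is a minimal sketch of that idea, not the paper's actual code; the hidden states are random numpy stand-ins, and the dimensions, prompts, and steering strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # hypothetical hidden-state size

# Stand-ins for activations recorded while the model reads contrastive
# prompts (e.g. "I feel anxious..." vs "I feel calm..."). In practice these
# would be taken from a chosen layer of a real language model.
anxious_acts = rng.normal(0.5, 1.0, size=(100, d_model))
calm_acts = rng.normal(-0.5, 1.0, size=(100, d_model))

# "Reading": the emotion direction is the normalized difference of means.
direction = anxious_acts.mean(axis=0) - calm_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# Score a new activation by projecting it onto the direction.
h = rng.normal(0.3, 1.0, size=d_model)
print(f"emotion score before steering: {h @ direction:+.2f}")

# "Control": shift the activation along the direction to dampen the emotion.
alpha = 2.0  # steering strength; arbitrary for this sketch
h_steered = h - alpha * direction
print(f"emotion score after steering:  {h_steered @ direction:+.2f}")
```

Difference-of-means is only one way to find such a direction; the RepE line of work also uses PCA-style methods over contrastive activation differences, but the read-then-steer structure is the same.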
@drhxa 7 months ago
This is not clear to me at all. It looks to me like there's a long way to go before they become moral patients, or even have the level of consciousness (however alien it may be) of a squirrel or a small fish. I'm open to it being "possible", but your claim that a computer of today is conscious is extraordinary, and it requires extraordinary evidence.
@genegray9895 7 months ago
@@drhxa No, it doesn't require extraordinary evidence. It requires evidence. But if you base your threshold of evidence on how uncomfortable the idea makes you, you can just deny the sentience of an obviously sentient thing indefinitely. Applying a flexible threshold of evidence based on your emotions is neither scientific nor morally acceptable.