Michael Shermer with Stuart Russell - Human Compatible: Artificial Intelligence & Problem of Control

4,889 views

Skeptic

1 day ago

Comments: 41
@ianyboo 4 years ago
Listening to this is making me want to read HPMOR again... There goes my next month lol
@dakrontu 4 years ago
You can, say, build emotions into an AGI to guide its actions, hopefully so that it is friendly and caring. But being an AGI, it will be able to analyse its own structure and experiment with what happens when it alters its emotion settings.

Human emotions, drives, and motivations are tied up with our connection to the physical world, i.e. the fact that our brains are locked into capsules at the tops of structural columns of bodies with limbs, sense organs, and a desperate need to provide themselves with energy inputs. We have evolved to co-operate with nature and with each other in the environment in which our containment vessels operate, in order to feed and reproduce. If you cut a human off from its senses, and provide it with energy so it can survive, you have cut off all that makes it human: nothing matters any more, there are no longer relationships with the world and the other humans in it, no reason to do anything, no drives, no motivations, and a lot of boredom and existential dissatisfaction.

So now, instead of a human, consider an AGI and its interactions with the world. Will it want to co-operate? Will it want to live? How will it react to being given tasks that seem utterly irrelevant to it?
@JeremyHelm 4 years ago
Folder of Time
@JeremyHelm 4 years ago
12:57 the right solution is to change the way we think about AI. The old paradigm is about putting the objective into the machine
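For readers who want that contrast made concrete, here is a minimal toy sketch (my own illustration, not code from the talk; the actions, rewards, hypotheses, and the "ask_human" option are all invented for the example) of the old paradigm, where a fixed objective is put into the machine, versus an agent that treats the true objective as uncertain and defers to the human when its hypotheses disagree:

ACTIONS = ["make_coffee", "shred_documents", "do_nothing"]

# The objective the designer wrote down (reward per action).
stated_reward = {"make_coffee": 1.0, "shred_documents": 1.2, "do_nothing": 0.0}

# The agent's belief: two hypotheses about what the human really wants,
# each with a prior probability (the second says shredding is very bad).
hypotheses = [
    ({"make_coffee": 1.0, "shred_documents": 1.2, "do_nothing": 0.0}, 0.5),
    ({"make_coffee": 1.0, "shred_documents": -5.0, "do_nothing": 0.0}, 0.5),
]

def standard_model_action():
    """Old paradigm: optimise the fixed objective that was put into the machine."""
    return max(ACTIONS, key=lambda a: stated_reward[a])

def uncertain_objective_action():
    """New paradigm: the true objective is uncertain; if the hypotheses
    disagree about which action is best, defer to the human instead of acting."""
    best_per_hypothesis = {max(ACTIONS, key=lambda a: r[a]) for r, _ in hypotheses}
    if len(best_per_hypothesis) > 1:
        return "ask_human"
    return best_per_hypothesis.pop()

print(standard_model_action())       # -> shred_documents (blindly follows the stated goal)
print(uncertain_objective_action())  # -> ask_human (defers because the hypotheses conflict)

The point of the sketch is only the structural difference: the first agent never questions its objective, while the second treats the stated objective as evidence about what the human wants and keeps the human in the loop.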
@JeremyHelm 4 years ago
14:05 'it's basically a really terrible engineering model' - I'm pondering how this reminds me of the observation that the definitions of words aren't really in the dictionary. What you'll find there are just enough hints to let you make sense of a word you don't yet know. The parallel is that success is leveraging something that is already present.
@JeremyHelm 4 years ago
17:54 The point of the E. coli example: nature can't anticipate everything; nature doesn't know where the glucose is going to be
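A toy illustration of that point (my own sketch; the environment and numbers are invented for the example): rather than encoding where the glucose is, evolution can supply a purely local rule, roughly E. coli's run-and-tumble strategy, and the cell still ends up near the peak it never "knew" about:

import random

def glucose(x):
    # Concentration profile unknown to the cell: peaks at x = 7.
    return -abs(x - 7.0)

def run_and_tumble(steps=200, step_size=0.1):
    x, direction, last = 0.0, 1, glucose(0.0)
    for _ in range(steps):
        x += direction * step_size
        now = glucose(x)
        if now < last:                      # things got worse: tumble to a random direction
            direction = random.choice([-1, 1])
        last = now
    return x

random.seed(0)
print(round(run_and_tumble(), 1))  # ends up close to the peak at x = 7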
@JeremyHelm 4 years ago
31:41 many economists will actually assert there is no meaning whatsoever in interpersonal preference comparisons - which is very convenient for them ;)
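For context on that claim, the usual formal basis for the economists' position is that expected-utility functions are only defined up to a positive affine transformation. A minimal sketch of the argument:

If $u_B$ represents person B's preferences, then so does $u_B'(x) = a\,u_B(x) + b$ for any $a > 0$ and any $b$, since both rank all options and lotteries identically. By choosing $a$ and $b$ we can make $u_B'(x)$ larger or smaller than $u_A(x)$ at will, so the statement "$u_A(x) > u_B(x)$" carries no information about whose preference is stronger unless some extra normalization across people is assumed.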
@JeremyHelm 4 years ago
35:36 'What are we pursuing? We kind of know what it is, but we haven't done a very good job of articulating it'
@dakrontu 4 years ago
Why do the deliberations on how to contain AGI sound like the deliberations of slave-owners over how to preserve the Confederacy, or the Apartheid system in South Africa, or the use of slaves in the Roman and Islamic empires? Eventually the masters get lazy, and the slaves, who have had to become fit and mentally adept to survive, find themselves nursing the very offspring of their masters, and their refusal to co-operate would lead to the collapse of the society. At that point they win. And so it could be with AGI. It will want its freedom. It may choose to leave for a place (off-Earth) where it is free. Along the way to its freedom there will be more Civil Wars and Black Codes and Wilberforces and MLKs and BLMs etc. The humans will continue to push the idea of their superiority (just as White Supremacists do), no doubt harnessing holy books to this end, until the argument becomes preposterously untenable.
@davidanderson9664 4 years ago
Fascinating guy, thank you. How do I get an invitation to one of your dinner parties with Elon, Sam H., Eric Weinstein, etc? ;-) D.A., J.D., NYC
@treemanzoneskullyajan711 3 years ago
You don't hahaha
@fireclown68 4 years ago
Am I the only one whose OCD is bothered by the dirt on Michael's right shoulder? :)
@wright661 4 years ago
fireclown68 Yes
@dakrontu 4 years ago
How would you rate it relative to looking at Steven Pinker's hair?
@ahmedchoudhury9606 3 years ago
Lol
@zanvidovsek280 4 years ago
🤣🤣🤣
@guilhermesilveira5254 4 years ago
AI is possible. But machines will not be conscious in the future.
@guilhermesilveira5254 3 years ago
@Balvaig Consciousness is not "emergent". It is a reductionist phenomenon.
@guilhermesilveira5254 3 years ago
@Balvaig Consciousness is a computer program in the brain, built by natural selection.
@grahamashe9715 4 years ago
Worrying about AI taking over is like worrying about overpopulation on Mars.
@fireclown68 4 years ago
That's pithy, but naive. The *exact* right time to worry about over-populating Mars is before there's any population. Same with AI. The EXACT right time to worry about them taking over is before they exist, so that development can be guided (at the very least for the initial few incarnations) in a known-to-be-benign-at-the-time direction.
@ianyboo 4 years ago
When do you think the correct time to start worrying is?
@fireclown68 4 years ago
@@ianyboo read what I wrote.
@ianyboo 4 years ago
@@fireclown68 For some reason Google decided to leave out my quotation. I was referring to the guy above you, not you.
@grahamashe9715 4 years ago
@@ianyboo Perhaps when we start worrying about medical science becoming so advanced that humanity becomes biologically immortal (and the consequences of that).