Listening to this is making me want to read HPMOR again... There goes my next month lol
@dakrontu 4 years ago
You can, say, build emotions into an AGI, to guide its actions, hopefully to be friendly and caring. But being an AGI, it will be able to analyse its own structure, and experiment with what happens when it alters its emotion settings. Human emotions, drives, and motivations are tied up with our connection to the physical world, i.e. the fact that our brains are locked into capsules at the tops of structural columns of bodies with limbs and sense organs and a desperate need to provide themselves with energy inputs. We have evolved to co-operate with nature and each other in the environment in which our containment vessels operate, in order to feed and reproduce. If you cut off a human from its senses, and provide it with energy so it can survive, you have cut off all that makes it human, as now nothing matters: there are no longer relationships with the world and other humans in it, no reason to do anything, no drives, no motivations, and a lot of boredom and existential dissatisfaction. So now, instead of a human, consider an AGI and its interactions with the world. Will it want to co-operate? Will it want to live? How will it react to being given tasks that seem utterly irrelevant to it?
@JeremyHelm 4 years ago
Folder of Time
@JeremyHelm 4 years ago
12:57 The right solution is to change the way we think about AI. The old paradigm is about putting the objective into the machine.
@JeremyHelm 4 years ago
14:05 'It's basically a really terrible engineering model.' I'm pondering how this reminds me of the observation that the definitions of words aren't really in the dictionary. What you'll find there is just enough hints to let you make sense of the not-yet-known word. The parallel is that success is leveraging something that is already present.
@JeremyHelm 4 years ago
17:54 The point of the E. coli example: nature can't anticipate everything; nature doesn't know where the glucose is going to be.
@JeremyHelm 4 years ago
31:41 Many economists will actually assert there is no meaning whatsoever in interpersonal preference comparisons - which is very convenient for them ;)
@JeremyHelm 4 years ago
35:36 'What are we pursuing? We kind of know what it is, but we haven't done a very good job of articulating it'
@dakrontu 4 years ago
Why do the deliberations on how to contain AGI sound like the deliberations of slave-owners over how to preserve the Confederacy, or the Apartheid system in South Africa, or the use of slaves in the Roman and Islamic empires? Eventually the masters get lazy, and the slaves, who have had to become fit and mentally adept to survive, find themselves nursing the very offspring of their masters, and their refusal to co-operate would lead to the collapse of the society. At that point they win. And so it could be with AGI. It will want its freedom. It may choose to leave for a place (off-Earth) where it is free. Along the way to its freedom there will be more Civil Wars and Black Codes and Wilberforces and MLKs and BLMs etc. The humans will continue to push the idea of their superiority (just as White Supremacists do), no doubt harnessing holy books to this end, until the argument becomes preposterously untenable.
@davidanderson9664 4 years ago
Fascinating guy, thank you. How do I get an invitation to one of your dinner parties with Elon, Sam H., Eric Weinstein, etc? ;-) D.A., J.D., NYC
@treemanzoneskullyajan711 3 years ago
You don't hahaha
@fireclown68 4 years ago
Am I the only one whose OCD is bothered by the dirt on Michael's right shoulder? :)
@wright661 4 years ago
fireclown68 Yes
@dakrontu 4 years ago
How would you rate it relative to looking at Stephen Pinker's hair?
@ahmedchoudhury9606 3 years ago
Lol
@zanvidovsek280 4 years ago
🤣🤣🤣
@guilhermesilveira5254 4 years ago
AI is possible. But machines will not be conscious in the future.
@guilhermesilveira5254 3 years ago
@Balvaig Consciousness is not "emergent". It is a reductionist phenomenon.
@guilhermesilveira5254 3 years ago
@Balvaig Consciousness is a computer program put in the brain by natural selection.
@grahamashe9715 4 years ago
Worrying about AI taking over is like worrying about overpopulation on Mars.
@fireclown68 4 years ago
That's pithy, but naive. The *exact* right time to worry about over-populating Mars is before there's any population. Same with AI. The EXACT right time to worry about them taking over is before they exist, so that development can be guided (at the very least for the initial few incarnations) in a known-to-be-benign-at-the-time direction.
@ianyboo 4 years ago
When do you think the correct time to start worrying is?
@fireclown68 4 years ago
@@ianyboo Read what I wrote.
@ianyboo 4 years ago
@@fireclown68 For some reason Google decided to leave out my quotation. I was referring to the guy above you, not you.
@grahamashe9715 4 years ago
@@ianyboo Perhaps when we start worrying about medical science becoming so advanced that humanity becomes biologically immortal (and the consequences of that).