Andrea Miotti on a Narrow Path to Safe, Transformative AI

1,299 views

Future of Life Institute

1 day ago

Comments: 4
@davek93 3 months ago
Andrea is a rockstar!
@OriNagel 3 months ago
Thank you, a plan like this is sorely needed!! I'm especially curious how the definition of superintelligence gets scoped out and turned into a red line.
@plm203 4 days ago
Thank you. Regarding intelligence, I would say the definition must be variable. Already in humans, psychometrics considers various dimensions of intelligence, even without counting "emotional intelligence". Which forms of intelligence are most relevant depends on the context. The most general meaningful definition of intelligence is power density, or energy rate density, since the definition of work is usually too restricted in physics. Chaisson has shown how this "intelligence" increases with the age of the universe and with the level of organization, with societies being more intelligent than humans, who are themselves more intelligent than a star, for instance. Now if we demand animal-like intelligence, one must add a notion of reflexivity, which is provided by the recurrent structure of natural neural networks. My conclusion is that coming up with notions of intelligence is easy; it is only because researchers lack breadth and work in very specialized areas that they cannot see there is no mystical choice of definition. I also think it has to do with the view of humans as somehow supernatural: our brains don't do anything supernatural, they are just parallel and reflexive (and big enough), and we are coming to see that we can reproduce their performance on any sufficiently powerful discrete computer.

PS: Regarding safety, the discussion here is not very realistic. Basically all nations, except for suicidal ones like the European ones, will never abide by rules that put them at a disadvantage. Like with nuclear weapons: you can claim everybody loses from owning nuclear weapons, but that is not true, especially as long as you have nations like the US or the EU that pretend to govern others. The single most urgent thing the EU could do to promote AI safety is to stop intervening in other nations' politics, so that those nations need not defend themselves. But superhuman AI will be easy to develop, much easier than nukes, which is why people are so worried. So the only reasonable way for the EU is to be ahead of the competition. We will always have too strong a regulatory pressure. Most people in the AI safety milieu totally fool themselves: they imagine living in a perfect world, they have one sledgehammer, lobbying Brussels and Washington, and they imagine that all troubles with AI safety are EU and Washington nails. It is actually quite caricatural that Andrea Miotti says that if Russia has superintelligence we will die: let me remind him that the only nation that ever used nukes on civilians is the US. The reality is that, beyond that cruel mass killing of civilians, the US has been the most criminal and dangerous nation of at least the past 50 years. So I'm back to what I was saying: Western goodthinkers like Andrea Miotti are the main danger contributing to AI unsafety, not as thinkers, but as globalists demanding power over all nations, bypassing their sovereignty, and treating all those who are different as evils that must be eliminated.

PPS: Quite dismaying to see a young, intelligent person focused only on hindering his own nation, because he sure will never hinder the US, China, India, ... or the Russians he seems to consider subhuman.
@keizbot 3 months ago
We need to stop letting companies trade our safety for their potential profit