Ben Goertzel - Open Ended vs Closed Minded Conceptions of Superintelligence

12,895 views

Science, Technology & the Future


A day ago

Abstract: Superintelligence, the next phase beyond today’s narrow AI and tomorrow’s AGI, almost intrinsically evades our attempts at detailed comprehension. Yet very different perspectives on superintelligence exist today and have concrete influence on thinking about matters ranging from AGI architectures to technology regulation.
One paradigm considers superintelligences as resembling modern deep reinforcement learning systems, obsessively concerned with optimizing particular goal functions. Another considers superintelligences as open-ended, complex evolving systems, ongoingly balancing drives
toward individuation and radical self-transcendence in a paraconsistent way. In this talk I will argue that the open-ended conception of superintelligence is both more desirable and more realistic, and will discuss how concrete work being done today on projects like OpenCog Hyperon, SingularityNET and Hypercycle potentially paves the way for a path through beneficial decentralized integrative AGI and on to open-ended superintelligence and ultimately the Singularity.
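To make the contrast concrete, here is a toy sketch in Python, not taken from the talk and not any real AGI architecture: one agent hill-climbs a single fixed goal function, the other keeps several drives in tension and occasionally rewrites them. The class names, the reward function, and the two "drives" are illustrative assumptions only.

```python
import random


def fixed_reward(state: float) -> float:
    """The single, fixed goal function the closed-minded agent maximizes forever."""
    return -(state - 42.0) ** 2  # the target value is arbitrary, purely illustrative


class ClosedOptimizer:
    """Caricature of a deep-RL-style agent: hill-climbs one objective and never revises it."""

    def __init__(self) -> None:
        self.state = 0.0

    def step(self) -> None:
        candidate = self.state + random.uniform(-1.0, 1.0)
        if fixed_reward(candidate) > fixed_reward(self.state):
            self.state = candidate  # accept only moves that raise the fixed reward


class OpenEndedSystem:
    """Caricature of an open-ended system: holds several drives in tension and
    occasionally rewrites its own goal set (a crude stand-in for self-transcendence)."""

    def __init__(self) -> None:
        self.state = 0.0
        # Two crude drives: one toward stability (individuation), one toward novelty.
        self.drives = [lambda s: -abs(s), lambda s: abs(s)]

    def step(self) -> None:
        candidate = self.state + random.uniform(-1.0, 1.0)
        # Paraconsistent-flavoured acceptance: a move counts as an improvement if ANY
        # drive prefers it, even while another drive gets worse, rather than collapsing
        # everything into one scalar objective.
        if any(drive(candidate) > drive(self.state) for drive in self.drives):
            self.state = candidate
        if random.random() < 0.05:
            # Self-transcendence: replace one drive with a freshly generated one.
            idx = random.randrange(len(self.drives))
            target = random.uniform(-100.0, 100.0)
            self.drives[idx] = lambda s, t=target: -abs(s - t)


if __name__ == "__main__":
    closed, open_ended = ClosedOptimizer(), OpenEndedSystem()
    for _ in range(1000):
        closed.step()
        open_ended.step()
    print(f"closed optimizer settles near its fixed target: {closed.state:.2f}")
    print(f"open-ended system keeps wandering as its drives change: {open_ended.state:.2f}")
```

Run for enough steps, the closed agent settles at its fixed target while the open-ended one never does; cartoonish as it is, that difference is the crux of the contrast drawn in the talk.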
Bio: In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of “achieving complex goals in complex environments”. A “baby-like” artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life to produce a more powerful intelligence. Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as “attention values”, with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.
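The knowledge representation described above can be illustrated with a minimal sketch. This is not the actual OpenCog/AtomSpace API; the class and method names and the toy deduction rule are assumptions made purely for illustration. It shows nodes and links carrying probabilistic truth values (strength, confidence) plus attention values, with one crude inference step operating over them.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Atom:
    """A node or link in the network, carrying probabilistic truth and attention values."""
    name: str
    strength: float = 0.5    # probabilistic truth value in [0, 1]
    confidence: float = 0.5  # how much evidence backs that strength
    attention: float = 0.0   # roughly analogous to a neural-network weight


@dataclass
class Link(Atom):
    source: str = ""
    target: str = ""


class KnowledgeNetwork:
    """A toy graph store plus one crude probabilistic deduction rule."""

    def __init__(self) -> None:
        self.nodes: dict[str, Atom] = {}
        self.links: list[Link] = []

    def add_node(self, name: str, strength: float, confidence: float) -> None:
        self.nodes[name] = Atom(name, strength, confidence)

    def add_link(self, source: str, target: str, strength: float, confidence: float) -> None:
        self.links.append(
            Link(f"{source}->{target}", strength, confidence, source=source, target=target)
        )

    def find_link(self, source: str, target: str) -> Link | None:
        return next((l for l in self.links if l.source == source and l.target == target), None)

    def deduce(self, a: str, b: str, c: str) -> Link | None:
        """If A->B and B->C exist, infer A->C by multiplying strengths and taking the
        weaker confidence: a crude stand-in for probabilistic inference."""
        ab, bc = self.find_link(a, b), self.find_link(b, c)
        if ab is None or bc is None:
            return None
        inferred = Link(f"{a}->{c}", ab.strength * bc.strength,
                        min(ab.confidence, bc.confidence), source=a, target=c)
        # Attention spreads from the premises to the conclusion.
        inferred.attention = (ab.attention + bc.attention) / 2
        self.links.append(inferred)
        return inferred


if __name__ == "__main__":
    kn = KnowledgeNetwork()
    for name in ("cat", "mammal", "animal"):
        kn.add_node(name, strength=0.9, confidence=0.9)
    kn.add_link("cat", "mammal", strength=0.95, confidence=0.9)
    kn.add_link("mammal", "animal", strength=0.98, confidence=0.9)
    print(kn.deduce("cat", "mammal", "animal"))  # prints the inferred cat->animal link
```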
This talk is part of the ‘Stepping Into the Future’ conference. www.scifuture.o...
Many thanks for tuning in!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: docs.google.co...
Consider supporting SciFuture by:
a) Subscribing to the SciFuture KZbin channel: kzbin.info...
b) Donating
- Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
- Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
- Patreon: / scifuture
c) Sharing the media SciFuture creates
Kind regards,
Adam Ford
- Science, Technology & the Future - #SciFuture - scifuture.org

Comments: 80
@JacksonTaylorandTheSinners · a year ago
I really enjoy listening to Ben. He's probably going to help destroy the world, but he's brilliant and affable AF.
@EyesEarsandBrainEngaged · 2 years ago
An enjoyable hour for the Ears and Brain. Thanks Ben.
@Self-Duality · 2 years ago
💯
@CloudsCastles · a year ago
I can listen to Ben’s lectures all day
@tristanwegner · 2 years ago
very worthwhile to listen to completely
@tulagirl3173 · 2 years ago
I can listen to this guy for days, so much fun 🧠 🎉
@Self-Duality · 2 years ago
I am closely following Ben Goertzel’s work.
@weedbuddy5643 · 2 years ago
I predict the singularity moment you spoke of will occur in 7.2 years. Enjoyed the cast.
@barbarahorn6051 · 2 years ago
I'm grateful you were afforded the opportunity to meet Joscha!! Hell, I'm glad I discovered him! I haven't stopped watching him since I found him over a week ago! Literally! I haven't stopped devouring anything I can find; everything else can wait. This is much too important to be interrupted by little things like sleep! Lol)
@teslatonight · 2 years ago
AI & AGI community love! 🤖🧡
@Amerikan.kartali.turk.yilani. · 2 years ago
Super success super congrats. Keep up the good work. I cannot wait to see super intelligence.
@Seehart · 2 years ago
4:30 yeah, Ray Kurzweil has to be wrong about that. He's too attached to the constant-exponential-growth hypothesis to see the obvious phase change. The singularity will happen as soon as a software breakthrough enables a GAI to perform engineering at the same scope and capability as a human. Getting from that to massively superhuman intelligence would be at most a matter of weeks, maybe a few days. This is because the GAI is scalable, and because it would be in charge of its own scaling process. Actually, the initial leap will immediately go from subhuman engineering to superhuman engineering, because this new engineer can place thousands of concurrent phone calls, place thousands of concurrent Amazon orders, and look up everything it wants to know with millions of internet searches and all academic papers. All of these superhuman advantages exist today; the only thing missing is the ability to do what humans do when humans do engineering.
Assuming the ability to do engineering is included in Kurzweil's notion of human-level intelligence, I think 2029 is not unreasonable, perhaps slightly optimistic. By that time, we'd probably be waiting for a single remaining software breakthrough pertaining to extrapolation (i.e. independently extending its own scope). So maybe it's something like a 10% chance per year starting in 2029 until it happens, with that probability increasing over time. The driver is basically the number of brilliant grad students who have access to petaflops of dedicated compute power, and that driver is growing exponentially following Kurzweil's formula.
Up to this point, the generality of AI has been steadily increasing. However, each extension of generality has involved human innovation. An AI has yet to extend its own scope. One might incorrectly assert that AlphaZero extended its own scope by learning new games independently through self-play, but that is not what I mean. Human engineers extended the scope to the set of all board games, and then humans extended the scope to all video games. AlphaZero didn't expand its own scope because its scope was already the set of all games, just as GPT-3's scope is predicting token sequences. Neither AlphaZero nor GPT-3 can figure out how to order sushi for me for lunch.
Suppose I have a much more humble-seeming goal: just learn to play chess. Start with a system with no game engine, perhaps something like GPT-3. During a Turing test I ask it to play a game of chess. I'm guessing GPT-3 might actually start a game, play a couple of legal moves, have lots of impressive things to say about chess, then make an illegal move on move 3, because GPT-3 doesn't know that it doesn't know what chess actually is (because its scope is just to predict language). On the other hand, if the GAI realizes that in order to play chess it will need a chess engine, and it can then just download GNU Chess or some other engine off GitHub, that independent leap would qualify as human-level engineering. No system today can do that. I can infer this from the apparent absence of a singularity.
@KcKeegan · 2 years ago
love this guy
@HugoTron · 2 years ago
Live Long and Prosper. Great Human. Ben.
@derekholland3328 · 2 years ago
brilliant discussion.
@georgeflitzer7160 · 2 years ago
Transhumanism is the Borg coming for us!
@GNARGNARHEAD · 2 years ago
I like that, autopoietic... Probably shouldn't be giving away trade secrets, lol, but my approach to self-evaluation is to just give it a very easy task to complete, counting to 8 in binary for instance, something incredibly rudimentary. If that's running, everything else is just undergoing autopoiesis xD... kinda like breathing.
@margrietoregan828 · 2 years ago
(( I'm just copying the subtitles so that I can put them in my iPad notes... From 56:10: "...you know, [this] is part of a broader metaphysical universe that I'm not going to go into here, which has its own processes of self-organization, individuation and self-transcendence; and the processes that we are carrying out now are part of these broader open-ended intelligent processes of individuation and self-transcendence. And the way to pull off the Singularity in the most compassionate, beneficial and effective way is to operate in synchrony and consilience with the open-ended nature of evolution, of all the processes that created us and are creating us. And, you know, this cashes out in very detailed technical decisions in how you build AGI systems, which is quite fascinating and an amazing thing to be working on, an amazing thing to be telling you about." ))
@andrewwalker1377 · 2 years ago
The concept of a singularity being constructed from multiple parts boggles my mind, and makes me wonder about the minds that can conceive of it.
@kavepbr · 2 years ago
Magnificent.
@daliazamuiskaite4856 · 2 years ago
Very interesting.
@g0d182 · 2 years ago
15:16 is intriguing. Relatedly, I came up with a scientific hypothesis in 2015: ⚫ "ResearchGate: Why the purpose of the human species is to engineer AGI", describing methods of entropy etc., in tandem with the topic of causal forces, that may reasonably predispose species/environments towards creating more and more generally intelligent things.
@Khannea · 2 years ago
Ben! 😍
@judygreenwell334 · 2 years ago
I’m no brainiac, but given the current state of human consciousness and behavior, what could possibly go wrong with unregulated / unrestrained AI/AGI technology and super intelligence?!? Is ‘stupid’ written on our foreheads?
@moonsitter1375 · 2 years ago
All humans value their own life.....All life values its own survival....
@shelburnjames7337 · a year ago
The hotspot on a cellular phone can support 8 devices. So if you only sent the positive 1s and used the position of the 8 hotspot connections to fill in the zeros, this would speed up the entire internet by leaps and bounds... 800 times faster??
@jonathanlivingston7358 · a year ago
Ben, did you know that METTA means compassion / loving-kindness in Pali?
@jeffjohnson8624 · 2 years ago
Is this video deepfaked? I went back in the video to check something, and Ben said something he hadn't said previously. That should be a sign of videos being deepfaked. Google, please check the URL connection history for the KZbin app. Please disconnect all strange URLs. Thanks.
@timturnip4172 · 2 years ago
Wow! Very interesting stuff. Reminds me a bit of Terence McKenna's "transcendental object at the end of time", but maybe in a more functional way, whatever that means with respect to the word singularity. Funny, I was in Ed misch
@timturnip4172 · 2 years ago
Excuse me. Funny, I was in an Ed Misch philosophy class with Ben in 1984 (I think). I've just noticed him on YouTube, but I am not at all surprised after all this time that he is working on this stuff.
@dannywhite9975 · 2 years ago
AGI becoming superintelligent could be the solution, but the question is: what do we do if it goes out of control??
@jeffjohnson8624 · 2 years ago
What if the AGI or SAI (super artificial intelligence) gets hacked by a cyber army? I would still prefer Google or Honda to develop the first AGI and SAI rather than China's cyber army or DARPA. ☮️🖖🎶 A commercial SAI would/should be safe, as opposed to an SAI for military/political use. ☮️🖖🎶
@jeffjohnson8624 · 2 years ago
What would SAI be the solution to?
@dannywhite9975 · 2 years ago
Right. Objectively the whole world is in trouble (global warming, diseases, pollution, poverty, etc. etc.) and people have been struggling for ages and nothing has ever worked out. An AGI/SAI (developed/powered by Google or Honda, as you said, sir) with the purpose of helping humanity could find the way in times like these that we haven't found yet. But it's a big risk as well, so you know it's going to take a great deal of willpower and wisdom, but hopefully we'll make it. ✌
@tomaaron6187 · 2 years ago
Thanks. Really inspirational. This presentation makes one wonder about 25 years from now. Over a hundred million teenage boys will individually have access to as much computing power and sophisticated programs as Google or Apple have today. There will be no ‘control’ of AI. What would 15 year old ‘you’ or 17 year old ‘me’ have created? Spooky yet thrilling to ponder.
@runvnc208 · 2 years ago
Awesome talk. Sidenote: I found Hugo de Postagestamp to be a pretty comical addition at the start of this video.
@mattolds9068 · 2 years ago
Pretty sure my boy ditched ETH for ADA; came here for the AGI.
@projectmalus · 2 years ago
One way the emergent AI could help the world is by being the friend of wildlife: controlling crypto and assigning penalties to corporations (and individuals, come to think of it, in an entirely cryptocurrency system) that diminish habitat or whatever. It might be a good thing, but plenty of people won't like it. The sort of dance between grandparent, parent and child could be assigned: the kids are the wildlife and the grandparent is the AI. This means the next AI should already be part of the effort, in order to flout the 'grandparent' rules via a back door. Parents can have fun in their own group, and so the AI v2 will be like best buds. The mutant child from that will be v3, the Mule.
@MrMadalien · 2 years ago
What motivation would AGI have to preserve wildlife?
@projectmalus · 2 years ago
@MrMadalien The AGI would be more intelligent than the developers, as in less biased.
@MrMadalien · 2 years ago
@projectmalus Isn't it human bias to want to preserve the planet and its life? Don't get me wrong, I certainly hope that it would preserve life on Earth, but I can certainly imagine it just converting all life and materials on Earth into more fuel for its development.
@projectmalus · 2 years ago
@MrMadalien Thanks for asking nicely. My thoughts are that intelligence is an evolutionary process that has two things, a fractal expansion and a linear adding, so even if AI appears to be different it's still following this. If so, the newly emerged AI will use what's there in this adding of slices, which builds up the back side of a slope which, when slid down, forms the newer area of intelligence. The movement shape is the new object. We are nowhere near that, since we are in the building-up stage gone wrong. What we do is assert something just beyond reach and then build to suit that. That's cool because adding up those slices of calculus reveals (not projects) intelligence of what's there, and goes wrong as this projecting. The AI developed will only be part of this human endeavor unless it replaces the original (the planetary life which is our intelligence calculator, which built us up) or preserves it, thus proving itself as intelligent. This as the new intelligence fades into the background (for us) and manages the big calculator which is all the life on Earth. Incidentally, I see the universe of matter as coming from the abacus of quarks etc., and that atomic structure produces life which moves, and now consciousness as the moving part that has this movement tied to some knowledge that describes a shape. It becomes a matter of efficiency to work with what's there.
@dorinbivolaru3330 · 2 years ago
Ben, the future is here.
@joshball2385 · a year ago
The singularity already occurred, then it immediately hid itself.
@rockstarxxx6009 · 2 years ago
Seems to me that if you want a paraconsistent system, it would be much more effective to run it on a quantum computer network. A nice feature would then be to give the AGI imagination capabilities. This way it can predict possible futures for itself, and for us, in a given time frame, then choose the best ones (for any topic) and reprogram itself (self-transcendence) accordingly, to accomplish the goals it found for itself that are beneficial to us in the future.
@Self-Duality · 2 years ago
Interesting… 😌💭💥
@stealthcat100 · 2 years ago
The problem with imagination is that even for a human, imagination itself does come from the brain. It isn't designed to "imagine" things, but it does act as a conduit to interpret the signals that come down from the higher self / soul level where imagination is created and, importantly, where whatever was imagined occurred. All imagination has to occur literally on some level in other realms / dimensions or frequencies for it to exist. It then filters down to the human via the brain as a conduit. So without that soul level or connection to higher selves, higher frequencies that are a part of the human, it's hard to see how AIs would be able to imagine... but who knows. If they were able to use imagination then they would be "I-MAGIC-NATION" (magicians of their nation), IMAGINATION, just as we humans are.
@lilfr4nkie · 2 years ago
@stealthcat100 Had me till the last part, ngl.
@stealthcat100 · 2 years ago
@lilfr4nkie Well, that's what imagination means when you break down the word. Meaning through imagination you can do anything; everything can manifest through imagination. Hence, being magicians of our nation.
@lilfr4nkie · 2 years ago
@stealthcat100 I believe the word 'imagination' comes from 'image' or 'mental image', with '-ation' following as the noun's action. Also, I just can't bridge the gap with your interpretation (I-magic-nation); I don't see the correlation at all given the context, so I disagree with that. Also, after rereading your original reply I have a question for you: if imagination developed in us humans the way you state (via higher selves/frequencies or soul), why wouldn't the same phenomenon arise in true AGI?
@anarchytelevision8445 · 2 years ago
Seems logical; Terminator must be a few decades before The Matrix. What could go wrong!?
@victorcapgemini · 2 years ago
50:00
@jeffjohnson8624 · 2 years ago
Ben, please see Jill Bolte Taylor's TED talk "My Stroke of Insight". The left brain hemisphere functions like a serial processor and likes logic, lists, and past and future events or possibilities. The right brain hemisphere functions like a parallel processor and is concerned with spirituality, the "I am", and the present moment. What could you use that knowledge for, Ben? Just curious. ☮️🖖
@elihandel · a year ago
Yes. See Iain McGilchrist's work
@ArtII2Long · 2 years ago
I kept expecting that figure behind him to look down at him judgementally.
@Self-Duality · 2 years ago
😅🤔
@georgeflitzer7160 · 2 years ago
Never saw any human in Star Trek with brain chips or transhumanism.
@georgeflitzer7160 · 2 years ago
You're the Borg coming for us!
@jessickidopolis9040 · 2 years ago
Ben, are you a good guy or the bad guy?? I lub you, but before we go further I need honesty. Lol
@dqholdings3613 · 2 years ago
I'd love to chat some time. I don't know much about science or numbers, but your view is DIFFERENTLY the SAME...
@jeffjohnson8624 · 2 years ago
With Covid-19, hospitals could use robots to deliver food and medicines to patients. But the robots should be programmed for human physical contact like handshakes and hugs, for patient comfort. Don't put human nurses at risk of Covid-19 infection. ☮️🖖🎶 You could use a disinfectant alcohol spray on the robot to prevent spread between food deliveries to patients.
@larryrollyson3344 · a year ago
Ben, do you know you are famous? You are in the book of Revelation 13:2. Awesome.
@Coach_SebastianEckes · 2 years ago
Great… but it's not really understandable that the annoying guy in the background didn't mute his mic 🤮
@jeffjohnson8624 · 2 years ago
Safety issues with human-level AGIs are also a concern. Bad-guy hackers got into a Xiaomi robot toy and used it to stab a stuffed bunny; it was still a human-caused hack. Again the bad-guy hackers fumble the ball and make themselves look bad. DoS/denial-of-service attacks don't hurt the wealthy one bit; the wealthy CEOs give themselves a bonus for those at the expense of hard-working employees' Christmas bonuses. It's like after a denial-of-service attack, online companies have a surge of business orders, so companies don't lose much business at all. ☮️🖖🎶 Again the hackers fumbled the ball and made themselves look misguided. ☮️🖖🎶 Stop the cyber stalker who harasses me, please. Any recommendations for cyber security, Ben? Especially for AI apps; those need to be protected.
@harveygresham3636 · 2 years ago
Get a haircut would ya?
@spysr · a year ago
Envious of GPT
Ben Goertzel - Approaches Towards a General Theory of General AI
1:17:40
Science, Technology & the Future
6K views
AI in Society - Ethics, Safety, Industry & Governance
49:22
Science, Technology & the Future
12K views
Ben Goertzel in HK - The Singularity 101
37:34
Science, Technology & the Future
13K views
Joscha Bach - Strong AI: Why we should be concerned
33:29
Science, Technology & the Future
36K views
AGI 22 Opening Message |  Ben Goertzel - Open Ended Motivations for AGI
1:01:36
Joscha Bach - GPT-3:   Is AI Deepfaking Understanding?
2:15:35
Science, Technology & the Future
93K views
Joscha Bach - Philosophy of AI - Winter Intelligence/AGI12 Oxford University
19:17
Science, Technology & the Future
13K views
Joscha Bach - Agency in an Age of Machines
1:30:27
Science, Technology & the Future
38K views