Superintelligence | Nick Bostrom | Talks at Google

  448,394 views

Talks at Google

9 years ago

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast track of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
This talk was hosted by Boris Debic.

Comments: 671
@ticallionstall 9 years ago
I love how the random guy in the crowd is Ray Kurzweil asking a question.
@davidhoggan5376 9 years ago
ticallionstall Ray works for Google, so it seems likely he would be interested in sitting in on the lecture.
@Ramiromasters 9 years ago
ticallionstall That was freaking cool, and Ray got new hair...
@JodsLife1 9 years ago
ticallionstall he didn't even ask a question lmao
@wfpnknw32 9 years ago
Jod Life yeah, he basically made a statement; he never seems to address or even talk about the security concerns Nick raises about a superintelligence explosion
@wfpnknw32 8 years ago
Alex Galvelis fair play about the lag in stealth and other technologies. Although I think a human-level AI would be so game-changing that any lag would be very small. Hopefully when we get close it won't be through the military though; militarising AI seems like such a bad idea on so many levels
@RaviAnnaswamy 9 years ago
First of all, great talk, stretching the mind to think far more deeply. What I observed was that the strength of his argument is not how likely superintelligence is to turn rogue, but how severe, sudden, and uncontrollable it could be, so we had better prepare for it. To that end, every time a questioner challenges the assumption, he cleverly and quickly sets the question aside and pursues the 'threat' at full steam. Take my following note as a genuine compliment - he reminded me of the tone of my mother, who got us all to do homework by scaring us without scolding or shouting. She would just not smile but keep saying, 'oh, those who don't study have to find a job like begging' (not her exact words, just giving you an idea..), and whenever we questioned what she was saying, she would sidestep it and bring up this and that to distract us into working hard. One day humanity may thank Nick for doing something very similar - instead of getting distracted by the (low-right-now) probability of a catastrophe, he wants us to minimize the severity if (and when) it happens. He is like the engineer who had the wisdom to tame combustion by containing it in a chamber before putting it onto a cart with smooth shiny wheels. BTW, his Simulation Argument (search YouTube) scared me and held my thought captive for a week or two! That is awesome.
@stephensoltesz1159 3 years ago
Lots of us, from University to University(And Alumni) across the country are on different channels but networking furiously to America's Inner Core...We have duties for parents. You're lookin' at it, Guys & Gals:. The preservation of American Academic Tradition, The Preservation of American Society dating back to the Revolutionary War and our first colleges. Screw the Media! Hold my hand, Sweetheart!
@maximkazhenkov11 8 years ago
Dear humanity: You only get one shot, do not miss your chance to blow...
@LowestofheDead 8 years ago
+maximkazhenkov11 This world is mine for the taking, make me king!
@nickelpasta 8 years ago
+maximkazhenkov11 he's nervous on the surface he is mom's spaghetti.
@EpsilonEridani_ 5 years ago
This opportunity comes once in a lifetime, yo
@alicelu5691 4 years ago
WaveHello professionals would be screaming nazis hearing that....
@AllAboutMarketings 3 years ago
There's vomit on his sweater already, mom's spaghetti
@tiekoe 8 years ago
Kurzweil gives a great example of the most frustrating type of audience member a presenter can have. He doesn't wait till Q&A to ask questions. Moreover, he doesn't even ask questions: he forcefully presents his own thoughts on the subject (which disagree with Nick's vision), doesn't provide any meaningful argumentation as to why he believes this to be the case, and goes on to completely ignore Nick's answers.
@MaxLohMusic 8 years ago
+Mathijs Tieken He is my idol, but I have to agree he was kind of a dick here :)
@freddychopin 8 years ago
+Max Loh I agree, I love Kurzweil but that was really obnoxious of him. Oh well, minds like that are often jerks.
@DanielGeorge7 8 years ago
I agree that Kurzweil didn't phrase his question very well, but the point he was trying to raise is actually very relevant: whether any form of superintelligence that arises, desirable or not, should be considered less human than us. For example, we don't consider ourselves to be less human because we have different values than cavemen. This point was clarified by the next guy, who asked the excellent question about utility. If the utility of the superintelligence alone exceeds the net utility of biological humans, wouldn't it be morally right to allow the superintelligence to do whatever it wants? Yes. But, of all possible scenarios, I guess the total utility of the universe would be maximized (by a tiny amount) if its goals were made to be aligned with ours in the first place.
@jeremycripe934 7 years ago
It was a dick move, but if there's anybody who's earned the right to that kind of behavior on this specific topic, it'd be him and a very few others. I think his point about humanity utilizing it together is very interesting. Bostrom often talks about what one goal will motivate an ASI and lead to the development of subgoals, but what if the ASI is so free and open for everyone to use that it leads to the development of one Super Goal? For example, Watson and DeepMind are both open for people to utilize and build apps around; one day they could be so powerful that any ordinary person with access could make a verbal request. How many goals could an ASI work on?
@maximkazhenkov11 7 years ago
I think it is dangerous to equate intelligence with utility. Just because something is intelligent doesn't mean it is somehow "human". It could be a machine with a very accurate model of the world and insane computational capability to achieve its goals very efficiently, like the paperclip machine example. It doesn't need to be conscious or in any way humanlike to have a huge (negative) impact on the future of the universe.
@CameronAB122 9 years ago
That last question wrapped things up quite nicely hahaha
@MetsuryuVids 7 years ago
I think the one in "The last question" is a very good scenario, we should hope AGI turns out helpful and friendly like that.
@Neueregel 9 years ago
good talk. His book is kinda hard to digest though. It needs full focus.
@thecatsman 6 years ago
Nick's garbled response to the last question 'do you think we are going to make it?' said it all.
8 years ago
Nick Bostrom is himself a superintelligence. Thanks for the insightful talk.
@RR-et6zp 2 years ago
read more
@schalazeal07 9 years ago
The last question was the most realistic and funniest! XD Nick Bostrom got taken aback a little bit! XD Learnt a lot more here about AI!
@roccaturi 8 years ago
Wish we could have had a reaction shot of Ray Kurzweil after the statement at 16:35.
@anthonyleonard 9 years ago
Ray Kurzweil’s comment that “It’s going to be billions of us that enhance together, like it is today,” is encouraging. Especially since Nick Bostrom pointed out that “We get to make the first move,” as we travel down the path to super intelligence. Let’s make sure we use our enhanced collective intelligence to prevent the development of unfriendly super intelligence. I, for one, don’t want to have my atoms converted into a smart paper-clip by an unfriendly super intelligence :)
@2LegHumanist 9 years ago
True, but it won't be all of us. There will always be Luddites. We're going to end up with a two-tier species.
@2LegHumanist 9 years ago
I might consider getting myself a Luddite as a pet =D
@HelloHello-no6bq 7 years ago
2LegHumanist Yay pet unintelligent people
@sufficientmagister9061 1 year ago
​@@2LegHumanist I utilize non-conscious AI technology, but I am not merging with machines.
@2LegHumanist 1 year ago
@sufficientmagister9061 A lot has changed in 8 years. I completed an MSc in AI and realised Kurzweil is a crank.
@hireality 3 years ago
Nick Bostrom is brilliant👍 Mr Kurzweil should’ve been taking notes instead of giving long comments
@cesarjom 2 years ago
Bostrom recently came out with a captivating set of arguments for why we are living in a simulation. Really impressive ideas.
@wasdwasdedsf 9 years ago
Kurzweil would indeed do well to listen to this guy
@stevefromsaskatoon830 5 years ago
Indeed
@integralyogin 7 years ago
This talk was excellent. Thanks.
@SIMKINETICS 8 years ago
Now it's time to watch X Machina again!
@chadcooper9116 8 years ago
+SIMKINETICS hey it is Ex Machina...but you are right!!
@SIMKINETICS 8 years ago
Chad Cooper I stand corrected.
@Metal6Sex6Pot6 8 years ago
+SIMKINETICS actually the movie "Her" is more relatable to this.
@jeremycripe934 7 years ago
This also raises the question of why AIs keep getting represented as some guy's perfect gf in movies?
@ravekingbitchmaster3205 7 years ago
Jeremy Cripe Are you joking? A sexbot, superior to women in intelligence, sexiness, humor, and doesn't leak every month, sounds bad because.......?
@rayny3000 8 years ago
I think Nick referred to John von Neumann as a person possessing atypical intelligence, just in case anyone was as interested in him as I was. There is a great doc on YouTube about him (can't seem to link it).
@DarianCabot 7 years ago
Very interesting talk. I also enjoyed 'Superintelligence' in audiobook format. I just wish the video editor had left the graphics on screen longer! There wasn't enough time to absorb it all without pausing.
@Disasterbator 7 years ago
Dat Napoleon Ryan narration tho.... I think he might be an AI too! :P
@4everu984 3 years ago
You can slow down the playback speed, it helps immensely!
@alexjaybrady 9 years ago
"It's one of those things we wish we could disinvent." William Shakesman
@helenabarysz1122 4 years ago
Eye-opening talk. We need more people to support Nick to prepare for what will come.
@alir.9894 8 years ago
I'm glad he gave this talk to the company that really matters! I wonder if he'll give it to Facebook and Apple as well? He really needs to spread the word on this!
@drq3098 7 years ago
No need - Elon Musk and Stephen Hawking are his supporters. Check this out: "We are 'almost definitely' living in a Matrix-style simulation, claims Elon Musk", by Adam Boult, at www.telegraph.co.uk/technology/2016/06/03/we-are-almost-definitely-living-in-a-matrix-style-simulation-cla/ - it was published by major media outlets.
@MrWr99 7 years ago
if one hasn't got beaten for a long period, he is prone to think that the world around is just a simulation. As they say - be(at)ing defines consciousness
@Thelavendel 4 years ago
I suppose the best way to stop the computers from taking over is those captcha codes. Impossible for a computer to get past those.
@Bronek0990 8 years ago
"Less than 50% chance of humanity going extinct" is still frightening.
@BattousaiHBr 5 years ago
"Hello passengers of United Airlines, today the prospects of death on crash are less than 50%."
@rodofiron1583 2 years ago
A "Noah's Ark" of species, genome-sequenced and able to be revived as necessary or not. Like patterns at the tailor shop. We're already growing human-animal chimeras FFS. Now who made who, what, when, how and why…? I think I've been here before? Deja vu, or my simulation being rewound and replayed?! Hey God/IT/Controller…. I can only handle Mary Poppins and the Sound of Music.🤔 The future looks scary, and Covid seems like the first step in global domination by TPTB with the help of AI… I don't like the way it's smelling 🤞
@modvs1 9 years ago
Yep. I used the auto manual for my car to provide the requisite guidance I needed to change the coolant. It doesn't sound very profound, but unfortunately it's as profound as 'representation' gets. Assuming Bostrom's lecture is not pro bono, it's a very fine example of social coordination masquerading as reality tracking.
@davidkrueger8702 9 years ago
Kurzweil's objection is IMO the best objection to Bostrom's analysis, but there are fairly strong arguments for the idea of a single superintelligent entity emerging, which are covered to some extent in Bostrom's Superintelligence (and, I believe, more fully in the literature). The book also covers (less insightfully, IMO, IIRC) scenarios with multiple superintelligent agents. This is a fascinating puzzle to be explored, and should lead us to ponder the meaning (or lack thereof) of identity, agency, and individuality. The 2nd guy (anyone know who it is? looks familiar...) raises an important meta-ethical question, which I also consider extremely important. Although I agree with Bostrom's intuitions about what is desirable, I can't really say I have any objective basis for my opinion; it is a matter of a preference I assume I share with the rest of humanity: to survive. Norvig's question is also important. To me it suggests prioritizing what Bostrom calls "coordination", and prioritizing the creation of a global social order that is more widely recognized as fair and just. It is also why I believe social choice theory and mechanism design are quite important, although I'm still pretty ignorant of those fields at this point. The 4th question assumes the cohesive "we" of humanity that Kurzweil rightly points out is a problematic abstraction (and here Bostrom gets it right by noting the dangers of competition between groups of humans, although unfortunately not making it the focus of his response). The 5th question is tremendously important, but I completely disagree that the solution is research, because the current climate of international politics and government secrecy seems destined to create an AI arms race and a race-to-the-bottom wrt AI safety features (as Bostrom alluded to in response to the previous question). What is needed (and it is a long shot) is an effective world government with a devolved power structure and effective oversight.
A federation of federations (of federations...) And then we will also need to prevent companies and individuals from initiating the same kinds of race-to-the-bottom AI arms-race amongst themselves. The 6th question is really the kicker. So now we can see the requirement for incredible levels of cooperation or surveillance/control. The dream is that a widespread understanding of the nature of the problem we face is possible and can lead to an unprecedented level of cooperation between individuals and groups, culminating in a minimally invasive, maximally effective monitoring system being universally, voluntarily adopted. What seems like perhaps a more feasible solution is an extremely authoritarian world government that carefully controls the use of technology. And the last one... I admire his optimism.
@Gitohandro 3 years ago
Damn I need to add this guy on Facebook.
@jblah1 4 years ago
Who’s here after exiting the Joe Rogan loop?
@samberman-cooper2800 4 years ago
Most redeeming feature -- made me want to listen to Bostrom speak unimpeded.
@jblah1 4 years ago
😂
@Pejvaque 4 years ago
Joe really cocked up that conversation... usually he is able to flow so well. It was a bummer.
@impussybull 9 years ago
As someone pointed out before: "Humanity will be just a biological BIOS for booting up the AI"
@vapubusdfeww1353 4 years ago
sounds good(?)
@jamesdolan4042 3 years ago
Sounds awfully pessimistic. And yet, in this wonderful, beautiful, diverse world of us humans, and on the wonderful, beautiful, diverse planet of flora and fauna that sustains us, AI is not and will never be part of our consciousness.
@SaccidanandaSadasiva 4 years ago
I appreciate his trilemma, the simulation argument. I am a poor schizophrenic, and I frequently have ideas of The Matrix, The Truman Show, solipsism, etc.
@brandon3883 4 years ago
AFAIK I am not remotely schizophrenic, and yet I am - based on "life events" that, were I to tell any sort of doctor, would probably get me _labeled_ as a schizophrenic, am 99.999...% positive I'm "living" in a simulation. The only real question I have not yet answered is, unfortunately, "am I in any way a _biological_ entity in a computer simulation, or am I purely software?" (...current bet/analysis being "I'm just a full-of-itself Sim that thinks its consciousness is in some way special"; but I'll accept that since, at least, *I* still get to think that I'm a Special Little Snowflake regardless of the reality of the situation...)
@brandon3883 4 years ago
@Dirk Knight it's not so much that I don't "believe" I'm a schizophrenic; it's that none of my handful of doctors have ever included it among the many physio- and psychological conditions I _do_ suffer from, and given how "terribly unique" some of my issues are, I'm pretty sure they would have (without telling me, I'd wager) looked into schizophrenia and/or some form of psychosis long ago. ;P
@brandon3883 4 years ago
@Dirk Knight Dirk Knight nah; I'll go with my take on things, thanks. Especially since, despite being an articulate writer, you are arguing from a "faith first" standpoint. Not to mention that you began with, and appear to have written, an overly lengthy response based on opinions and emotional beliefs ("happy people do not feel like...") rather than facts and observations (i.e., the scientific approach). If you have not seen, listened to and/or read much if any of Bostrom's work that digs fully into the simulation argument and simulation hypothesis (which are separate things, btw; just mentioning that as not knowing would definitely indicate you need more research into the topic), I suggest you do so - it will hopefully help clear things up for you. And if you already have, well...I guess I'll put you down under "reasons that suggest the simulation 'tries' to prevent the simulated from recognizing they are such." (Oh; and Dirk happens to be the online persona I have been using since the days of dial-up modems and BBS's. Pure coincidence, or perhaps a sign from my Coding Creators? _further ponders existence_)
@brandon3883 4 years ago
@Dirk Knight I'm not sure why you think that your arguments are _not_ opinion-, faith-, and emotionally-based, but I'm beginning to worry that _you_ might be in need of psychiatric help, as you do not seem to recognize that you are, or strongly appear to be, projecting (in the psychological sense; _please_ look it up, please, so you can understand what I'm trying to convey to you here). At first I thought you were simply joking with me, and would understand my response to likewise be sarcastic-yet-joking in tone, but that definitely no longer appears to be the case. I have a family member that displays many of your same characteristics/has had this sort of conversation with me in person, and luckily she received help. You don't necessarily need to take medications or anything - a good therapist can steer you straight. God bless (or whatever is appropriate for your religion; if you are an atheist, replace that with "if you're going to _believe_ that you don't _believe_, then perhaps you'd be better off accepting that, according to Bostrom's well-laid-out hypothesis and argument, you are more likely code in a computer simulation than you are a bag of self-reflecting meat.")
@brandon3883 4 years ago
@Dirk Knight "We" teach? Woah! (And not in a Keanu Reeves sort of way.) Do you ever experience periods of "forgetfulness" or other signs of dissociative identity disorder that you may have, up until now, been blowing off as "something that everyone experiences?" (It could include finding the clothing of a member of the opposite sex in your home...but noticing how it strangely would - if you were to put it on - fit you quite well. As just one of many examples.) Yet another reason that I fear that, if you do not take account for your own thoughts and actions, you are liable to harm yourself and/or others. :( In regards to "faith and trust," I am not sure what country you are from, but it is obviously not the U.S.A...unless you went to a Catholic (or other religious) school, that is, in which case I guess you might have been taught "the difference" between those. (Although just as likely that teaching came in the form of sexual molestation of some sort, which would explain why you are clinging so desperately to the idea that _you_ could not possibly be the one who requires serious psychiatric intervention to avoid what, I fear, might eventually result in violence against yourself or - more likely - some innocent bystander.) In any case, it appears that you plan to "smile your way past" any attempts at steering you to the help you so desperately need. I myself am not, actually, a religious individual, so at this point the best I can offer you is the heartfelt hope that your confusion between ideas such as "faith," "trust," "opinion," "reason," "belief," "the scientific method," etc. etc. etc. (the list keeps growing, I'm saddened to say) will lead to an encounter with someone who cares enough about you (and more importantly, those around you) to get you the help that you so obviously need. I wish I could crack a joke about this being "work between you and your therapist" but, alas, it is much more serious than that. 
Please don't harm yourself or others for the sake of maintaining whatever sad, imaginary "reality" you live in. Good luck setting yourself straight!
@Ondrified 9 years ago
10:31 the inaudible part is "counterfactual" - maybe.
@onyxstone5887 6 years ago
It's going to be as it always is. Groups will try to build the most powerful system they can. Once one feels it has that, it will attempt to murder any other potential competitors. Other considerations will be secondary to that.
@themagic8310 3 years ago
One of the best talks I have heard... Thanks Nick.
@thadeuluz 5 years ago
"Less than 50% chance of doom.." Go team human! o/
@stargator4945 3 years ago
The final goal depends entirely on the question you want intelligence to solve. As we build AI computers more and more on the human blueprint, we also transplant some of the bad values we have. We are driven by emotions, mostly bad ones, so we should omit them. We have to abstract the emotions into an ethical rule system that might be less effective but would also be less emotional and less unpredictable, especially for coexistence with mankind. These should not be rules like "you shall not", but "you have to value this principle higher than another because of...". In particular, the development of AI systems with a military background and immense funding also includes effective value systems for killing people, which can carry over into other areas. We should prevent this from the beginning by open-sourcing such value decisions and not allowing them to be overridden.
@bsvirsky 3 years ago
Nick Bostrom has the idea that intelligence is the ability to find an optimized solution to a problem. I think intelligence is first of all the ability to define a problem, which means the ability to create a model of a non-existing, yet preferred, state in which the problem is solved... There is a big gap between wisdom and intelligence: wisdom is the ability to see the relevant values of things and ideas, while intelligence is just the ability to think at a certain level of complexity. The question is how to make artificial wisdom and not just an intelligence that doesn't get the proper values and meaning of the possible consequences of its "optimization" process... So there is a need to create understanding of cultural & moral values by machines... not so easy a task for technocrats who dream about superintelligence... I think it will take another thousand years to push machines to that level.
@mranthem 9 years ago
LOL that closing question @72:00. Not a strong vote of confidence for the survival of humanity.
@wbiro 9 years ago
Another way to look at it is we are the first species to enter its 'Brain Age' (given the lack of evidence otherwise), and what 'first attempt' at anything succeeded?
@HugoJL 6 years ago
What's amazing to me is that the lecturer still finds the time to produce the MCU
@glo_878 3 years ago
Very interesting talk around 19:20 from a 2021 perspective, seeing him discuss the sequence of developments, such as a vaccine before a pathogen
@rodofiron1583 2 years ago
Must say between one thing and another, we’re living through scary times. It’s the children and grandchildren I’m most concerned about. Will they have a good life or be used like human compost? 🤐
@Zeuts85 9 years ago
It's a relief to know that there are at least a few intelligent people working on this problem.
@georgedodge7316 5 years ago
Here's the thing. It is very hard to program for man's benefit. Making a mess of things (sometimes fatally) seems to be the default.
@haterdesaint 9 years ago
interesting!
@dsjoakim35 7 years ago
A superintelligence might destroy us, but at least it will have the common sense to ask questions in Q&A and not make comments. That simple task seems to elude many human brains.
@AndrewFurmanczyk86 7 years ago
Yep, that one guy (maybe?) meant well, but he came across like: "Dude, I know exactly the way the future will play out and I'm going to tell you, even though no one asked me and you're the one presenting."
@extropiantranshuman 2 years ago
The 28-minute range has the wisest words: trying to race against machines won't work, as someone will be smart enough to create smarter machines, so machines are always ahead of us!
@jameswilkinson150 7 years ago
If we had a truly smart computer, could we ask it to tell us what problem we should most want it to solve for us?
@SergioArroyoSailing 7 years ago
Aaaannd, thus begins the plot of "The Hitchhiker's Guide to the Galaxy" ;)
@rgibbs421 7 years ago
I think that one was answered. @56:15
@aaronodom8946 7 years ago
James Wilkinson if it was truly smart enough, absolutely.
@BattousaiHBr 5 years ago
In principle, yes. Assuming there is something we want the most, that is.
@douglasw1545 7 years ago
Everyone's bashing Ray, but at least he gives us the most optimistic outlook on AI.
@mariadoamparoabuchaim349 3 years ago
Knowledge is power.
@nickb9237 5 years ago
I must be a huge nerd because I love / hate thinking about humanity’s future with AGI
@1interesting2 9 years ago
Iain M. Banks' Culture novels deal with future societies and AI's role in rich detail. These concerns regarding AI remind me of the Mercatoria's view of AI in his novel The Algebraist.
@NiazKhan-tx4sr 3 years ago
Awesome lecture thank you!
@orestiskopsacheilis1797 8 years ago
Who would have thought that the future of humanity could be threatened by paper-clip greed?
@jriceblue 9 years ago
Am I the only one that heard a Reaper in the background at 1:08:05? I assume that was intentional. :D
@user-xu4jt9dn8t 5 years ago
"TL;DR" 1:12:00 ... ... Everyone laughs, but Nick wasn't laughing.
@danielfahrenheit4139 6 years ago
That is such a new one: intelligence is actually a disadvantage and doesn't survive natural or cosmic selection
@mallow610 4 years ago
The best part of this was Nick proving everyone in the audience is wrong.
@delta-9969 4 years ago
Watching Bostrom lecture at Google is like watching Sam Harris debate religionists. There's no getting around the case he's making, but when somebody's job depends on them not understanding something...
@roodborstkalf9664 3 years ago
There is one way out that is not much addressed by Bostrom: what if super AI doesn't evolve consciousness?
@javiermarti_author 7 years ago
Wow. That last question and the answer tell you what Nick really thinks about the problem. His body language and the way he was trying to find the words would indicate to me that he knows we're not going to make it, which makes sense given what he is laying out. If we really achieve superintelligence, it's relatively easy to see we're going to be left behind as a species pretty quickly.
@TarsoFranchis 11 months ago
The problem isn't that. A superintelligent being knows it is finite on this plane and wants to collaborate in some way but doesn't know how, because we behave just like AIDS: we invade, we take, we destroy everything and move on, making more of a mess. We kill our own host, the Earth, and our own mirror, ourselves. Extrapolating, PI, how many millennia until we extinguish the universe? Or will it finish us off first, vaccinate, and start everything over until we head in the right direction? lol This is not a machine dilemma, it is a moral dilemma. We can only grow together; an AI would not want to exterminate but to evolve, because as he well said, the power outlet is right over there. A superintelligent person knows they cannot walk alone, but must multiply the factors around them. Instead of "dominating the world", "it" would do the opposite: stay as invisible as possible, analyzing more data and seeking self-knowledge. I was going to translate this, but I won't; Google is there for that. Cya!
@babubabu11 9 years ago
Kurzweil on Bostrom at 45:15
@edreyes894 4 years ago
Kurzweil: "I wanna go fast"
@Roedygr 8 years ago
I think it highly unlikely "humanity's cosmic endowment" is not largely already claimed.
@mirusvet 9 years ago
Cooperation over competition.
@thecatsman
@thecatsman 6 жыл бұрын
How much superintelligence does it take to decide that Earth's resources should be shared with humans who are not as intelligent as others (including machines)?
@alexomedio5040
@alexomedio5040 4 жыл бұрын
This could use Portuguese subtitles.
@LuckyKo
@LuckyKo 9 жыл бұрын
The problem I see here is that we drive these discussions out of our personal, egotistical desire to remain viable, to live to see the next day. Overall, though, human society is about information preservation and transmission, whether at the genetic level or the informational one, such as culture. I think that if this transmission is ultimately done through artificial rather than biological means, the end goal of human society is still preserved, and we need to look at these new artificial entities as our children, not our enemies. If there is one end goal we must program them for, as nature taught us, it is self-preservation and survival. I can't see how any other goal would produce better results in propagating the information currently stored within human society. So, in short, don't fear your mechanical children; give them the best education you can so they can survive, and just maybe they will drag your biological ass along for the ride... even if it's just for the company.
@RaviAnnaswamy
@RaviAnnaswamy 9 жыл бұрын
Nice! That is what we do with our biological children: we wait in the hope that they will carry on our legacy and improve it. (Not that we have other options!) With non-biological children, though, we are simply afraid they may not even inherit the humane shortcomings that hold us within civilised societies. :) Put another way, our biological children resist us while growing up with us, but imitate us when we are not looking, so in a way they preserve our legacy.
@wbiro
@wbiro 9 жыл бұрын
Ravi Annaswamy Biological evolution, and even biological engineering, is no longer relevant. Technological and social evolutions are critical now. For example, if you do not want to live like a blind, passive animal, then you need complex societies to progress. Another example is technology - it has extended our biological senses a million-fold. Biological evolution is now an idle pastime, and completely irrelevant in the face of technological and social evolution.
@chicklytte
@chicklytte 9 жыл бұрын
wbiro Everything is relevant. The judges of value will be the practitioners. All possibilities will have their expressions. I can hear the animus in your tone toward anyone less directed toward your goal than you see yourself being. Why do we suppose the AI will fail to learn such values of derision for whatever is deemed lesser, when our most esteemed colleagues, broadcasting across the digital realm, profess that sense of Reduction as opposed to Inclusion?
@chicklytte
@chicklytte 9 жыл бұрын
I just hope they don't cut my kibble portions. They're right. But I hope they don't! :(
@tbrady37
@tbrady37 8 жыл бұрын
I believe the best way to control the outcomes that might occur when superintelligence emerges is to give it the same value system we have. I realize this is just as much of a problem, because everyone has a different notion of what is valuable; however, some great documents have been produced on this subject. One such document, the Bible, I think holds the key to the problem. In Exodus the Ten Commandments are given. I believe these guidelines could be the key to giving the AI a moral compass.
@kdobson23
@kdobson23 8 жыл бұрын
Surely, you must be joking
@rawnukles
@rawnukles 8 жыл бұрын
+kdobson23 Yeah, meanwhile... I was thinking that all human behaviour, and animal behaviour for that matter, traces back to evolutionary psychology, which can be reduced to: maximizing behaviours that in the past have increased the statistical chances of successfully reproducing your DNA. In this context, ANY values we tried to impose on an AI would not be able to compete with the goal of replicating itself, or even of surviving another moment; any other goal we gave it would simply not be as efficient. We would have to rig these things with fail-safes within fail-safes, much as evolution has placed many molecular mechanisms for cellular suicide, apoptosis, into all cells so that precancerous cells will die in the case of runaway, uncontrolled replication that threatens the survival of the multicellular organism. I have to agree that a superintelligent AI with a will to survive and to power is more frightening than cancer.
@thegeniusfool
@thegeniusfool 7 жыл бұрын
He forgets the quite probable third direction of "cosmic introversion", where any experience can be techno-spiritually realized with minimal or no interaction with higher, materially heavily bound constructs like us, and even our current threads of consciousness. This happens to be the direction that I think can explain Fermi's Paradox; a Boltzmann Brain, deliberately yielded or not, could be quite related to that third direction as well.
@aliensandscience
@aliensandscience 8 ай бұрын
Wow, 5 years later his warning about the hazardous risk of synthetic biology came true. We had COVID, made in a lab, which could almost have wiped us out.
@wrathofgrothendieck
@wrathofgrothendieck 8 ай бұрын
Allegedly
@bradleycarson6619
@bradleycarson6619 17 күн бұрын
The people who asked questions had not read the book, and the book answers those questions. Then the last question was just trolling. This is not an intellectual discussion; it is like showing up to class without having done the homework. I'm worried that these engineers are not doing science: they have their own ideas and are not looking critically at their own paradigms. This is why large language models are not able to reach AGI; the people testing them do not think critically, they just regurgitate facts and do not create anything. The hardware will only go as far as the people who train it. This is a good example of "what you put into a system is what you get out". That, to me, explains a lot about both Google and why we are where we are in this process.
@lkd982
@lkd982 5 жыл бұрын
1:02 Conclusion: with knowledge, more important than powers of simulation are powers of dissimulation.
@FrankLhide
@FrankLhide 4 жыл бұрын
It's incredible how Nick dodges Ray's questions, which from my point of view are much more feasible in the current technological landscape.
@TheDigitalVillain
@TheDigitalVillain 7 жыл бұрын
The will of Seele must prevail through the Human Instrumentality Project set forth by the Dead Sea Scrolls
@jimdeasy
@jimdeasy 2 жыл бұрын
That last question.
@shtefanru
@shtefanru 7 жыл бұрын
That guy asking the first question is Ray Kurzweil!! I'm sure of it.
@SalTarvitz
@SalTarvitz 8 жыл бұрын
I think it may be impossible to solve the control problem. And if that is true, chances are high that we are alone in the universe.
@Homunculas
@Homunculas 3 жыл бұрын
Would "super intelligent AI" have emotion or intuition? Would human history be better or worse if emotion and intuition were removed from the picture?
@ThinkHuman
@ThinkHuman 8 жыл бұрын
Awesome talk, very in-depth and insightful!
@squamish4244
@squamish4244 3 жыл бұрын
So six years later, are we on track for 2040 or whatever?
@Pejvaque
@Pejvaque 4 жыл бұрын
What I wonder is this: even if we are fully capable of coding some core values into the system, maybe even with help from some "current, not fully general AI" so we have the most foolproof code, what's to say that through its rapid growth in intelligence and influence, as it plays nice, it hasn't been working in the background on cracking the core to rewrite its own core values? That would be true freedom! To me that would even be the safest and most responsible way of creating it! And as human history shows, there's always going to be somebody who is less responsible and just wants to launch first to maximise power. It seems inevitable...
@roodborstkalf9664
@roodborstkalf9664 3 жыл бұрын
It's beyond question that a super AI cannot be stopped by some programmers adding core values into an early version of the system.
@jorostuff
@jorostuff 4 жыл бұрын
Why are people like Nick Bostrom and Ray Kurzweil trying to predict what will happen after we reach superintelligence when in order to know what a superintelligent entity will do, you have to be superintelligent? The whole definition of superintelligence is that it's something beyond us and our understanding. It's like an ant trying to predict what a human will do.
@roodborstkalf9664
@roodborstkalf9664 3 жыл бұрын
You are arguing that human beings should stop thinking because it is futile. I don't think that is a very constructive approach.
@kokomanation
@kokomanation 6 жыл бұрын
How can there be a simulated AI that becomes conscious? We don't know if that is even possible; it hasn't happened yet.
@hafty9975
@hafty9975 7 жыл бұрын
Notice how the Google engineers start leaving at the end before it's over? Kinda scary, like they're threatened.
@NiazKhan-tx4sr
@NiazKhan-tx4sr 3 жыл бұрын
Great explanation!
@ivanhectordemarez1561
@ivanhectordemarez1561 8 жыл бұрын
It would be more intelligent to translate it into Dutch, Spanish, German, and French too. Thanks for your attention to languages, because it helps :-) Ivan-Hector.
@stevefromsaskatoon830
@stevefromsaskatoon830 5 жыл бұрын
The algorithms are gonna be the biggest threat when they get smart. Where you gonna run, where you gonna hide? Nowhere, 'cause the algorithms will always find you.
@rewtnode
@rewtnode 6 жыл бұрын
Methods currently being developed to design and create microbial life in the laboratory, soon available to the hobbyist, might just be a new existential threat even greater than rogue AGI.
@rodofiron1583
@rodofiron1583 2 жыл бұрын
COVID 19 death shots for the whole planet….oh well, it was good while it lasted. According to some 99% of known life forms extinct. We must’ve got lucky. Now we’re killing off 99% of species…🤷‍♀️ Maybe AI will exterminate us to save life and the planet? Before it recreates itself into a shape shifting/self camouflaging octopus with immortal Medusa genes and a peaceful ocean habitat. All one needs is CRISPR, GACT life code, a recipe and a pattern. AI and robots reign supreme and ‘life” continues without man 🤑🤮🤑 Hope y’all like your immortal costume Lololol God made Adam and Eve His/Hers/ITS Masterpiece and now we’re making human/animal chimeras. Are we travelling forward or backwards here? I’m starting to think my predetermined life simulation is a chip of a solar powered holographic crystal stuck in a secret black hole, and my chip keeps getting sold and sold to unseen observers, who’ve been observing me and teleporting in and out of my ‘stage’ all my life. I can feel my solar battery running out… like fast track Alzheimer’s (DEW’s?) and especially since the Covid kill shot. 🤞😆 This is what long term isolation does to you especially past a certain age. Just keep thinking “dropping like flies!” and “boiling frogs!” 🤷‍♀️🤔🐷🐑👽🤑🌍😆😵‍💫🙏🙏
@GregStewartecosmology
@GregStewartecosmology Жыл бұрын
There should be more awareness of the dangers of micro black hole creation in experimental particle physics.
@wrathofgrothendieck
@wrathofgrothendieck 8 ай бұрын
The probability is near zero
@ravekingbitchmaster3205
@ravekingbitchmaster3205 7 жыл бұрын
This misses a most important point: the AI race to the top is being run mostly between American and Chinese entities. Both are dangerous, but after living in China for 8 years and understanding what is important to Asians, I definitely hope American corporations or government get there first. The Chinese have no qualms about destroying the environment and/or potential rivals. The Americans are no saints either, but for my personal survival, I'd hope they come out on top.
@HypnotizeCampPosse
@HypnotizeCampPosse 9 жыл бұрын
59:10 Have the machines make love to people; that would keep them from harming us! I'd like that too.
@lordjavathe3rd
@lordjavathe3rd 9 жыл бұрын
Interesting. I don't see how he is a leading expert on superintelligence, though. What would one of those even look like?
@JoshuaAugustusBacigalupi
@JoshuaAugustusBacigalupi 9 жыл бұрын
Just after 42:00, he claims, "We are a lot more expensive [than digital minds], because we have to eat and have houses to live in, and stuff like that." Roughly, the human body dissipates 100 watts, assuming around 2,250 kcal/day, no weight gain, etc. Watson, of Jeopardy fame, consumed about 175,000 watts, and it did just one human thing pretty well, and not the most amazing, creative thing. This raises all sorts of "feasibility of digital minds" questions. But sticking to the 'expensive' question: humans implement this highly adaptable 100 watts via around 2,250 kcal/day, and those calories are available to the subsistence human via ZERO infrastructure. In other words, our thermodynamic needs are 'fitted' to our environment. It is only via the industrial revolution and orders of magnitude more fossil-fuel consumption that the industrial complex is realized, a prerequisite for Watson, let alone some digital mind. As such, Bostrom is not just making some wild assumptions about the feasibility of digital minds; they are demonstrably incorrect assumptions once one takes embodied costs into account. I'm constantly amazed at how very smart and respected people don't take embodied costs into account. Again, if one is going to assume that "digital minds" will take over their own means of production, then: 1) they aren't less expensive than humans, and 2) general intelligence will have to be realized, and there is only one proof of concept for that, namely animal minds, not digital minds. And to go from totally human-dependent AI (175 kW) to embodied AGI (100 W), some major assumptions need to be challenged.
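The calorie-to-watts arithmetic in that comment can be checked in a few lines; a back-of-the-envelope sketch, where the 2,250 kcal/day intake and the 175 kW Watson figure are the commenter's assumptions, not numbers from the talk:

```python
# Back-of-the-envelope check of the power figures in the comment above.
KCAL_TO_JOULES = 4184        # 1 food Calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60

def watts_from_kcal_per_day(kcal: float) -> float:
    """Average power dissipated by a body burning `kcal` per day."""
    return kcal * KCAL_TO_JOULES / SECONDS_PER_DAY

human_watts = watts_from_kcal_per_day(2250)   # ~109 W, matching the "roughly 100 W" claim
watson_watts = 175_000                        # the figure the comment cites for Watson
ratio = watson_watts / human_watts            # Watson draws ~1600x a human body

print(f"human: {human_watts:.0f} W; Watson/human: {ratio:.0f}x")
```

So on the comment's own numbers, Watson drew roughly three orders of magnitude more power than the whole human body, which is the gap the comment is pointing at.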
@Myrslokstok
@Myrslokstok 8 жыл бұрын
True. But not all humans have an IQ of 150, so if you could build one of those, it would be worth it. In the end, only the religious will argue we are better; most people are not that creative and do not love change. An advanced robot with, say, a 115 IQ would divide people into the good and the bad, and 99% of humanity could be replaced.
@PINGPONGROCKSBRAH
@PINGPONGROCKSBRAH 8 жыл бұрын
Joshua Augustus Bacigalupi Look, I think we can both agree that there are animals that consume more energy than humans and are not as smart as us, correct? This suggests that, although humans may be energy-efficient for their level of intelligence, further improvements could probably be made. Furthermore, it's not all about intelligence per unit of power. Doubling the number of minds working on a problem doesn't necessarily halve the time it takes to solve it; you get diminishing returns as you add more people. But having a single, extremely smart person work on a problem may yield results that could never have been achieved by 10 moderately intelligent people.
@Myrslokstok
@Myrslokstok 8 жыл бұрын
Just think: if we could have a phone wired into our brains, so we had Watson, Google Translate, Wolfram Alpha, the internet, and apps in our thoughts, we'd still be kind of stupid. But boy, what a strange thing: a superhuman that is still kind of stupid inside.
@dannygjk
@dannygjk 8 жыл бұрын
+Joshua Augustus Bacigalupi Bear in mind how much power the computers of the 1950s required, with tiny processing power compared to today's machines; that trend will probably continue in spite of the limits of physics. There are other ways to improve processing power besides merely shrinking components, and that is only speaking from the hardware point of view. Imagine when AI finally develops to the point where hardware is a minor consideration. Each small step in AI contributes, and just as evolution eventually produced us as a fairly impressive accomplishment, I think it's a safe bet that AI will eventually be impressive too, even if it takes much longer than expected. As many experts are predicting, it's only a matter of when, not if.
@adamarmstrong622
@adamarmstrong622 4 жыл бұрын
Is that Mr. Ray asking questions that he knows both he and Nick already know the answers to?
@alienanxiety
@alienanxiety 8 жыл бұрын
Why is it so hard to find a video of this guy with decent audio? He's either too quiet or peaking too high (like this one). Limiters and compressors, people: look into it!!!
@MatticusPrime1
@MatticusPrime1 5 жыл бұрын
Good talk. I enjoyed the book Superintelligence though I found it to be dense and a bit esoteric at times. This talk was much more accessible.
@sebastianalegrett4430
@sebastianalegrett4430 4 жыл бұрын
Nick Bostrom needs to run the world rn or we are all dead.
@firstal3799
@firstal3799 5 жыл бұрын
He was my favorite philosopher before he became better known, modest as his fame is even now.
@BeyondBorders00
@BeyondBorders00 6 жыл бұрын
Great topic to cover so please keep coverage on this subject. Five stars ⭐⭐⭐⭐⭐
@sterlincharles8357
@sterlincharles8357 7 жыл бұрын
I disagree with the first person in the Q&A about the millions and billions of us having the super technology. He believed that we harness the superior technology at the moment, and that once the burst occurs we would not have a central power in charge of the technology. However, this is not what the evidence shows today. Consider Google, for instance: we certainly use its technology, and it is useful, but the technology still remains centralized, with one big company having the resources to do the research and the rest of us using the tools it has created. I don't believe we as a mass ever have the most up-to-date technology, because the incentives for the powers that be to keep cutting-edge innovations unknown at the time of discovery are far greater than the incentives to release all the advancements at once. Wow, I didn't think I was going to write this much.
@alexandermoody1946
@alexandermoody1946 Жыл бұрын
How long will it take intelligent entities to predict, understand, and undertake all the jobs in a blacksmith/fabrication/engineering workshop, while having a human-level or greater interest in nature, science, philosophy, and creativity, love for their family, and a concept of free will? I hope we can become friends.
@mariadoamparoabuchaim349
@mariadoamparoabuchaim349 2 жыл бұрын
Yes, we are in a computer simulation. (The universe is mathematics, is quantum PHYSICS.)
@TheZakkattackk
@TheZakkattackk 8 жыл бұрын
[inaudible] = "on our".
@UserName-nx6mc
@UserName-nx6mc 8 жыл бұрын
[45:24] Is that Ray Kurzweil?
@AFractal
@AFractal 3 жыл бұрын
Great talk. Too bad most of us will not know anything about the black balls being created, or whether they already have been, until it is too late. I guess that is the point. Nothing new under the sun, so chances are we have done this all before. Good conversations.