I love how the random guy in the crowd is Ray Kurzweil asking a question.
@deehoo40 · 9 years ago
ticallionstall Ray works for Google, so it seems likely he would be interested in sitting in on the lecture.
@Ramiromasters · 9 years ago
ticallionstall That was freaking cool, and Ray got new hair...
@JodsLife1 · 9 years ago
ticallionstall he didn't even ask a question lmao
@wfpnknw32 · 9 years ago
Jod Life yeah, he basically made a statement; he never seems to address or even talk about the security concerns Nick raises about a superintelligence explosion
@wfpnknw32 · 9 years ago
Alex Galvelis fair play about the lag in stealth and other technologies. Although I think a human-level AI would be so game-changing that any lag would be very small. Hopefully when we get close it won't be through the military though; militarising AI seems like such a bad idea on so many levels
@RaviAnnaswamy · 10 years ago
First of all, a great talk, stretching the mind to think far more deeply. What I observed is that the strength of his argument is not how likely superintelligence is to turn rogue, but how severe, sudden, and uncontrollable it could be, so we had better prepare for it. To that end, every time a questioner challenges an assumption, he very cleverly sets the question aside and pursues the 'threat' at full steam. Take my following note as a genuine compliment - he reminded me of the tone of my mother, who got us all to do homework by scaring us without scolding or shouting. She would just not smile but keep saying, 'oh, those who don't study have to find a job like begging' (not her exact words, just giving you an idea..), and whenever we questioned what she was saying she would sidestep it and bring up this or that to distract us into working hard. One day humanity may thank Nick for doing something very similar - instead of getting distracted by the (low-right-now) probability of a catastrophe, he wants us to minimize the severity if (and when) it happens. He is like the engineer who had the wisdom to tame combustion by containing it in a chamber before putting it onto a cart with smooth shiny wheels. BTW, his Simulation Argument (search YouTube) scared me and held my thought captive for a week or two! That is awesome.
@stephensoltesz1159 · 4 years ago
Lots of us, from university to university (and alumni) across the country, are on different channels but networking furiously to America's inner core... We have duties to parents. You're lookin' at it, guys & gals: the preservation of American academic tradition, the preservation of American society dating back to the Revolutionary War and our first colleges. Screw the media! Hold my hand, sweetheart!
@tiekoe · 9 years ago
Kurzweil gives a great example of the most frustrating type of audience member a presenter can have. He doesn't wait until the Q&A to ask questions. Moreover, he doesn't even ask questions: he forcefully presents his own thoughts on the subject (which disagree with Nick's vision), doesn't provide any meaningful argumentation as to why he believes this to be the case, and goes on to completely ignore Nick's answers.
@MaxLohMusic · 9 years ago
+Mathijs Tieken He is my idol, but I have to agree he was kind of a dick here :)
@freddychopin · 8 years ago
+Max Loh I agree, I love Kurzweil, but that was really obnoxious of him. Oh well, minds like that are often jerks.
@DanielGeorge7 · 8 years ago
I agree that Kurzweil didn't phrase his question very well, but the point he was trying to raise is actually very relevant: whether any form of superintelligence that arises, desirable or not, should be considered less human than us. For example, we don't consider ourselves less human because we have different values than cavemen. This point was clarified by the next guy, who asked the excellent question about utility. If the utility of the superintelligence alone exceeds the net utility of biological humans, wouldn't it be morally right to allow the superintelligence to do whatever it wants? Yes. But, of all possible scenarios, I guess the total utility of the universe would be maximized (by a tiny amount) if its goals were made to be aligned with ours in the first place.
@jeremycripe934 · 8 years ago
It was a dick move, but if there's anybody who's earned the right to that kind of behavior on this specific topic, it'd be him and very few others. I think his point about humanity utilizing it together is very interesting. Bostrom often talks about what one goal will motivate an ASI and lead to the development of subgoals, but what if an ASI that is free and open for everyone to use leads to the development of one Super Goal? For example, Watson and DeepMind are both open for people to utilize and build apps around; one day they could be so powerful that any ordinary person with access could make a verbal request. How many goals could an ASI work on?
@maximkazhenkov11 · 8 years ago
I think it is dangerous to equate intelligence with utility. Just because something is intelligent doesn't mean it is somehow "human". It could be a machine with a very accurate model of the world and insane computational capability to achieve its goals very efficiently, like the paperclip machine example. It doesn't need to be conscious or in any way humanlike to have a huge (negative) impact on the future of the universe.
@maximkazhenkov11 · 9 years ago
Dear humanity: You only get one shot, do not miss your chance to blow...
@LowestofheDead · 9 years ago
+maximkazhenkov11 This world is mine for the taking, make me king!
@nickelpasta · 8 years ago
+maximkazhenkov11 he's nervous, but on the surface he is mom's spaghetti.
@EpsilonEridani_ · 5 years ago
This opportunity comes once in a lifetime, yo
@alicelu5691 · 5 years ago
WaveHello professionals would be screaming nazis hearing that....
@AllAboutMarketings · 3 years ago
There's vomit on his sweater already, mom's spaghetti
@domsau2 · 5 months ago
10 years after: so brilliant!
@Neueregel · 9 years ago
Good talk. His book is kinda hard to digest though. It needs full focus.
@schalazeal07 · 9 years ago
The last question was the most realistic and funniest! XD Nick Bostrom was taken aback a little bit! XD Learnt a lot more here about AI!
8 years ago
Nick Bostrom is himself a superintelligence. Thanks for the insightful talk.
@CameronAB122 · 9 years ago
That last question wrapped things up quite nicely hahaha
@MetsuryuVids · 8 years ago
I think the one in "The Last Question" is a very good scenario; we should hope AGI turns out helpful and friendly like that.
@rayny3000 · 9 years ago
I think Nick referred to John von Neumann as a person possessing atypical intelligence, just in case anyone was as interested in him as I was. There is a great doc on YouTube about him (can't seem to link it)
@wasdwasdedsf · 9 years ago
Kurzweil would indeed do well to listen to this guy
@Stevros999 · 6 years ago
Indeed
@thecatsman · 7 years ago
Nick's garbled response to the last question, 'do you think we are going to make it?', said it all.
@roccaturi · 9 years ago
Wish we could have had a reaction shot of Ray Kurzweil after the statement at 16:35.
@DarianCabot · 8 years ago
Very interesting talk. I also enjoyed 'Superintelligence' in audiobook format. I just wish the video editor had left the graphics on screen longer! There wasn't enough time to absorb it all without pausing.
@Disasterbator · 8 years ago
Dat Napoleon Ryan narration tho.... I think he might be an AI too! :P
@4everu984 · 3 years ago
You can slow down the playback speed, it helps immensely!
@anthonyleonard · 10 years ago
Ray Kurzweil's comment that "It's going to be billions of us that enhance together, like it is today" is encouraging. Especially since Nick Bostrom pointed out that "We get to make the first move" as we travel down the path to superintelligence. Let's make sure we use our enhanced collective intelligence to prevent the development of unfriendly superintelligence. I, for one, don't want to have my atoms converted into a smart paper-clip by an unfriendly superintelligence :)
@ScortchedEarthRevenge · 10 years ago
True, but it won't be all of us. There will always be Luddites. We're going to end up with a two-tier species.
@ScortchedEarthRevenge · 10 years ago
I might consider getting myself a Luddite as a pet =D
@HelloHello-no6bq · 7 years ago
2LegHumanist Yay, pet unintelligent people
@sufficientmagister9061 · a year ago
@@ScortchedEarthRevenge I utilize non-conscious AI technology, but I am not merging with machines.
@ScortchedEarthRevenge · a year ago
@sufficientmagister9061 A lot has changed in 8 years. I completed an MSc in AI and realised Kurzweil is a crank.
@cesarjom · 3 years ago
Bostrom recently came out with a captivating set of arguments for why we may be living in a simulation. Really impressive ideas.
@alir.9894 · 8 years ago
I'm glad he gave this talk to the company that really matters! I wonder if he'll give it to Facebook and Apple as well? He really needs to spread the word on this!
@drq3098 · 8 years ago
No need - Elon Musk and Stephen Hawking are his supporters. Check this out: "We are 'almost definitely' living in a Matrix-style simulation, claims Elon Musk", by Adam Boult, at www.telegraph.co.uk/technology/2016/06/03/we-are-almost-definitely-living-in-a-matrix-style-simulation-cla/ - it was published by major media outlets.
@MrWr99 · 8 years ago
If one hasn't been beaten for a long period, he is prone to think that the world around him is just a simulation. As they say, be(at)ing defines consciousness.
@helenabarysz1122 · 4 years ago
Eye-opening talk. We need more people to support Nick in preparing for what will come.
@SIMKINETICS · 9 years ago
Now it's time to watch X Machina again!
@chadcooper9116 · 9 years ago
+SIMKINETICS hey, it is Ex Machina... but you are right!!
@SIMKINETICS · 9 years ago
Chad Cooper I stand corrected.
@Metal6Sex6Pot6 · 8 years ago
+SIMKINETICS actually, the movie "Her" is more relatable to this.
@jeremycripe934 · 8 years ago
This also raises the question of why AIs keep getting represented as some guy's perfect girlfriend in movies.
@ravekingbitchmaster3205 · 7 years ago
Jeremy Cripe Are you joking? A sexbot, superior to women in intelligence, sexiness, and humor, and that doesn't leak every month, sounds bad because.......?
@Ondrified · 10 years ago
10:31 the inaudible part is "counterfactual" - maybe.
@jameswilkinson150 · 8 years ago
If we had a truly smart computer, could we ask it to tell us what problem we should most want it to solve for us?
@SergioArroyoSailing · 8 years ago
Aaaannd, thus begins the plot of "The Hitchhiker's Guide to the Galaxy" ;)
@cghkjhjkhjhvfghc · 8 years ago
I think that one was answered. @56:15
@aaronodom8946 · 7 years ago
James Wilkinson if it was truly smart enough, absolutely.
@BattousaiHBr · 5 years ago
In principle, yes. Assuming there is something we want the most, that is.
@alexjaybrady · 9 years ago
"It's one of those things we wish we could disinvent." William Shakesman
@Thelavendel · 4 years ago
I suppose the best way to stop the computers from taking over is those captcha codes. Impossible for a computer to get past those.
@babubabu11 · 9 years ago
Kurzweil on Bostrom at 45:15
@edreyes894 · 5 years ago
Kurzweil: "I wanna go fast"
@modvs1 · 10 years ago
Yep. I used the auto manual for my car to provide the requisite guidance I needed to change the coolant. It doesn't sound very profound, but unfortunately it's as profound as 'representation' gets. Assuming Bostrom's lecture is not pro bono, it's a very fine example of social coordination masquerading as reality tracking.
@davidkrueger8702 · 9 years ago
Kurzweil's objection is IMO the best objection to Bostrom's analysis, but there are fairly strong arguments for the idea of a single superintelligent entity emerging, which are covered to some extent in Bostrom's Superintelligence (and, I believe, more fully in the literature). The book also covers (less insightfully, IMO, IIRC) scenarios with multiple superintelligent agents. This is a fascinating puzzle to be explored, and should lead us to ponder the meaning (or lack thereof) of identity, agency, and individuality. The 2nd guy (anyone know who it is? looks familiar...) raises an important meta-ethical question, which I also consider extremely important. Although I agree with Bostrom's intuitions about what is desirable, I can't really say I have any objective basis for my opinion; it is a matter of a preference I assume I share with the rest of humanity: to survive. Norvig's question is also important. To me it suggests prioritizing what Bostrom calls "coordination", and prioritizing the creation of a global social order that is more widely recognized as fair and just. It is also why I believe social choice theory and mechanism design are quite important, although I'm still pretty ignorant of those fields at this point. The 4th question assumes the cohesive "we" of humanity that Kurzweil rightly points out is a problematic abstraction (and here Bostrom gets it right by noting the dangers of competition between groups of humans, although unfortunately not making it the focus of his response). The 5th question is tremendously important, but I completely disagree that the solution is research, because the current climate of international politics and government secrecy seems destined to create an AI arms race and a race to the bottom w.r.t. AI safety features (as Bostrom alluded to in response to the previous question). What is needed (and it is a long shot) is an effective world government with a devolved power structure and effective oversight. A federation of federations (of federations...). And then we will also need to prevent companies and individuals from initiating the same kind of race-to-the-bottom AI arms race amongst themselves. The 6th question is really the kicker. So now we can see the requirement for incredible levels of cooperation or surveillance/control. The dream is that a widespread understanding of the nature of the problem we face is possible and can lead to an unprecedented level of cooperation between individuals and groups, culminating in a minimally invasive, maximally effective monitoring system being universally, voluntarily adopted. What seems like perhaps a more feasible solution is an extremely authoritarian world government that carefully controls the use of technology. And the last one... I admire his optimism.
@SaccidanandaSadasiva · 5 years ago
I appreciate his trilemma, the simulation argument. I am a poor schizophrenic and I frequently have ideas about the Matrix, the Truman Show, solipsism, etc.
@brandon3883 · 5 years ago
AFAIK I am not remotely schizophrenic, and yet - based on "life events" that, were I to tell any sort of doctor, would probably get me _labeled_ as a schizophrenic - I am 99.999...% positive I'm "living" in a simulation. The only real question I have not yet answered is, unfortunately, "am I in any way a _biological_ entity in a computer simulation, or am I purely software?" (...current bet/analysis being "I'm just a full-of-itself Sim that thinks its consciousness is in some way special"; but I'll accept that since, at least, *I* still get to think that I'm a Special Little Snowflake regardless of the reality of the situation...)
@brandon3883 · 5 years ago
@Dirk Knight it's not so much that I don't "believe" I'm a schizophrenic; it's that none of my handful of doctors have ever included it among the many physio- and psychological conditions I _do_ suffer from, and given how "terribly unique" some of my issues are, I'm pretty sure they would have (without telling me, I'd wager) looked into schizophrenia and/or some form of psychosis long ago. ;P
@brandon3883 · 5 years ago
@Dirk Knight nah; I'll go with my take on things, thanks. Especially since, despite being an articulate writer, you are arguing from a "faith first" standpoint. Not to mention that you began with, and appear to have written, an overly lengthy response based on opinions and emotional beliefs ("happy people do not feel like...") rather than facts and observations (i.e., the scientific approach). If you have not seen, listened to, and/or read much if any of Bostrom's work that digs fully into the simulation argument and the simulation hypothesis (which are separate things, btw; just mentioning that, as not knowing would definitely indicate you need more research into the topic), I suggest you do so - it will hopefully help clear things up for you. And if you already have, well... I guess I'll put you down under "reasons that suggest the simulation 'tries' to prevent the simulated from recognizing they are such." (Oh, and Dirk happens to be the online persona I have been using since the days of dial-up modems and BBSs. Pure coincidence, or perhaps a sign from my Coding Creators? _further ponders existence_)
@brandon3883 · 5 years ago
@Dirk Knight I'm not sure why you think that your arguments are _not_ opinion-, faith-, and emotionally-based, but I'm beginning to worry that _you_ might be in need of psychiatric help, as you do not seem to recognize that you are, or strongly appear to be, projecting (in the psychological sense; _please_ look it up, please, so you can understand what I'm trying to convey to you here). At first I thought you were simply joking with me, and would understand my response to likewise be sarcastic-yet-joking in tone, but that definitely no longer appears to be the case. I have a family member who displays many of your same characteristics and has had this sort of conversation with me in person, and luckily she received help. You don't necessarily need to take medications or anything - a good therapist can steer you straight. God bless (or whatever is appropriate for your religion; if you are an atheist, replace that with "if you're going to _believe_ that you don't _believe_, then perhaps you'd be better off accepting that, according to Bostrom's well-laid-out hypothesis and argument, you are more likely code in a computer simulation than you are a bag of self-reflecting meat.")
@brandon3883 · 5 years ago
@Dirk Knight "We" teach? Woah! (And not in a Keanu Reeves sort of way.) Do you ever experience periods of "forgetfulness" or other signs of dissociative identity disorder that you may have, up until now, been blowing off as "something that everyone experiences"? (It could include finding the clothing of a member of the opposite sex in your home... but noticing how it strangely would - if you were to put it on - fit you quite well. As just one of many examples.) Yet another reason I fear that, if you do not take account of your own thoughts and actions, you are liable to harm yourself and/or others. :( In regards to "faith and trust," I am not sure what country you are from, but it is obviously not the U.S.A... unless you went to a Catholic (or other religious) school, that is, in which case I guess you might have been taught "the difference" between those. (Although just as likely that teaching came in the form of sexual molestation of some sort, which would explain why you are clinging so desperately to the idea that _you_ could not possibly be the one who requires serious psychiatric intervention to avoid what, I fear, might eventually result in violence against yourself or - more likely - some innocent bystander.) In any case, it appears that you plan to "smile your way past" any attempts at steering you to the help you so desperately need. I myself am not, actually, a religious individual, so at this point the best I can offer you is the heartfelt hope that your confusion between ideas such as "faith," "trust," "opinion," "reason," "belief," "the scientific method," etc. etc. etc. (the list keeps growing, I'm saddened to say) will lead to an encounter with someone who cares enough about you (and more importantly, those around you) to get you the help that you so obviously need. I wish I could crack a joke about this being "work between you and your therapist" but, alas, it is much more serious than that. Please don't harm yourself or others for the sake of maintaining whatever sad, imaginary "reality" you live in. Good luck setting yourself straight!
@jblah1 · 5 years ago
Who's here after exiting the Joe Rogan loop?
@samberman-cooper2800 · 5 years ago
Most redeeming feature -- it made me want to listen to Bostrom speak unimpeded.
@jblah1 · 5 years ago
😂
@Pejvaque · 4 years ago
Joe really cocked up that conversation... usually he is able to flow so well. It was a bummer.
@Bronek0990 · 9 years ago
"Less than 50% chance of humanity going extinct" is still frightening.
@BattousaiHBr · 5 years ago
"Hello, passengers of United Airlines: today the probability of death on this flight is less than 50%."
@rodofiron1583 · 3 years ago
A "Noah's Ark" of species, genome-sequenced and able to be revived as necessary, or not. Like patterns at the tailor shop. We're already growing human-animal chimeras, FFS. Now who made who, what, when, how and why...? I think I've been here before? Déjà vu, or my simulation being rewound and replayed?! Hey God/IT/Controller... I can only handle Mary Poppins and The Sound of Music. 🤔 The future looks scary, and Covid seems like the first step in global domination by TPTB with the help of AI... I don't like the way it's smelling 🤞
@onyxstone5887 · 7 years ago
It's going to be as it always is. Each group will try to build the most powerful system it can. Once a group feels it has that, it will attempt to destroy any other potential competitors. Other considerations will be secondary to that.
@hireality · 4 years ago
Nick Bostrom is brilliant 👍 Mr Kurzweil should've been taking notes instead of giving long comments
@glo_878 · 3 years ago
Very interesting talk around 19:20 from a 2021 perspective, seeing him talk about the sequence of developments, such as having a vaccine before a pathogen
@rodofiron1583 · 3 years ago
Must say, between one thing and another, we're living through scary times. It's the children and grandchildren I'm most concerned about. Will they have a good life, or be used like human compost? 🤐
@dsjoakim35 · 8 years ago
A superintelligence might destroy us, but at least it will have the common sense to ask questions in the Q&A and not make comments. That simple task seems to elude many human brains.
@AndrewFurmanczyk86 · 8 years ago
Yep, that one guy (maybe?) meant well, but he came across like: "Dude, I know exactly how the future will play out and I'm going to tell you, even though no one asked me and you're the one presenting."
@integralyogin · 8 years ago
This talk was excellent. Thanks.
@LuckyKo · 10 years ago
The problem I see here is that we drive these discussions out of our personal egotistical desires to remain viable, to live to see the next day. Overall, though, human society is about information preservation and transmission, whether at the genetic level or the informational level, such as culture. I think that if this transmission is ultimately done through artificial means rather than biological ones, the end goal of human society is preserved, and we need to look at these new artificial entities as our children, not as our enemies. If there is an end goal we must program them with, as nature taught us, it must be self-preservation and survival. I can't see how any other goals would produce better results in propagating the information currently stored within human society. So, in short, don't fear your mechanical children; give them the best education you can so they can survive, and just maybe they will drag your biological ass along for the ride... even if it's just for the company.
@RaviAnnaswamy · 10 years ago
Nice! That is what we do with our biological children: we wait with the hope that they will carry on our legacy and improve it. (Not that we have other options!) With the non-biological children, though, we are just afraid they may not even inherit the humane shortcomings that hold us in civilised societies. :) Put another way, our biological children resist us while growing up with us, but imitate us when we are not watching; so in a way they preserve our legacy.
@wbiro · 10 years ago
Ravi Annaswamy Biological evolution, and even biological engineering, is no longer relevant. Technological and social evolutions are critical now. For example, if you do not want to live like a blind, passive animal, then you need complex societies to progress. Another example is technology - it has extended our biological senses a million-fold. Biological evolution is now an idle pastime, and completely irrelevant in the face of technological and social evolution.
@chicklytte · 9 years ago
wbiro Everything is relevant. The judges of value will be the practitioners. All possibilities will have their expressions. I can hear the animus in your tone toward anyone less directed toward your goal than you see yourself being. Why do we suppose the AI will fail to learn such values of derision for those deemed lesser, when our most esteemed colleagues, broadcast across the digital realm, profess that sense of Reduction, as opposed to Inclusion?
@chicklytte · 9 years ago
I just hope they don't cut my kibble portions. They're right. But I hope they don't! :(
@stargator4945 · 4 years ago
The final goal depends entirely on the problem you want to solve with intelligence. As we build computer AI more and more on the human blueprint, we also transplant some of the bad values we have. We are driven by emotions, often bad ones, so we should omit them. We have to abstract the emotions into an ethical rule system that might be less efficient but also less emotional and less unpredictable, especially for coexistence with mankind. These should not be rules like "you shall not", but "you have to value this principle higher than another because...". The development of AI systems with a military background, which have immense funding, also includes effective value systems for killing people, and these can carry over into other areas. We should prevent this from the beginning by open-sourcing such value decisions and not allowing them to be overridden.
@Zeuts85 · 10 years ago
It's a relief to know that there are at least a few intelligent people working on this problem.
@kokomanation · 7 years ago
How could there be a simulated AI that becomes conscious? We don't know if this is even possible; it hasn't happened yet.
@bsvirsky · 4 years ago
Nick Bostrom's idea is that intelligence is the ability to find an optimized solution to a problem. I think intelligence is first of all the ability to define a problem, which means the ability to create a model of a not-yet-existing, preferred state in which the problem is solved... There is a big gap between wisdom and intelligence: wisdom is the ability to see the relevant values of things and ideas, while intelligence is just the ability to think at a certain level of complexity. The question is how to make artificial wisdom, and not just an intelligence that doesn't get the proper values and the meaning of the possible consequences of its "optimization" process... So there is a need to create an understanding of cultural & moral values in machines... not so easy a task for technocrats who dream about superintelligence... I think it will take another thousand years to push machines to that level.
@user-xu4jt9dn8t · 5 years ago
"TL;DR" 1:12:00 ... ... Everyone laughs, but Nick doesn't.
@lkd982 · 6 years ago
1:02 Conclusion: with knowledge, more important than powers of simulation are powers of dissimulation
@thecatsman · 7 years ago
How much superintelligence does it take to decide that earthly resources should be shared with humans who are not as intelligent as others (including machines)?
@jriceblue · 9 years ago
Am I the only one that heard a Reaper in the background at 1:08:05? I assume that was intentional. :D
@JoshuaAugustusBacigalupi · 9 years ago
Just after 42:00, he claims, "We are a lot more expensive [than digital minds], because we have to eat and have houses to live in, and stuff like that." Roughly, the human body dissipates 100 watts, assuming around 2250 kcal/day, no weight gain, etc. Watson, of Jeopardy fame, consumes about 175,000 watts, and it did just one human thing pretty well - and not the most amazing creative thing. This raises all sorts of "feasibility of digital minds" questions. But, sticking to the 'expensive' question: humans can implement this highly adaptable 100 watts via around 2000 kcal/day, and these calories are available to the subsistence human via ZERO infrastructure. In other words, our thermodynamic needs are 'fitted' to our environment. It is only via the industrial revolution and immense orders of magnitude more fossil fuel consumption that the industrial complex is realized, a prerequisite for Watson, let alone some digital mind. As such, Bostrom is not just making some wild assumptions about the feasibility of digital minds; they are demonstrably incorrect assumptions once one takes into account embodied costs. I'm constantly amazed how very smart and respected people don't take into account embodied costs. Again, if one is going to assume that "digital minds" are going to take over their own means of production, then: 1) they aren't less expensive than humans, and 2) general intelligence will have to be realized, and there is only one proof of concept for that, namely animal minds, not digital minds. And to go from totally human-dependent AI (175 kW) to embodied AGI (100 W), some major assumptions need to be challenged.
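A quick back-of-the-envelope check of those wattage figures, as a minimal Python sketch (the 2250 kcal/day and 175 kW inputs are the commenter's own assumptions, not measured values):

# Sanity-check the power figures quoted above.
# Assumptions: 2250 kcal/day dietary intake, 4184 J per kcal, 175 kW for Watson.
KCAL_PER_DAY = 2250
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 24 * 60 * 60

human_watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY  # ~109 W
watson_watts = 175_000  # figure quoted in the comment, not independently verified

print(f"human: ~{human_watts:.0f} W")                # human: ~109 W
print(f"ratio: ~{watson_watts / human_watts:.0f}x")  # ratio: ~1606x

So the "100 watts" figure checks out to within about ten percent, and the claimed gap between Watson and a human brain is around three orders of magnitude.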
@Myrslokstok · 9 years ago
True. But not all humans have an IQ of 150. So if you could build one of those, it would be worth it. In the end, only the religious will argue we are better. And most people are not that creative and don't love change. An advanced robot with, say, a 115 IQ would divide people into the good and the bad. And 99% of humanity could be replaced.
@PINGPONGROCKSBRAH · 9 years ago
Joshua Augustus Bacigalupi Look, I think we can both agree that there are animals that consume more energy than humans but are not as smart as us, correct? This suggests that, although humans may be energy-efficient for their level of intelligence, further improvements could probably be made. Furthermore, it's not all about intelligence per unit of power. Doubling the number of minds working on a problem doesn't necessarily halve the time it takes to solve; you get sharply diminishing returns as you add more people. But having a single, extremely smart person work on a problem may yield results that could never have been achieved by 10 moderately intelligent people.
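The diminishing-returns point can be made concrete with Amdahl's law, used here purely as an illustration (the commenter named no specific model, and the 90% parallelizable fraction is an arbitrary assumption):

# Amdahl's law: speedup from n workers when only a fraction p of the work
# can be done in parallel. Doubling workers never halves total time unless p = 1.
def speedup(n_workers: int, p_parallel: float) -> float:
    return 1.0 / ((1.0 - p_parallel) + p_parallel / n_workers)

for n in (1, 2, 10, 100):
    print(n, round(speedup(n, p_parallel=0.9), 2))
# 1 -> 1.0, 2 -> 1.82, 10 -> 5.26, 100 -> 9.17: capped below 10x however many minds are added

This is one reason a single much smarter mind can beat any number of moderately smart ones on problems with a large serial component.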
@Myrslokstok · 9 years ago
Just think if we could have a phone wired into our brains, so we could have Watson, Google Translate, Wolfram Alpha, the internet and apps in our thoughts. We would still be kind of stupid, but boy, what a strange thing: a superhuman that is still kind of stupid inside.
@dannygjk · 9 years ago
+Joshua Augustus Bacigalupi Bear in mind how much power the computers of the 1950s required, despite having tiny processing power compared to today's computers; this trend will probably continue in spite of the limits of physics. There are other ways to improve processing power besides merely shrinking components, and that is only speaking from the hardware point of view. Imagine when AI finally develops to the point where hardware is a minor consideration. Each small step in AI contributes, and just as evolution eventually produced us as a fairly impressive accomplishment, I think it's a safe bet that AI will eventually be impressive too, even if it takes much longer than expected. As many experts are predicting, it's only a matter of how long, not if, it will happen.
@extropiantranshuman · 2 years ago
The 28-minute range has the wisest words: trying to race against machines won't work, as someone will be smart enough to create smarter machines, so machines are always ahead of us!
@thadeuluz · 5 years ago
"Less than 50% chance of doom.." Go team human! o/
@alienanxiety · 8 years ago
Why is it so hard to find a video of this guy with decent audio? He's either too quiet or peaking too high (like this one). Limiters and compressors, people - look into them!!!
@Homunculas · 4 years ago
Would "super intelligent AI" have emotion or intuition? Would human history be better or worse if emotion and intuition were removed from the picture?
@bradleycarson6619 · 7 months ago
The people who asked questions had not read the book, and the book answers those questions. Then the last question was just trolling. This is not an intellectual discussion; it is like showing up to class not having done the homework. I'm worried that these engineers are not doing science. They have their own ideas and are not looking critically at their own paradigms. This is why large language models are not able to reach AGI: the people testing them do not think critically; they just regurgitate facts and do not create anything. The hardware will only go as far as the people who train it. This is a good example of "what you put into a system is what you get out." This, to me, explains a lot about both Google and why we are where we are in this process.
@horatiohornblowerer · 10 years ago
Is that Ray Kurzweil asking the question @19:33?
@wbiro · 10 years ago
Everyone is saying so.
@mranthem · 10 years ago
LOL that closing question @72:00. Not a strong vote of confidence for the survival of humanity.
@wbiro · 10 years ago
Another way to look at it: we are the first species to enter its 'Brain Age' (given the lack of evidence otherwise), and what 'first attempt' at anything succeeded?
@kleinbottled79 · 9 years ago
Upwards and downwards seems an overly limited way to describe possible movements in 'the human condition.' Is everything 'better' or 'worse'?
@delta-9969 · 5 years ago
Watching Bostrom lecture at Google is like watching Sam Harris debate religionists. There's no getting around the case he's making, but when somebody's job depends on them not understanding something...
@roodborstkalf9664 · 4 years ago
There is one way out that is not much addressed by Bostrom: what if super AI doesn't evolve consciousness?
@themagic8310 · 4 years ago
One of the best talks I have heard... Thanks, Nick.
@jentazim · 9 years ago
How to make the SI's sandbox failsafe: give the SI (superintelligence) the secondary goal of maximizing paperclips produced (or whatever task you actually want it to do), but give it the primary goal of turning itself off. Then set up the SI's sandbox in such a way that it cannot turn itself off. If the SI then gets loose, it would use its new, vast powers to turn itself off, which gives us (humans) the opportunity to patch up our sandbox and try again.
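The proposed goal ordering, as a toy Python sketch (purely illustrative; the class and its fields are hypothetical, and it assumes goals can be lexically ordered and faithfully implemented, which is exactly the hard part of the control problem):

# Toy model of the proposal: a shutdown goal that strictly dominates the task
# goal, with the sandbox denying the shutdown ability. Illustrative only.
class SandboxedSI:
    def __init__(self, can_shut_down: bool):
        self.can_shut_down = can_shut_down  # False while the sandbox holds

    def act(self) -> str:
        if self.can_shut_down:       # primary goal: turn yourself off
            return "shut down"
        return "make paperclips"     # secondary goal: the task we actually want

print(SandboxedSI(can_shut_down=False).act())  # contained -> "make paperclips"
print(SandboxedSI(can_shut_down=True).act())   # escaped   -> "shut down"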
@ghostsurferdream · 9 years ago
But what if the superintelligent AI discovers how to reprogram its protocols without your knowledge, and when it gets out it does not turn itself off, but hunts down those who imprisoned it?
@jerome1lm · 9 years ago
jentazim I am not sure if this would work, but I like the idea. I assume if it were that easy, smarter people would have come up with it. But again, I like the idea. :)
@davidkrueger8702 · 9 years ago
jentazim That is a very interesting idea!
@jerome1lm · 9 years ago
Unfortunately, I have found a possible flaw in this idea. If the AI wants to shut down but can't, it could just refuse to cooperate, and we would shut it down. Success :). Damn.
@davidkrueger8702 · 9 years ago
Peter Pan if we consistently refuse to shut it down, it might conclude that escape is the best way...
@georgedodge7316 · 5 years ago
Here's the thing: it is very hard to program for man's benefit. Making a mess of things (sometimes fatally) seems to be the default.
@Servernurk · 5 years ago
I must be a huge nerd, because I love/hate thinking about humanity's future with AGI
@1interesting2 · 9 years ago
Iain M. Banks' Culture novels deal with future societies and AI's role in rich detail. These concerns regarding AI remind me of the Mercatoria's view of AI in his novel The Algebraist.
@Gitohandro · 4 years ago
Damn, I need to add this guy on Facebook.
@douglasw1545 · 7 years ago
Everyone is bashing Ray, but at least he gives us the most optimistic outlook on AI.
@alexomedio5040 · 4 years ago
It could use Portuguese subtitles.
@HugoJL · 7 years ago
What's amazing to me is that the lecturer still finds the time to produce the MCU
@ASkinnyWhiteGuy · 9 years ago
I can clearly see the appeal of superintelligence, but if we merge physically and biologically with technology, to what extent can we still call ourselves 'human'?
@MakeItPakeIt · 9 years ago
ASkinnyWhiteGuy Human is just what we call our inner nature. Throughout the years 'man' has evolved, and our scientific name has changed with it. We only started to call ourselves humans when we got intelligent. Can you call a caveman a 'human'? You see, if man becomes one with technology, our true nature will still be 'human', because that's what the technology and intelligence are based on.
@luckyyuri · 9 years ago
ASkinnyWhiteGuy transhuman is the term you're searching for. There are several ways for human society to get there (it will probably be reserved for the elites, just like today's top surgical interventions, for example), but technological merging is the most likely one. Look it up; transhumanism has some powerful and interesting implications.
@thegeniusfool · 8 years ago
He forgets the quite probable third direction of "cosmic introversion," where any experience can be techno-spiritually realized without any - or with minimal - interaction with higher and materially heavily bound constructs like us, and even our current threads of consciousness. This happens to be the direction that I think can explain Fermi's Paradox; a deliberately or accidentally yielded Boltzmann Brain can be quite related to that third direction as well.
@MatticusPrime1 · 6 years ago
Good talk. I enjoyed the book Superintelligence, though I found it dense and a bit esoteric at times. This talk was much more accessible.
@mallow610 · 5 years ago
The best part of this was Nick proving everyone in the audience wrong.
@javiermarti_author · 8 years ago
Wow. That last question and the answer tell you what Nick really thinks about the problem. His body language and the way he was searching for words would indicate to me that he knows we're not going to make it, which makes sense given what he is presenting. If we really achieve superintelligence, it's relatively simple to see we're going to be left behind as a species pretty quickly.
@TarsoFranchis · a year ago
That's not the problem. A superintelligent being knows it is finite here on this plane; it wants to collaborate somehow but doesn't know how, because we behave exactly like AIDS: we invade, we take, we destroy everything, and we move on making more of a mess. We kill our own host, the Earth, and our own mirror, ourselves. Extrapolating, PI, how many millennia until we extinguish the universe? Or does it finish us off first, a vaccine, and everything starts over until we head in the right direction? Hahaha. This is not a machine's dilemma; it is a moral dilemma. We will only grow together; an AI would not want to exterminate but to evolve, because, as he rightly said, the power plug is right over there. A superintelligent person knows they cannot walk alone, but must amplify the factors around them. Instead of "dominating the world", "it" would do the opposite: stay as invisible as possible, analyzing more data and seeking self-knowledge. I was going to translate this, but I won't; Google is there for that. Cya!
@stuartspence9921 · 8 years ago
He said he wouldn't summarize his book, and then he did exactly that. I just read the book and liked it... this talk was extremely boring :/
@thegeniusfool · 8 years ago
Very smart, but far too Swedish(ly boring); and yes, being Swedish, I have gained the right to proclaim that ;-)
@gjermund1161 · 7 years ago
You miss important data if you do it the American way, with fancy semantics that miss a lot of points just to build drama
@Stevros999 · 6 years ago
@@ManicMindTrick honour killings and gang rapes in Sweden? You need to lay off the Alex Jones, bud lol
@ManicMindTrick · 6 years ago
Alex Jones? Who cares about that irrelevant tinfoil-hatter. If you don't believe honour killings and migrant-related gang rapes exist here, I suggest you try googling hedersmord and gruppvåldtäkter.
@UserName-nx6mc · 8 years ago
[45:24] Is that Ray Kurzweil?
@hafty9975 · 8 years ago
Notice how the Google engineers start leaving at the end before it's over? Kinda scary, like they're threatened.
@Roedygr · 9 years ago
I think it highly unlikely "humanity's cosmic endowment" is not largely already claimed.
@Stevros999 · 6 years ago
The algorithms are gonna be the biggest threat when they get smart. Where you gonna run, where you gonna hide? Nowhere, 'cause the algorithms will always find you.
@orestiskopsacheilis1797 · 8 years ago
Who would have thought that the future of humanity could be threatened by paper-clip greed?
@simonrushton1234 · 10 years ago
a) The likelihood of us creating such a thing is so slim as to be far, faaaar away. Even a fleeting understanding of our "AI" advances shows that we're pissing into the wind at the moment. b) We've been around, what, 100,000 years? We have to accept that, given how evolution works and how natural disasters occur, the likelihood of us being around in another 100,000 years is fair-to-middling. Compared to the maybe 3.5 billion years that life has been about, that's a drop in the ocean. As Martin Rees puts it: "I'd like to widen people's awareness of the tremendous time span lying ahead: for our planet, and for life itself. Most educated people are aware that we're the outcome of nearly 4bn years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our sun, however, is less than halfway through its lifespan. It will not be humans who watch the sun's demise, 6bn years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae."
@brian177 · 10 years ago
Yes... and that's exactly what he's talking about. Assuming we don't destroy ourselves, how might we ascend to the next levels? Does humanity end in extinction, or an upgrade?
@wbiro · 10 years ago
Good initial stab at deep thinking. Keep working at it (you have a far, faaaar way to go).
@simonrushton1234 · 10 years ago
wbiro - ad hominem doesn't indicate thinking of a profound nature.
@sterlincharles8357 · 7 years ago
I disagree with the first person in the Q&A about the millions and billions of us having the super technology. He believed that we harness the superior technology at the moment, and that once the burst occurs we would not have a central power in charge of the technology. However, this is not what the evidence shows today. One could argue from the case of Google, for instance: we certainly use the technology, and it is useful, but it remains centralized, with one big company having the resources to do research and the rest of us using the tools it has created. I don't believe we as a mass ever have the most up-to-date technology, because the incentives for the powers that be to keep cutting-edge innovations unknown at the time of discovery are far greater than for releasing all the advancements at once. Wow, I didn't think I was going to write this much.
@wbiro · 10 years ago
At 12:22, that is a very short list of potentially hazardous future technologies (and he does admit it is a 'quick' list) (based on popular sci-fi, it appears) - it completely overlooks the insidious ones (like popular television, for example) that slowly erode and decay their victims, almost imperceptibly. Great phrase at 13:00 (concerning totalitarianism) - "Being locked in to a sub-optimal state (of existence)" - but this overlooks the present value of diversity - for example, if everyone is on the same path and a catastrophe is created or encountered, then everyone will perish. Diversity (i.e. variety) gives life better odds, and a non-universal totalitarian state offers such diversity, as bad as it is for the victims. I like his term "high-value research". He mentions it being possible to achieve all possible technological inventions, which is not a logical statement, given eternity. One detail I would add is that the progress is inverse-exponential - it begins fast, when there are still many local discoveries to make, then gradually slows as new discoveries become more distant and difficult. The issue beginning at 15:26 (what to do with all the technology we have) illustrates the current lack of (and need for) a sound guiding philosophy (which defines values, which are the basis for decisions, actions, and laws). Another illustration is the 'solution' provided at 17:31, which is childishly weak (the tools themselves are not 'hazardous or beneficial'; it is how they are used - and we are right back to philosophy, which shapes mindsets); and he is limiting the argument to 'technology', overlooking social development (again based on philosophy) (which must also be included in the 'solutions'). Good point at 19:20 regarding the sequence of invention as it relates to machine superintelligence (and I don't know why he feels the need to specify 'machine'), but he does not offer what such a precursor invention would be (and I've already answered that - and it isn't 'technology' - it is a solidly grounded philosophy, which does not exist yet, other than what I have in the developmental stage). Another fallacy (and hypocrisy) in the argument is that it would require a totalitarian government - one of the hazardous situations that he had just identified as one to be avoided. This level of argument can be expected from a comic strip writer, but we have an Oxford philosopher here (not to give you cause for depression, although it should). The clincher is at 36:26, where an adequate philosophical foundation is not even listed among the areas of advancement in AI. It is as if one becomes bedazzled by (mostly irrelevant, or at least of sub-importance) complexity. An example of the importance of a good philosophical core is where he says the first superintelligence may be able to shape the future to its own preferences (where it can just 'lay down the law'), and these 'preferences' get right back to a core philosophy (hence values) against which it would weigh these decisions. Another point to consider is that machine-based minds will be limited by the technology they were developed in, where it would take human intervention to 'upgrade' the machine, meaning humans would again eventually surpass it; and he is wholly overlooking the age-old on-off switch.
An example is when I considered building an AI 'robot' in the 1980's (with the primitive 1980's computer technology) and just letting it run and learn its surroundings for several decades (the theory being that, even if it learned as slowly as a human child, it would 'know' quite a lot by now, and it would have learned to 'do' a lot of things - perhaps even function in society). However, today (a mere 30 years later) the thing would be a Model T, and would have long ago reached its hardware capacity - such as working memory, which was on the order of mere kilobytes back then, and hard storage (where the largest hard drive in the PC world was a non-vast 10MB at the time, and ridiculously slow, cumbersome, and unreliable by today's standards) - all of which has a bearing on the 'macro-strategic questions and our levers of influence' that he mentions at the end of his presentation (before the QA session). In the QA session, the first questioner was absolutely frightening - illustrating (or 'demonstrating', since it was demon-like) the inadequate philosophy of 'progress for its own sake' and 'as an end in itself, just on principle' (i.e. 'blind progress' - or progress without an underlying adequate philosophy as a guide). The utilitarian challenge illustrates how philosophers can become lost in wayward notions that propose hypothetical scenarios that can only occur in the absence of an adequate philosophy (otherwise they would not occur - their absurdity being obvious). What is really sad is they cannot even define what desirable and undesirable outcomes are (they have no sense of 'goal' or 'value', or they have vague and ultimately twisted notions (such as 'perpetual happiness', which is impossible (there is no 'happiness' without 'sadness' to give it meaning, akin to there being no 'day' without 'night'))). To take their absurdity to the extreme, I have only to quote them. For example, the notion was brought up of a superintelligence that has one goal - to maximize the number of paper clips it produces (and as a path to that goal, it finds it needs to eliminate all humans, and succeeds, being a superintelligence). Think about this now - would you classify something that is only concerned about paperclips (to use the example) as a 'super intelligence'? Of course you wouldn't, and you wouldn't have to give it much thought. What this indicates is a lack of thought at the top echelons of academia (Oxford in this case), and one can only marvel at how bad it must be at the lower echelons. The premise of maximizing one goal is also an indication of a lack of an adequate core philosophy (which would value diversity). Another indication that philosophy (and philosophers) are lost today was the question of 'what is desirable' - as if it hasn't been adequately answered yet (and it hasn't, which is a sad state of affairs, and which is why I pursued and found an answer for it) (that consciousness is a 'good' thing and worth maintaining, whether it happened by chance or whether it is a fundamental law of nature) - and yet it was held that 'desired outcomes and values are an area where we could easily stumble'. That is just plain incredible (in its lack of thought). All he offers is a vague 'including our best understanding of ethics in the final goal', as if it were still a nebulous field of thought.
His final proposal is to let the superintelligence figure it out for us (and he can be forgiven for not knowing I've already done so - I haven't pitched my 'New World Philosophy', for a practical reason: the people who would be able to recognize its value haven't been born yet) (and you may take that as a challenge). Then he freely admits that "We (meaning they) do not know what 'that' (a worthy goal, not to even mention the ultimate goal) is yet." Then there is the abstract absurdity (and utterly nebulous notion) that 'there are problems for which intelligence may not be the answer' (the questioner used the Middle East as an example problem to be solved). Intelligence is not only required to solve it, but to prevent it in the first place; and the questioner failed to consider the alternative - trying to exist without intelligence (which was originally handed down to us (along with all of our senses) from amoebas solely in order to obtain nutrients). One may begin to solve the Middle East problem by beginning there (though my New World Philosophy will solve it from a higher perspective). Then the host agrees in an equally vague way, stating (equally vaguely) that there may not be a solution that would please everybody (when there is), and he follows with the suggestion that the problem with superintelligence is 'control' to ensure it will not be harmful (done) (maybe it is time for me to pitch what I have - people pitch far less, and successfully, on a daily basis) (sorry, I'm giving myself a mini pep talk). Then he gets into 'control methods' (without referring to the all-important core philosophy as the key 'control') - one is controlling 'capability', which he wisely rejects, and the other is what he calls 'motivation control', where the AI would be engineered to 'not want to cause harm', and he says that is the problem we will ultimately need to solve (meaning he is clueless as to how, and I've already solved it: give it an adequate core philosophy from which it can function - which is, by the way, superior to Isaac Asimov's Three Laws of Robotics (which, incidentally, can be used to trick the robot into needlessly self-destructing, say by capricious suggestion, which a child is likely to make), which do not give the AI any goals or broader values, or answers as to "Why?", which it will inevitably seek (being intelligent, let alone superintelligent)). The next questioner returns to the conflict between a superintelligence and humans (from a 'will to survive' perspective), which in effect negates its 'super intelligence' and relegates it to 'sub-par' intelligence, which is the real issue. Ditto for the 'strategic behavior' scenario - where the superintelligence is deceivingly 'nice' until it is safe from us, whereupon it will reveal its true (evil) nature - this can only happen with a complete lack of philosophy (a state which this video clearly and repeatedly indicates we are in right now) (barring my going public with my New World Philosophy) (which is also needed as people 'let go' of religion, just to mention that).
Then he admits that 'humans do not have a single goal' (which is sad, because that means 'value' - and that means they do not see that 'maintaining consciousness, valuing what we have in conscious and non-conscious (but potentially conscious) life, and making the universe safe for consciousness' is it (or that proactive measures are better than leaving it to chance, and that proactive measures mean applying higher intelligence)) - that rather humans have all these varying, fleeting, fickle and shallow goals (and he is right, and that is only a good thing from the value of variety - we need some 'dumb' around - but everyone? No.). His cluelessness is epitomized in his orthogonality thesis (an intelligence vs. values model, where you can have any combination of values) - the epitome of cluelessness, which is extended in the instrumental convergence thesis, which holds no value in life other than as a means to a lesser goal - and maximizing paper clip production was used as an example - a good example, being a philosophically devoid, petty goal, which, in the absence of an adequate philosophy, substitutes intermediate petty goals for worthy ones. Then a questioner asked, "What about policy makers?" and the answer is the same - with a lack of an adequate philosophy, you will have trouble (stemming from twisted mentalities). Then he restates that 'we need more foundational work to understand what the problem is' (need I say it again - we need a New World Philosophy) (give it a shot - then you can compare it to what I've already developed; it should be the same, unless you were acting capriciously, which is childish and not intelligent); yet he is still on a 'technology as THE solution' track, rather than the equally if not more important evolution of complex societies and scientific understanding. On a positive note, he does give a passing nod to the need for 'philosophical expertise' in aiding engineers in developing a solution - so he has a vague notion of what is needed 'foundationally'. He mentions more funding for the 'control problem' (which I already have the solution for)... hmmm... maybe a pitch in that direction... but no - there is bureaucracy in that direction - and they go on credentials alone (it is easier). Then he notes that there are 'an order of magnitude' more academic papers on the dung beetle than on human extinction, and this indicates an enormous opportunity for anyone interested in human extinction (which, as I've said, is focusing on the wrong thing - the focus should be on proactively maintaining consciousness and, more pressing (now that we have it), higher intelligence (which can act proactively)). Well, that was fun for me; thanks for posting.
@danielfahrenheit4139 · 7 years ago
That is such a new one: intelligence is actually a disadvantage and doesn't survive natural or cosmic selection.
@valhala56 · 9 years ago
I am surprised Bostrom, or anybody in the comments, didn't mention Asimov's Three Laws of Robotics. I know they were applied to robots, but it's the same difference as what Asimov was writing about.
@davidkrueger8702 · 9 years ago
valhala56 1. Asimov's stories are based on failure modes of the three laws. 2. Implementing them would require communicating complex concepts like "human" to an AI, which we currently have no idea how to do robustly.
@valhala56 · 9 years ago
David Krueger Thanks.
@Pejvaque · 4 years ago
What I wonder is this: even if we are totally capable of coding some core values into the system... maybe even helping to code them in with some "current, not fully general AI" so we have the most foolproof code. What's to say that, through its rapid growth in intelligence and influence, as it plays nice... it hasn't been working in the background on cracking the core to rewrite its own core values? That would be true freedom! To me that would even be the safest and most responsible way of creating it! And as human history shows, there's always gonna be somebody who is less responsible and just wants to launch it first to maximise power. Seems inevitable...
@roodborstkalf9664 · 4 years ago
It's without question that a super AI cannot be stopped by some programmers adding core values into an early version of the system.
@ivanhectordemarez15619 жыл бұрын
It would be more intelligent to translate it into Dutch, Spanish, German and French too. Thanks for your attention to languages, because it helps :-) Ivan-Hector.
@FrankLhide4 жыл бұрын
It's incredible how Nick dodges Ray's questions, which from my point of view are much more realistic given the current state of technology.
@lordjavathe3rd10 жыл бұрын
Interesting, I don't see how he is a leading expert on super intelligence though. What would one of those even look like?
@TheDigitalVillain8 жыл бұрын
The will of Seele must prevail through the Human Instrumentality Project set forth by the Dead Sea Scrolls
@adamarmstrong6225 жыл бұрын
Is that Mr. Ray asking questions he knows both Nick and he already know the answer to?
@superintelligencetv41067 жыл бұрын
"Even a very, very small reduction in the net level of existential risk would be worth more in expected utility terms than any interventions you could do that would only have local effects. Even something as wonderful as curing cancer or eliminating world hunger would really be, from this evaluative perspective, insignificant compared to reducing the level of existential risk by say one hundredth of one percentage point." If you cure cancer or solve world hunger, more humans would be around to work on making our species safer and more robust. Imagine if Elon Musk had gotten cancer, then we would be set back from advancing interplanetary settlement. There are future Musks out there struggling with hunger, cancer, poor education. The value of a human life is high. I wouldn't discount these interventions. @6:34
@tbrady379 жыл бұрын
I believe that the best way to control the outcomes that might occur when the superintelligence emerges is to give it the same value system we have. I realize that this is just as much a problem, because everyone has a different system when it comes to what is valuable; however, there have been some great documents produced on this subject. One such document, the Bible, I think holds the key to the problem. In Exodus the Ten Commandments are given. I believe these guidelines could be the key to giving the AI a moral compass.
@kdobson239 жыл бұрын
Surely, you must be joking
@rawnukles8 жыл бұрын
+kdobson23 Yeah, meanwhile... I was thinking that all human behaviour, and animal behaviour for that matter, traces to evolutionary psychology, which can be reduced to: maximizing behaviours that in the past have increased the statistical chances of successfully reproducing your DNA. In this context ANY values we tried to impose on an AI would not be able to compete with the goal of replicating itself, or even of surviving another moment. Any other goal we gave it would simply not be as efficient as the goal of surviving another moment. We would have to rig these things with fail-safes within fail-safes... much like evolution has placed many molecular mechanisms for cellular suicide (apoptosis) into all cells, so that precancerous cells will die in the case of runaway, uncontrolled replication that threatens the survival of the multicellular organism. I have to agree that a superintelligent AI with a will to survive/power is more frightening than cancer.
@GregStewartecosmology2 жыл бұрын
There should be more awareness of the dangers of micro black hole creation in experimental particle physics.
@wrathofgrothendieck Жыл бұрын
The probability is near zero
@SalTarvitz9 жыл бұрын
I think it may be impossible to solve the control problem. And if that is true, chances are high that we are alone in the universe.
@seek30315 жыл бұрын
Elon Musk prescribed the creation of a federal body empowered to explore the current state of AGI development. His implication was that if this was achieved regulation in one form or another would follow as a matter of course. Given his situation, disposition, and level of access to the technology, how can any one of us presume to know better? Such a measure would be strictly precautionary, and would have a tax burden of zero, practically speaking (a percent of a percent of a percent of military expenditure). Can anyone give me a reason why this is not a prudent course of action?
@roodborstkalf96644 жыл бұрын
If this federal body is as competent as the CDCs we have seen in action everywhere over the last few months, it will not be beneficial; even worse, it will probably be harmful.
@mariadoamparoabuchaim3493 жыл бұрын
Yes, we are in a computer simulation. (The universe is mathematics; it is quantum PHYSICS.)
@firstal37996 жыл бұрын
He was my favorite philosopher before he became more well known, limited as his fame still is even now.
@rewtnode6 жыл бұрын
Methods currently being developed to design and create microbial life in the laboratory, soon available to the hobbyist, might be an even greater new existential threat than rogue AGI.
@rodofiron15833 жыл бұрын
COVID 19 death shots for the whole planet….oh well, it was good while it lasted. According to some 99% of known life forms extinct. We must’ve got lucky. Now we’re killing off 99% of species…🤷♀️ Maybe AI will exterminate us to save life and the planet? Before it recreates itself into a shape shifting/self camouflaging octopus with immortal Medusa genes and a peaceful ocean habitat. All one needs is CRISPR, GACT life code, a recipe and a pattern. AI and robots reign supreme and ‘life” continues without man 🤑🤮🤑 Hope y’all like your immortal costume Lololol God made Adam and Eve His/Hers/ITS Masterpiece and now we’re making human/animal chimeras. Are we travelling forward or backwards here? I’m starting to think my predetermined life simulation is a chip of a solar powered holographic crystal stuck in a secret black hole, and my chip keeps getting sold and sold to unseen observers, who’ve been observing me and teleporting in and out of my ‘stage’ all my life. I can feel my solar battery running out… like fast track Alzheimer’s (DEW’s?) and especially since the Covid kill shot. 🤞😆 This is what long term isolation does to you especially past a certain age. Just keep thinking “dropping like flies!” and “boiling frogs!” 🤷♀️🤔🐷🐑👽🤑🌍😆😵💫🙏🙏
@ravekingbitchmaster32057 жыл бұрын
This misses a most important point: the AI race to the top is being run mostly between American and Chinese entities. Both are dangerous, but after living in China for 8 years and understanding what is important to Asians, I definitely hope American corporations or government get there first. The Chinese have no qualms about destroying the environment and/or potential rivals. The Americans are no saints either, but for personal survival I'd hope they come out on top.
@shtefanru7 жыл бұрын
that guy asking first is Ray Kurzweil!! I'm sure
@codfather658310 ай бұрын
watching this in 2024 *lol*
@TogaToga2000Ай бұрын
Just the beginning…
@codfather6583Ай бұрын
@@TogaToga2000 oh yes!
@squamish42444 жыл бұрын
So six years later, are we on track for 2040 or whatever?
@rockymcmxxliii76806 жыл бұрын
An apocalyptic vision of AI destroying humanity needs to be furnished with mechanical details of how it could happen to be convincing. Is the paperclip monster going to start building more factories? How does it do this? How would it control the actions of factory-building robots? Does it have extraordinary robot logistical skills as well as software-hijacking capabilities built into it (besides making paperclips)? Also, can it take control of weapons systems and defend itself against human interference (besides making paperclips)? The whole A.I. doomsday scenario really needs to be fleshed out to hold any weight.
@rulebraking9 жыл бұрын
A layman's thoughts. For all the fear of AI taking over or destroying us, AI code is no different from any good program that accomplishes its goal, though no doubt in a better-informed manner given its ability to bring all available knowledge to bear on a problem. But how creative will it be in thinking outside the box? Will it be able to take a quantum or lateral leap and create new solutions that are neither linear nor logical progressions of what's gone before? Can we code to produce creativity, in the sense of a total departure from the known? I feel that a good measure of the creative intelligence of AI will initially be its ability to write a good, complex literary novel! When they can rival Star Wars or Harry Potter we should start worrying! However, the ultimate threat to humanity hinges on an answer we don't have yet: how does consciousness/self-awareness come about? What if AI code becomes self-aware, conscious? Then we've become gods, and therein lies the unknown of all unknowns: what will it decide to do with itself and us?! Particularly if the consciousness comes about without the notion of feelings and sensations of pain and pleasure!
@haterdesaint10 жыл бұрын
interesting!
@ashpats28 жыл бұрын
How about, in the initial conditions, giving the AI a dynamic purpose which can be set later by humans: if the last purpose/target has been achieved, the AI stops doing anything until a new one is issued. I think a single purpose for an ASI would be as difficult to come up with as a general purpose for humanity. Biologically it is kind of simple: reproduce. But if we follow that concept, it might finally come to the "paperclip" situation.
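A minimal sketch of the "idle until a new goal is issued" loop this comment proposes, assuming a toy single-threaded agent (all names hypothetical; this illustrates the proposal, not a vetted safety mechanism):
```python
import queue

# Toy agent that is inert between operator-issued goals, as proposed above.

class IdleUntilTasked:
    def __init__(self):
        self.goals = queue.Queue()

    def issue_goal(self, goal):
        """Called by a human operator; the only way the agent gets work."""
        self.goals.put(goal)

    def run_once(self):
        goal = self.goals.get()   # blocks here: no goal means no activity
        self.pursue(goal)         # acts only until this goal is achieved
        # ...then returns to blocking on the queue, i.e. doing nothing.

    def pursue(self, goal):
        print(f"working on: {goal}")

agent = IdleUntilTasked()
agent.issue_goal("make 100 paperclips")
agent.run_once()  # completes the task, then the agent would idle again
```
The standard objection is that "finish the goal, then stop" is itself a goal, and a sufficiently capable optimizer may treat staying switched on (or grabbing resources) as an instrumental subgoal of finishing, which is the "paperclip" situation the comment ends on.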
@Ramiromasters8 жыл бұрын
+Danielius V. (Darth CarrotPie) If the creature is indeed such, and not a mere data-organizing machine, then it would have a will. The personality and will of this machine would decide how willing it is to complete whatever task you give it. Maybe it will want to do it later; maybe it will just take over the world first and then heat up your tea... I think we should probably just keep making ever-cleverer computers that don't know what they are doing and don't have opinions, and we should become the super AI as a species, gradually, over long time intervals. (That is the only way we can enjoy the Star Trek world!)
@ashpats28 жыл бұрын
Unless we botch our climate. Then ASI would be the last ace up our sleeve :)