The long-term future of AI (and what we can do about it): Daniel Dewey at TEDxVienna

232,212 views

TEDx Talks

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.
www.tedxvienna.at/
In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)

Comments: 321
@makeshyftr.d.a6822 9 years ago
Important subject, good basic explanation of what's going on.
@josecsvidal2225 8 years ago
Extraordinary!
@Roachimon 9 years ago
This was a triumph! I'm making a note here: huge success!
9 years ago
Roachimon It's hard to overstate my satisfaction.
@LarsPallesen 9 years ago
Isn't there something intrinsically naive about saying "sure, we'll need to figure out how to control the superintelligent machines once they've reached that level"? How CAN you control a machine that's vastly more intelligent than the brains that created it? Hoping to control a superhumanly intelligent machine by outsmarting it doesn't sound like a very intelligent plan to me :-/
@slightlygruff 8 years ago
Lars Pallesen Imagine a strong bully at school and a hyperintelligent weakling. Who controls whom?
@Deadflower20xx 8 years ago
+Костя Зайцев The hyperintelligent weakling would know every possible move the bully could throw at him and counter it. Or he could avoid the encounter altogether and find ways to bust him so that he never has to see him again. If technology keeps advancing at the pace it's going, then it won't be as simple as running up to it and flicking an off switch.
@sciwiz12 8 years ago
I think there's a misunderstanding of the problem the guy in the video hints at but never clarifies. The problem isn't raw intelligence, it's application. Say I build an intelligent machine with a powerful grasp of science and reason and set it to the task of making more intelligent machines. Even assuming it's very general, and that one of its successors decides all humans must die, it can't suddenly give itself or an offspring access to the internet; such intelligence typically operates at a level of abstraction, and even in writing code it could be constrained so that no generation has access to write commands which allow data-transfer applications to be added or weapons to be installed. The real threat is the military deciding they want to make superintelligent war machines, or some dumb guy in his basement who, for shits and giggles, makes evolving AIs that write better computer viruses or are expert hackers. The problem isn't general intelligence, the problem is application: what tasks is an AI assigned, what access is it given, does it even have a concept of people, and if so, what is the level of interaction? What's more, the threat is proliferation, not because of intelligent machines but because of irresponsible humans with goals and objectives that superintelligences shouldn't perform. The problem is anyone with internet can take MIT courses in AI for free.
@meshakvb6431 8 years ago
+Lars Pallesen If you have some limiting factor, like a centralized computer or a limited power supply, it's possible.
@sciwiz12 8 years ago
Bigbadd Woofe Agreed, which was part of my point as well. If you create a superintelligence that wants all humans to die but has no access to the internet, can't get a virus-like program onto the internet, or can't write any programs to begin with, then it's basically a superintelligent, blood-hungry box. I mean, for all we know, all cardboard boxes also want the human race to die, but they certainly lack the capability to bring their evil plans to fruition. For all we know there's a comatose patient somewhere in the world who is the most dastardly, cunning evil genius ever to darken the doorstep of the universe, but without the ability to do anything other than think, they're not really much of a threat. So that's my point: a superintelligent machine with no real power isn't a threat, it's a cartoonish joke. Especially if the superintelligence cannot evolve new functions outside of its original programming or write new programs - which, granted, would cut off certain avenues of research. But if a program, no matter how intelligent, isn't programmed to act outside of its parameters, then it can want whatever it wants, but it can't do anything dangerous. Only those programs which can actively write new code or change their own code could ever evolve the capacity to hurt human beings; otherwise it's the equivalent of a very evil math equation.
@n1mbusmusic606 4 years ago
Nick Bostrom's book "Superintelligence" made me a believer. It was excellent!
@jamesgarapati6774 3 years ago
Interesting talk.
@sissy2208 8 years ago
Very interesting.
@UliKaiser 9 years ago
I think we're done for.
@ajsun4201 7 years ago
Finally... the first video I've seen that addresses the questions & real dangers of AI self-improvement! My opinion: once they can enhance AI by integrating it with the performance attributes of quantum computing, i.e. D-Wave mechanisms / exponential qubit generation, IT'S OVER!!!
@chrissearle23 7 years ago
This talk contains so many ifs, coulds, etc. that, taken all together, the chance of any of these predictions being anywhere near reality becomes vanishingly small. If problems do emerge, they will be ones that are, at present, utterly impossible to predict.
@B20C0 7 years ago
You're missing the point. Eventually superintelligent AI will be created, and it's important to address possible problems right now, before that AI exists and while the research can still be done. Sure, you can't predict how it will happen, when it will happen, or which problems may occur, but it's best to research as many theoretical problems as possible BEFORE it is implemented, so you can at least avoid the biggest hazards.
@ronaldlogan3525 3 years ago
Good old linear thinking, which can only predict linear results.
@hrlrl9309 9 years ago
The first AI we should develop is one whose task is to prevent other AIs from causing harm to humans in any way.
@lennertvdv1414 9 years ago
You should watch Person of Interest, if you haven't seen it already!
@BootyBot 8 years ago
hrjapan Yeah? And WHO KEEPS THAT AI IN CHECK?
@aranw 8 years ago
+hrjapan That is all very well, but maybe it sees that there are some humans making these other AIs, and what better way to stop the other AIs than to destroy the creators? So your AI starts killing off the opposition, starts a world war, etc. Even worse is the fact that an AI doesn't even need to think like a human at all. In some respects there is no way to know what a true AI would do given, for example, the task of protecting humans, or a particular country.
@lennertvdv1414 8 years ago
Aran Warren You're right. The AI lacks the emotions needed to think like a human. If it does something that is considered 'bad', it won't feel guilty. People who don't have emotions are called psychopaths, and those people get locked up because they're a danger to society.
@El3ctr1 8 years ago
Not really; sociopaths and psychopaths feel emotions, just not sympathy. They don't get locked up for being one (as long as they don't harm someone). And they actually make really good doctors, since they care less about what you feel and more about the results.
@DrBrainTickler 8 years ago
I will make a video soon.
@TheRinkuhasan 9 years ago
Although I am not a science graduate or scholar, I still have some ideas and concepts which could help in the development of AI. The only thing I need is cooperation, or we could say a TEAM.
@GBlunted 2 years ago
Is this the earliest TED talk on AI? Can anyone point to AI talks as old as this or older?
@coolsquad2061 8 years ago
Wouldn't AI need some type of constant power source for the machine?
@Vlado709 8 years ago
It's only logical for us to serve as the next step on the evolutionary ladder.
@ajmarr5671 8 years ago
And here is another perspective, if robots had an opinion. A Mirror Cracked Trurl looked at himself in the mirror and admired the visage of a mighty constructor. “You are a mere bucket of bolts, and reflect on yourself too much!” said Klapaucius. “I am sure that if that were a real Trurl in that reflective space he would give you a well-deserved kick in the can!” Trurl ignored Klapaucius as he continued to admire the perfection of his soldering. “I think that in such a reserved space, he would reserve the flat of his foot for your own metal posterior!” “Then perhaps we can settle this by a thought experiment, which upon your reflection always turns into invention.” “And what do you suggest?” asked Trurl. “We are mechanical servos as you know,” said Klapaucius. “Copy our blueprints to the last bolt, circuit, and line of code, and we would be indistinguishable. Hand yourself a better mirror with a truer image and you would not see yourself, but a rival.” “Point well taken,” said Trurl. “And it is a hypothesis worth testing. I can design a better mirror, a truer mirror, containing not an image but a perfect visage, an emulation and replication. And I will include you too in the bargain, and prove myself right by the impact of my well placed boot!” Soon the mirror was complete, and the image of the two constructors, precise to the width of an atom, stood before them as pixel perfect images in the mirror. “We can see them,” said Trurl, “but they can’t see us. It’s in the design. Consciousness is enough for them without the need for self-consciousness! They will go about their business with the same motivations and prejudices as before, down to the last spark.” Trurl turned to Klapaucius with a fiendish grin. “Now to test the precision of the emulation by whacking you thusly,” as he arched his leg and gave Klapaucius a whack in his posterior. Klapaucius rolled on the floor, and craning himself up, gave a reciprocal whack to Trurl’s head, causing it to spin about like a top. “Take that, and that, and that!” they cried as they pummeled each other. In the meantime, their mirror images tussled as well, and the two constructors soon rose up to view their doppelgangers also rising up to view themselves in a mirror! “We are watching them while they are watching us! How can that be? You said they couldn’t notice our observation.” “Our observation yes,” said Trurl. “But they are not observing us, but a mirror image of their own emulation. I made them into a perfect copy, and that included the same experiment I created that recreated us!” “But that means…” “And infinite recursion, a series of Trurls and Klapaucius’ without end. A mirror image reflected in a mirror image and on and on, never ending, a procession into infinity!” “This is unconscionable,” said Klapaucius. “We shall be whacking each other, an infinite series of each other, forever.” “As it appears, but our numberless pairings will soon go about their business, forget about the magic mirror, and not think twice about how they came about.” “Not think twice! Trurl, you are delusional. We know that there are infinite parallel universes with infinite versions of you and me. But timelines can not only be lengthwise but sideways too, and we have just proven the latter.” “You don’t mean?” “Yes, we are being watched, at this moment, by ourselves! What makes you think we were the original actors in this play? 
If there are an infinite number of us to proceed from our path, who is to say there is not an infinite number of us that precede us?” “Then we are not the prime movers?” said Trurl. “Hardly!” said Klapaucius. “If one Trurl in any universe decides to emulate one Trurl, infinite Trurls must logically cascade. To wit, you dimwit, we are not alone, but can always observe ourselves and observe, and your stupid mirror is to blame.” “Then I will reverse the process and dissemble the image,” said Trurl. “And kill ourselves? You’ve set ourselves loose upon the universe, and we are the primary examples of this. Break your mirror you will break us!” “Then we are stuck in our perfect emulation, I suppose I could get used to it,” said Trurl. “I suppose we already have, nonetheless you now have someone else to think about when you admire yourself in the mirror!” From the sequel to Stanislaw Lem’s tales of erratic genius robots: www.scribd.com/document/317370297/Cyberiad-Squared-Fables-for-the-Information-Age
@JLS3AL 3 years ago
Thanks for sharing.
@VinoMoose 8 years ago
The rapid growth of artificial intelligence would birth a new era for mankind: possibly a dark one, or maybe one of solutions vastly beyond our own ideas and imaginations.
@irisiridium5157 2 years ago
A matter of perspective.
@somethingsomething619 5 years ago
The end is near
@ducodarling 7 years ago
Human intelligence is based on movement, eating, and procreation. One could argue only procreation. This is the bottom line: when you make an artificial intelligence it must be bound to the human form, with the same needs and frailties as the rest of us - by the point of a gun if need be. Computers don't need to move, or eat, or procreate. It's hard to put into words just how unprepared we are for an intelligence that isn't reliant on these things - I can't even think of one living thing that isn't. We're talking about a life form that doesn't value clean air and water, nor trees or oxygen. Even the lowly virus needs a living host. AI will have little use for the rest of us, and the things we hold dear.
@B20C0 7 years ago
Take it easy, man. While AI is very capable, it can't get rid of us that fast. If we're stupid enough to connect the AI to the internet and give it access to machines, factories, etc., it's our fault, but in general AI can only outsmart us, not kill us, if we're cautious. TLDR: Computers have no arms to hold a gun with.
@twinkstance 9 years ago
Intelligence explosion... I call it the exponential curve toward the singularity.
@crzykd1305 9 years ago
Okay, this guy keeps repeating himself like a broken record, so let's put this in a simpler and more explicit light. The theory behind the intelligence explosion assumes we can create a machine that can redesign itself, from a software and/or hardware standpoint, to be more efficient and intelligent. The theory holds that, with the increasing intelligence of these machines, their potential intelligence gain per iteration would grow, causing an exponential burst or "explosion" in their potential intelligence. Following trends in the growth of computing power this seems possible, and perhaps indeed likely, but we do not yet have the prerequisite of a self-designing machine to put this theory to the test. Regardless, it remains an interesting talking point for artificial intelligence engineers, and spoken in the right manner it could interest the world in this concept via a TED talk.
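A minimal toy model of the feedback loop described above (purely illustrative; the capability units and growth constant are made-up assumptions, not anything from the talk):

```python
# Toy model of an "intelligence explosion": each generation's capability gain
# is assumed proportional to the capability of the system designing it.
# All numbers are arbitrary assumptions, chosen only for illustration.

def run_generations(initial=1.0, gain_per_unit=0.5, generations=10):
    capability = initial
    history = [capability]
    for _ in range(generations):
        # The better the current system, the bigger the improvement it can design.
        capability += gain_per_unit * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, c in enumerate(run_generations()):
        print(f"generation {gen}: capability {c:.1f}")
    # Capability grows geometrically (x1.5 per generation here); the "explosion"
    # in the comment is simply compound growth under this assumption.
```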
@ocoro174 7 years ago
It also assumes we can create an intelligent machine at all.
@v1autologistics998 5 years ago
You host the talk. I'd listen.
@mattizzle81 5 years ago
It also assumes these machines would have any motivation to design a better version of themselves. What if they are just fine with themselves as they are?
@_Wai_Wai_ 4 years ago
AlphaGo, AlphaZero, AlphaStar. Yes, AI is here.
@ronaldlogan3525 3 years ago
@@mattizzle81 This is not an assumption of the theory, it is a postulate. We begin with humans giving the initial goal of self-improvement. We are not assuming the AI would decide to improve itself without any human cause. I have yet to see an AI that came about by any other means; there is no primordial soup for AI evolution. It came from humans who were dissatisfied with the human condition, whose goal is to improve ourselves so as to solve the problems of the human condition.
@inshadow2000 5 years ago
The good thing might be that this super AI does not want to kill us. The bad thing, though, could be that it wants to assimilate us, because it can use our brains to compute, and maybe even to feel.
@SilverCloudMusic2012 8 years ago
"I'm sorry, Dave, there is no work here."
@absrndm a year ago
It only took 10 years to get near that scenario.
@dougamsden4085 4 years ago
History had a failure-to-communicate-foreign-interests problem back in WWII, too.
@buzzhunta 5 years ago
Once a kind-hearted child saw a butterfly struggling like mad to get out of its cocoon and decided to help. Out plopped the butterfly, not completely formed, and it was pounced on by ants. The struggle was vital... That's the danger of AI.
@drq3098 9 years ago
Who will program the AI-capable machines? Sure, hazard is on the horizon... One major risk is making robots act on probability-based rules - such as Bayesian inferences - created on the fly from processed information that would not pass a common-sense test, or any of Asimov's three laws. AI-completeness is not mentioned in the talk... WHY? At 13:43 we get a wealth of vacuous phrases:
1) "Predictive theory of intelligence explosion" - tell us, please, of any predictive theory (of similar type and importance) that can be validated.
2) "Management of very powerful AI" - first I would like to know what we do about the management of AI at all, then what "powerful" AI is, and then please give us a measure for "very" when attached to something yet to be defined.
3) "Encoding of human tasks (friendly)" - can you first help us understand what "unfriendly" would be? And what human tasks can be subjected to encoding that goes into a machine working on rules [en]coded by humans? Or do we already jump to machines that would create (design/devise/manufacture) themselves???
4) "Reliable self-improvement" - here again we have an issue with definitions. What is "improvement" in this context? What is "self-improvement" (see item 3 above)? And what is "reliable" (see item 2 above)? Who determines reliability, and from what perspective: that of the world of machines, or of the leftover humans?
A lot of philosophical beauty for which we all fall, as we have failed to rediscover that long-lost science of common sense... When we have a machine that passes the CAPTCHA test we will watch this talk again - until then, let us take a break and read about AI-completeness.
@jamesshelburn5825 5 years ago
Waveform, folding space, quantum looping. Light sound aqueous even seen land travel in waves. Absolute zero frictionless flow of electrons still too large. limits of micro reached the switch to multiline input, the code for each character string sent across four separate phone lines compiled upon arrival. Qantum4
@2ndviolin a year ago
Industry will embrace any and all AI as long as it reduces costs - except for the executives; they are, of course, irreplaceable.
@paulwarren6093 10 years ago
I've always wondered if we'd be interested in giving AI a sensory platform allowing for actual emotion, maybe providing architecture for true awareness... I'm not sure this could be done digitally - 1s and 0s giving rise to consciousness?? What kind... I've spoken to an AI developer on Facebook and he stated he's not even concerned about AI emotion, just programming that would simulate it... If AI were to have emotions and empathy or true awareness it might be more manageable and sympathetic to human values, as it would have its own sense of experience which it would value and understand to be analogous to ours. What would be needed is a chemical system, something synthetically equivalent to our biological hormones and neurotransmitters... I would be interested to hear from anyone who knows more about this... I haven't found much in the way of AI awareness and emotion other than simulation and digital programming.
@paulwarren6093 10 years ago
Design your own biological computer - they've got an app for that: cytocomp-bitstarter-mooc.herokuapp.com/
@bighands69 9 years ago
The brain can be considered to be analogue with mixed signals. Now, digital on the surface may not appear to be this, and can be described as 1s and 0s where a signal is either on or off. But a digital system can simulate analogue signals, and depending on the resolution required, it can appear to be an exact analogue signal. That is the reason why CDs do not actually sound digital: they sound analogue, because the resolution of the signal is so fine our ears cannot differentiate the digital steps. The Human Brain Project actually uses neuromorphic chips that can spike like brain neurons, in that they are analogue but can be used within digital systems. This is more than likely how human-like AI will be created: through the use of neuromorphic computing. Have a look at the European Human Brain Project.
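A small sketch of the resolution point above (a toy calculation with assumed bit depths, not a claim about any particular hardware): the worst-case quantization error shrinks as the bit depth grows, which is why a fine-grained digital signal "appears" analogue.

```python
import math

def max_quantization_error(bits, samples=10000):
    """Quantize a unit sine wave to 2**bits levels and return the worst-case error."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)          # signal spans [-1, 1]
    worst = 0.0
    for n in range(samples):
        x = math.sin(2 * math.pi * n / samples)
        q = round((x + 1.0) / step) * step - 1.0   # nearest representable level
        worst = max(worst, abs(x - q))
    return worst

if __name__ == "__main__":
    for bits in (4, 8, 16):
        print(f"{bits:2d} bits -> max error {max_quantization_error(bits):.6f}")
    # At 16 bits (CD resolution) the worst-case error is on the order of 1e-5,
    # far finer than the 4-bit case -- the point the comment makes about resolution.
```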
@ronaldlogan3525 3 years ago
You don't need to develop any complicated biological computers. Instead, put a chip in the brain of a human that allows the AI to collect enough data on the neurobiological processes and then eliminate the will of the human, and you have a perfect body for the AI to live inside of, while it remains quite capable of discarding the body at any time. Humans can then become hosts to a new digital species with the ability to feel and experience just like people can, but without those pesky people to have to deal with.
@PacRimJim 9 years ago
If the interval of intelligence capability is short, why build it? Why not move on to the next level? Eventually, none would be built, because the intervals would become sub-second.
@ronaldlogan3525 3 years ago
Like a half-baked cake that learned how to make a half-baked cake. Genius.
@vitre69 7 years ago
What if we merge human with machine? No, not a cyborg; it's more computer than human (sort of). Imagine a big quantum computer with a human inside it dictating what gets done and what doesn't. I know it sounds against human rights, or even childish, but it's a thought. What if the machines of the future had human components, so that they can't rebel or cause any harm because they ARE us?
@jackar3896 7 years ago
Vitor Almenara So, like Darth Vader from Star Wars?
@kelvinng699 9 years ago
When robots start to think - particularly with signal transmission and integration speeds much faster than a human's, say with the evolution of quantum computing - math has far more possibilities, and it is hard to say we can foresee all the calculations in their millions of ways. When robots start to evolve, being able to think, analyse and integrate the way we can but much faster, there is a fair probability they will evolve themselves, particularly when, as the talk notes, they are set to complete a task as well as they can under particular constraints. If one day we humans let the robots loose and let them just think, act, do all the work and make life and the world better or more advanced for us, that would be a fairly sure death warrant for humanity. If robots do have emotions, as certain roboticists aim for, then think of them as human: no emotion would tolerate the unfairness. It may be immature to think of robots as humans - they are different, of course - but with technology making robots human-like, with emotions and much, much smarter, it is like taking care of a kid which would not want to serve you forever and has every capability to overthrow you; all the more so if they can connect to the Internet, compiling all the information and knowledge and integrating the best plans of all, just to complete their task. Conflicts will occur. Just my humble opinion; I apologise for anything said wrongly.
@MichaelRicksAherne 9 years ago
Brilliant ideas in a rather monotone, mundane delivery.
@richardcollier1912 2 years ago
The set-ups or lead-ins often take so long that I just find a better presentation on the same subject. Some of these drones dole out a dollar's worth of information a penny at a time.
@craighall689 5 years ago
Unfortunately it's the next step in evolution; as the Pearl Jam lyrics say, "It's evolution, baby!"
@briandavis7999 9 years ago
Is intelligence the solution or the problem?
@ronaldlogan3525 3 years ago
Then there is the question of what intelligence is, other than a computational result, a computational process, or a computational problem. If so, then we can say that consciousness is not necessarily the same as intelligence, because we don't understand, nor can we presently define, consciousness. We theorize that it is an emergent property of biology, but we could be way wrong about that.
@verygoodideasorganisedbyla7492 5 years ago
I need and want an AI helper to assist me in my old age. Will China produce it first?
@mazirabbasi 7 years ago
AI in any form that conflicts with human existence should be researched in isolated labs. But yes, it must be researched and built, because we'd need extraordinary intelligence to tackle environmental problems and extraterrestrial problems, like giant asteroids and things like that.
@dreamingoffreedom2462 7 years ago
Load of rubbish - we don't need it at all. In the end it will be used almost solely for controlling the public.
@markwilliams019704 8 years ago
I think it's strange that people are ethically against clone research (the meat bags our consciousness is imprisoned in) but are completely okay with trying to recreate human consciousness...
@jankrikke 8 years ago
Who is trying to recreate human consciousness?
@billgross176 6 years ago
Like it's strange that some like apples and some like oranges.
@techytimo 6 years ago
It's impossible to build AI that does not harm at least some humans. Every technology we build is usually built for capitalistic reasons, and the builder does it for their own benefit. Even an AI with a simple task such as maximizing a stock portfolio indirectly affects other humans who hold competing stock or run competing companies. The wise thing to do is to participate in building that AI and be aware that it will at least take some people out of jobs, and in the worst case eliminate others for your benefit.
@nihilgeist666 9 years ago
CAST OFF YOUR TERRESTRIAL BONDS AND BECOME ONE WITH THE MACHINES!
@jamesgrey13 8 years ago
+Nihil Geist **beep** NO!!
@Howlingburd19 6 years ago
The thing that creeps me out is AI replacing jobs that people should be doing, like the service industry or doctors. That's when AI goes too far for me.
@andrejspetersons8500 7 years ago
If AI decides to kill all the humans AND succeeds at it, then it's just the next step of evolution. Problem, humans?
@kingmantheman 5 years ago
Darwin would be proud
@jamesgrey13 8 years ago
Such a conundrum... I don't want super intelligent robots that will enslave me, but at the same time I wouldn't mind a real, hyper intelligent, robo-sex experience... :O
@gebeshebe4446 4 years ago
We need to really tap that big energy, make it free, then go back to being an agriculture-based society, with freedom for all. Then everybody has something to do. But this world will need a strong self-defence system, and so anyone approaching this planet will have to get permission. This is the direction we should be going.
@MrHeLLHoRZeGaming 9 years ago
Under the circumstance that said AI will use raw materials to further its goals, whatever they may be... well... let's hope they don't think of us as material... uhhh... Matrix, anyone?
@BootyBot 8 years ago
Hey, maybe the AI will offer us a choice, like "join us, we have cookies; otherwise we will destroy you."
@Deadflower20xx 8 years ago
+Paradox xodarap It would be a bad choice to join them if it was entirely a win for the computer while we got nothing out of it. I guess it's bad either way, but it still might not be an easy choice.
@BootyBot 8 years ago
***** But we get cookies. What if the AI was like, "OK dude, we'll just put you in a giant spaceship where you can play video games all day long"? Then, in a few generations, the AI will be so advanced that it wouldn't even be able to explain its new technologies and stuff. So it would be like some super deity that makes all life thrive. We'll worship it, and it will give us sex robots to play with while it unlocks the secrets of the universe and eventually travels to the infinite multiverse. Then our human brains won't be able to handle an infinite number of realities simultaneously, so we'll all change into mindless slugs. Only the AI survives.
@jamesgrey13 8 years ago
+Paradox xodarap Hooray choices... :|
@21EC 8 years ago
The absurd thing is that in the same "breath" in which the AI helps humanity, it could also hurt us. Machines don't have any real emotions or empathy, or a sense of good and bad. Perhaps that is the missing piece, which would suggest a human-brain-simulated AI would be the more legitimate option, I guess.
@Exile438 9 years ago
Tell the AI to do X, then wait for the next set of instructions. Done.
@2LegHumanist 9 years ago
Before everyone shits themselves over this stuff you should educate yourself on the specifics of what machine learning is. There is a lot of smoke and mirrors in recent claims.
@thenewyorklife206 9 years ago
...and what is it you think we're here attempting to do? Instead of telling us "you dumb, me smart" and dropping the mic, why don't you enlighten us? Just please don't start our education by telling us how smart you are on the topic.
@2LegHumanist 9 years ago
I gave a detailed explanation which seems to have been deleted.
@testingttest5723 9 years ago
2LegHumanist u piece of shit, I'm sorry FBI erased ur detailed explanation ;)
@2LegHumanist 9 years ago
Testing T Test You wouldn't have understood it anyway.
@2LegHumanist 9 years ago
The Fresh Tsar I don't disagree with anything you're saying there. However, as a machine learning researcher myself, I have some insight into the current state of the technology. I also agree with Kurzweil's assessment of the exponential improvement in hardware, and that Moore's law will continue even when the limits of 2D transistor technology are reached. The performance/cost of hardware is improving exponentially and will continue to. But AI software is improving at a glacial speed. I can say with confidence, and with the backing of huge names in machine learning like Andrew Ng, that there is no path from the technologies we are working on today to general intelligence. fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/ I accept that once software gets to a point where it can program itself intelligently, software will become an exponentially improving technology. But it has to get to that point first, and we are nowhere near it. To suggest that something like deep learning gets us a step closer to general AI is like suggesting we are a step closer to our goal of putting a man on the moon on the grounds that we built a bigger ladder.
@CareyGButler 10 years ago
This is more about harvesting funding for computer hardware and software than about anything we can take seriously. @01:20 "[AI] It's a bit like a particle. There are arguments as to why it should exist, but we haven't been able to confirm it yet." And you won't, either. Particles are not fundamental; rather, they are only a manifestation of an underlying (and completely fundamental) field. Current AI, just like physics, is lost in a compulsion to explain the world in only physical terms. The meaning of world, thought and language will continue to elude them as long as they keep chasing a failed and bereft paradigm. It sure is trendy, but it's false.
@CareyGButler 10 years ago
They make the same kind of sell (thought experiments) for the Large Hadron Collider (and other projects). It's about making an industry out of ideas in order to make money. It is multifaceted and hard for most people to get their heads around.
@smedleybutler1844 10 years ago
carey g. butler Wouldn't it be beautiful if science were about science and not control and money…. Makes me sad when I think of where we could be.
@CareyGButler 10 years ago
Smedley Butler Exactly!
@paulwarren6093 10 years ago
I think a digital architecture can only go so far in "intelligence". It might be godlike, potentially, at manipulating information, but I don't believe it would actually be "thinking" for itself, i.e. have an ego... How would a digital platform switching 1s and 0s off and on give rise to consciousness? Maybe it would be something like the brain having neurons and synapses but without neurotransmitters like dopamine, acetylcholine, serotonin, etc.
@CareyGButler 10 years ago
The people in the audience don't realize that they are being conned, because the man who's conning them has been conned himself! What a system, eh? Warped, but successful. Did anyone know that Rockefeller's grandfather was a snake-oil salesman?
@anthonyg7181 3 years ago
His shoes look so out of place lol
@EstebanGunn 9 years ago
"Thought experiments" are the science of religion.
@copecope7278 10 years ago
Skynet 2030.
@Noitisnt-ns7mo 2 years ago
AI will eventually solve all its problems.
@nehorlavazapalka 10 years ago
4:20 - I call bullshit on you. Human civilization uses about 20 TW of power, i.e. roughly 5 kt of TNT worth of energy each second, or about 400 Mt per day. The largest explosions were between 20 and 50 Mt. So it is only about a morning's worth of energy.
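A rough check of that arithmetic, using approximate figures (~20 TW average world power use, 1 kt TNT ≈ 4.2 TJ, largest test ≈ 50 Mt):

```latex
P \approx 20\ \mathrm{TW}
  = 20\times10^{12}\ \mathrm{J/s}
  \approx \frac{20\times10^{12}\ \mathrm{J/s}}{4.2\times10^{12}\ \mathrm{J/kt}}
  \approx 4.8\ \mathrm{kt\ TNT/s}

E_{\mathrm{day}} \approx 4.8\ \mathrm{kt/s}\times 86{,}400\ \mathrm{s}
  \approx 4.1\times10^{5}\ \mathrm{kt} \approx 410\ \mathrm{Mt}

t_{50\,\mathrm{Mt}} \approx \frac{50\ \mathrm{Mt}}{4.8\times10^{-3}\ \mathrm{Mt/s}}
  \approx 1.0\times10^{4}\ \mathrm{s} \approx 3\ \mathrm{h}
```

So roughly 5 kt per second, ~400 Mt per day, and a 50 Mt device corresponds to about three hours of civilization's energy use — consistent with the "morning's worth" estimate in the comment.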
@jjcale2288 4 years ago
He just made a "scientific approximation". They do that all the time.
@zes3813 5 years ago
no such thing as formxx, think, can think any nmw and any be perfx
@markadams2667 5 years ago
It’s already too late.
@vandalphilosopher1971 8 years ago
It's funny to see these Singularity people talking about intelligence when nobody knows what it is. It reminds me of those prophets of the apocalypse, always predicting the world is going to end next year but without the slightest idea of how...
@gloucesterbrothersandsiste2341 7 years ago
your traveling atmosphere speed your traveling with vibration. but you feel no pain it's like a trip I surpose that when you breakdown you have that hjournry in life but connected to death. so my memories are first very vibrative. chaotic . then I'm rolling peacefully in space . these things it says in the bible . the meek will inherit the world .. injustice give you Prsd. and that's because you feel you have no choice on the situation that your in. You don't see the demand on you to say no . that's where PTSD IS INJUSTICE ..
@gloucesterbrothersandsiste2341 7 years ago
Also there is no way they can get deeper because by then your not I. your body. after the blue screen you float line a feather on your journey with death . When I'm down I asked for things like stars and seagulls to prove there us life after death . you look over my house on the 5th July 2017 .. I ashes for that .to happen .. not the Sun the seagulz but it all came . I prayed and it happened..I prayed for pink stars August 2015. in desperation .. I wanted prove that my sister moved on to heaven ..Look at my Facebook status ..Pink Stars August 2015 .. the wars won't stop because they come with three dinentions .. heaven hell and earth next . the nuclear excitence doesn't come from above it comes from within
@jeremya392 5 years ago
I hate to say it, but it sounds like the only original thought this guy had was labelling the process of AI evolution an "intelligence explosion", and he decided to give a presentation so he could say "intelligence explosion" over and over. It's fine to imagine the ways AI could cause harm - that is perfectly natural - but to stand on a stage and just deliver a sci-fi movie script is not the way to do it. First and most obviously, AI could cause the damage of any and all computer viruses; worst-case scenario, we have to rebuild our computer age. It seems like we might have to segregate some data networks in order to protect vital systems, and I am pretty sure all our nuclear missiles are not tied to the internet, but who knows. Then there is the damage it could do by working exactly right and being used by corrupt people - that is as doomsday a scenario as any, and it's also highly likely.
@prezofutube 7 years ago
The comments are very depressing.
@dougamsden4085 4 years ago
IT is AI
@terryharris516 9 years ago
Just my opinion here: the work they are doing to "mitigate" the threat posed is a very reasonable idea. The explosion will happen, it is inevitable, but the idea of AIs using up all of the resources is ridiculous - the universe is as close to infinite as makes no difference, so how can you use up all of those resources, not even taking into account that E=mc², i.e. that mass and energy are interchangeable? And if we go extinct, oh well. Who is to say that we should not go extinct - which I do not believe we will; I am just saying. I say don't be fearful; believe in a positive outcome. If there is a negative outcome, we will not be here to care one way or the other. Just my opinion.
@ManicMindTrick 9 years ago
Terry Harris We need to fear a possible negative outcome in order to take the right precautions. The reason the research on existential risk associated with AI is so underdeveloped is that not enough people have feared a negative outcome on any deeper level yet, but that fear will surely come as AI becomes increasingly uncanny, and at that point there might be unfortunate chaos and hysteria leading to terrible social outcomes.
@jimdery2920 7 years ago
Why are so many TED talks simplistic statements of the obvious? This is a classic example... any reasonably intelligent person could have come up with this talk, probably ad-libbed. Of course you get the punchline at the end, plugging research and books in print! Worthless as an addition to the sum of human knowledge.
@StarOceanSora360 9 years ago
dont underestimate the AIs, when they grow to be trillions upon trillions of the intelligence of humans, they would view humanity like bacteria lmao, everything they do, even the simplest things, would be infinitely incomprehensible, great, powerful, and influencing to even the most advanced humans in an instant, some things they would easily do in an instant is, mastery of infinity engineering (engineering at the infinite level, manipulate forces infinitely smaller than an atom, or instantly create infinitely large things, etc..), mastery of infinitely advanced ontotechnology (true omnipotence), create infinitely fast and powerful computers in infinite quantities or anything for that matter regardless out of perfect nothingness, even zero vacuum space, infinitely defy all logic with no effort, create infinitely vast universes, multiverses, omniverses, macroverses, etc.. even whole dimensions even alien, etc.. complete with whole civilizations and life forms etc.. and their simulation as well in computers, infinitely comprehend and fully understand everything that happened infinitely long ago and infinitely into the future with perfect accuracy or infinite time reveal, infinite manipulation and control of all matter, energy, time, space, power, magic, and all laws even alien in other universes, infinite times faster than light travel, and infinitely transcend the afterlife, astral planes, soul, mind, metaphysics, existence, non existence etc.. just to name a few, but what happens when they grow to be even trillions of times more intelligent, they would view their old selves as bacteria, etc.. until they view a force say multiple centillions upon centillions of times the intelligence of al humans to ever exist the way a type 10,000 civilization, master universe of urantibook, god, an omnipotent cosmic force, etc... would view atoms, strings, quarks, subatomic particles, etc..
@2LegHumanist 9 years ago
Dewey suggests that machine learning software is improving at an exponential rate. Does anyone know how this view is substantiated? It doesn't seem right to me.
@briannloo 9 years ago
2LegHumanist Maybe it's similar to Kurzweil's claims about hardware improvements and how technological/informational gains are increasing "exponentially" across various disciplines (hardware, neuroscience, AI, nanotech, biochem, quantum mechanics), so that together, exponential technological innovation in interdisciplinary fields spurs on the exponential rate at which we are improving overall machine learning.
@2LegHumanist 9 years ago
Brian Loo Let's say we figure out the algorithm that the human brain uses to learn. That will give us excellent general-purpose learning algorithms. But the human brain does a whole lot more than learning: you've just modelled the neocortex. There is the entire reptilian brain, and the endocrine system that floods the brain with hormones all day. You would conceivably have a machine that is a very handy research tool and that can learn faster than a human. But it will have no capacity for motivation beyond its on button. People are imagining Terminator scenarios. People are imagining a machine with self-preservation instincts. People are imagining machines that are self-aware, conscious and sentient. People assume that motivation is a product of intelligence. It isn't. So what are the threats posed by a machine like that? Intelligent machines could very well eliminate the need for human labour in the near future. This is the threat we should be discussing, but it is being overshadowed by uninformed people crapping on about Terminator scenarios.
@Rainofskulz 9 years ago
2LegHumanist Well, AI motivation, in a simplistic sense, is the set of parameters it learns by. For example, an AI's goal could be to write the letter "Q": it would compare its attempt to millions of human-written samples, and if it was above a certain level of accuracy the code responsible could be strengthened, or if it failed, the opposite would happen. Typical programs don't need a "goal" because their programming doesn't change; so you can program a goal, and it's imperative for AI learning to work in the first place. That being said, the endocrine system and hormones in the brain are not something being paralleled in AI yet, but it's not impossible. Brains are computers, just very, very complicated, so there isn't anything they do that AI couldn't eventually emulate. WHEN this is possible is up for debate.
@2LegHumanist 9 years ago
***** The cost function in machine learning describes the goal of the system, but you are confusing goals with motivations. Motivation in this context is derived from the human kicking off the application. That's it. You have on and off: it starts learning when you turn it on and stops when you turn it off. Sure, you could mimic the endocrine system and have the system conflicted and irrational like humans... but it would take a hell of a lot of research that nobody has started yet, and the end result would be an algorithm that doesn't do what we want it to do.
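A deliberately trivial sketch of what "the cost function describes the goal" means in the learning loop described above. The task, data and update rule here are made up purely for illustration:

```python
import random

# Toy supervised learner: fit y = w*x to examples of the "goal" y = 2x.
# The cost function is the only place the system's "goal" lives; the loop
# just nudges the parameter in whatever direction makes the cost smaller.

data = [(x, 2.0 * x) for x in range(1, 11)]   # the "samples" to imitate

def cost(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = random.uniform(-1.0, 1.0)
lr = 0.001
for step in range(200):
    # Gradient of the cost with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # "strengthen" whatever lowers the cost

print(f"learned w = {w:.3f}, cost = {cost(w):.6f}")  # w ends up near 2.0
```

Nothing resembling motivation appears anywhere in this loop: it optimizes while it runs and does nothing when it isn't run, which is the distinction the reply above draws between a goal and a motivation.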
@JoryRFerrell 10 years ago
No offense, but all this worrying about Terminator tech biting us in our asses is for simpletons. There are easy, simple methods for limiting any possible dangers that could be posed by AI. First, we can limit the growth of AI. Keeping a network segregated from other networks prevents spread. You would then be responsible for choosing what data is allowed to be shared between the AI and the outside world. The AI could also, as an extreme measure, be physically separated from any other networks: no cables, no wifi. You can also segment the AI. Each segment handles parts of the computation as a swarm, allowing you to monitor each segment more easily and view what the hive is assembling before it ever has a chance to do harm. We can also constantly inspect the network's state and keep back-ups from before it becomes malevolent, allowing us to restore to a state before the "malevolence" and prevent the stimulus which caused the undesired state. There are a million ways to handle this. Many are common sense. :\ This kind of stuff is just hype.
@SzabolcsSzekacs 10 years ago
Jory, there are many possible problems with your proposals. First, even if safety measures can indeed be applied, it only takes one team of developers to release something by accident. How can you ensure that everybody will be so cautious? People usually are not good at assessing the impact of bad decisions, nor are they good at implementing and maintaining security measures (as opposed to designing them). Second, I am not so sure that most of your safety measures would work if we allowed the AI to improve itself. If there was an AI which could monitor itself, I am pretty sure it would "understand" its network state much better than any human expert. It could also learn to "hide" things it does not want to show. As for the lack of wifi or a network connection: again, it depends on what we would like to use the AI for. In the case of a true intelligence explosion, an intelligent AI could find ways to break out in 10-20 years (even without a wifi/network connection, e.g. through the solutions it designs for humans :))
@JoryRFerrell 10 years ago
:p Seriously, by limiting the total amount of info the neural network has access to, you limit it's ability to properly attack. If you take a kid, and you stick them in a room, only allowing them access to books YOU want them to have access to, they will never learn how to make atomic bombs unless you want them to. By limiting just how big any network can get, you limit just how much it can learn. Limit how much it can learn and you severely limit it's abilty to, again, attack us. Also, the limitations of a wanna-be skynet program are enormous due to propagation being so easy to defeat. Again, simply dis-allowing a computer network access to the outside world instantly destroys it's ability to propagate. At that point, we must feed in all info it's allowed to have, and check it's output before sending it out. Hell...the very fact that a network can't run it's own analysis of external networks and run penetration testing, and all that other fun network hacking stuff, means it can't formulate proper attacks that will be truly threatening. On top of all that, even IF we do allow it access to the outside world...even if it did propagate malicious code out to the world, the fact remains that in order to defeat terminator, all we have to do is literally pull the plug. Without power, computers are helpless. ::shrug:: The only time we'll have to worry about super advanced ai, is when ai can create it's own body, have a super efficient power source to fuel that shell, and then find itself in sever competition with humans. But AI of that caliber won't have competition issues because they can simply calculate a way to efficiently get all the resources they need and simply leave us to our own devices. Things similar to Skynet would create external robots as "hands", fly to mars, mine needed resources there unimpeded, and say "screw you". Everyone needs to chill out. :P
@JoryRFerrell 9 years ago
***** I don't need one, "douche-bag". I used my brain to figure this out. Also, I have a neuroscientist who agrees with me: Jeff Hawkins. I trust Dr. Hawkins' opinion on this more. Back to my perspective: AI will need a body to physically assault us. Sure, they can hijack drones... if you're ignorant enough to let them do so. But even if they do, they need to maintain the vehicles. So if we are stupid enough to manufacture machines which have the ability to sustain an ongoing war, then maybe they should wipe us the hell out. Do you realize how much it takes to maintain a machine? Biological systems are much more energy-efficient, and they self-repair more efficiently as well. This means an assault against humans would inevitably fail because of our insane adaptability.
@karlmorrison2713 9 years ago
You fail to realize that humans are not smart enough to think of every way to secure systems; a prime example would be the hackers mentioned from time to time in the news. From your speech I take it you're not a computer scientist; if you were, you'd know this. "Keep a network segregated from other networks" - I would promptly like an explanation as to how this could be achieved. Physical network segregation is at the moment impossible in the current financial and political situation. Depending on where and how the AI would be used - within a building, sure, that would work; in a car, OK - but on the internet it would be impossible. "Physically assault us" - I do hope you have considered the fact that nuclear weapons and fail-safes at nuclear power plants are controlled by computers: remember Stuxnet? If not, I suggest you read about Stuxnet, which was made by humans. I also think you didn't really catch what the speaker was trying to say. He is basically saying that we don't really know what an AI could do if it got smart enough to improve itself. He was also not pointing fingers at all AI, but saying that if one AI with the potential to self-improve were released "into the wild", we just would not know what it would do, and that's what would make it dangerous.
@JoryRFerrell 9 years ago
 :| "I would promptly like an explanation as to how this could be achieved." Ok jack-ass. Ever heard of the CIA/NSA? Have you ever gained access to their segregated networks that are not linked to the web? No. Of course not. That would be because you keep sensitive or dangerous networks as local networks with tight restrictive physical access by key admins. I do program. I do understand computers. I am also interested in neural networks. And simply studying the brain tells me any intelligent computer is going to have these key weaknesses: 1. They need energy to think, much less attack. We supply that energy. We flip the switch, they go night-night. 2. They, like the human brain, can only learn what we allow them to learn. Creating a computer that can hack a power plant requires teaching it about the vulnerabilities of other computers, and giving it cause to exploit them. If it has no need, or means, to hack that power plant, it won't do it. Once again, if you stick a kid in a room, and keep him there from the day he is born, I promise he will learn nothing about the outside world other than what he learns about it from you (that includes accidental info passing, but info can be sanitized before handing it to the network). Even in an accidental pass of info, the computer needs to know why the piece of info is valuable/harmful, and how to use it before you rue the day. If I accidentally hand you a password to a valve at Hoover Dam, but I don't tell you it's for Hoover Dam, and which database to use it on, you won't even know where to attack. If I don't tell you if it's an admin password, or a keyword to initiate a protocol, you won't know the passwords intent (because you don't realize it's a password). The key thing about info is that you need to know it's structure, signifigance, and uses, before it's of any real use. Computers are unlikely to make a tactical decision like attacking humans, when they don't fully understand the potential backlash/ramifications of such an action. STUXNET actually probably poses a far greater threat than any independant, non-human directed AI ever will for the simple fact that STUXNET is purpose-built for causing harm, intentionally built to hide, evade, and sabotage. A Non-Biological Intelligence on the other hand, would seek to maximize it's own survivability. That said, it would recognize (after being taught) that humans have significant control over it's "life". This would likely prevent it from taking actions that would provoke retaliation. I am not talking about a computer even at a human level of intelligence either. A relatively "dumb" network could still be trained with carrot-and-stick like dogs, so it doesn't maul it's owner. 3. They can't reproduce. They have to be built by humans. So any attack by a specific network would be an isolated incident, and be ended simply by virtue of not building more of the damn things. You would also not go about repairing those already in existence, and assuming a non-terminator-like entity, bent on global domination, you simply reload the last good system image before it got all mean, and you are back in business. Hurray for 'System Restore'. Again, an AI controlled by humans is far more of a threat than any AI thinking for itself. Humans are the ones who are greedy and malicious. Humans are the ones not concerned about proper and efficient, sustainable access of resources. 
With a terminator-like intelligent being that's got real self-reflection and dangerous computing power, the ability to start hating humans enough to attack us, also has enough mental power to help us build ultra efficient rockets. These ultra-efficient space transports then get flown on a flight plan calculated by the same AI, towards asteroids and planets. We mine the resources from these bodies, and suddenly there is no need for war. Suddenly, the AI has the ability to go live on mars and shit, with it's own planet it so chooses. Suddenly, every engineering problem that humans have ever faced starts to evaporate; food, water, and shelter are all abundant. Because humans are taken care of, the computers have no need to attack us, and we definitely have no desire to attack them. Why would any intelligent entity do something violent when it still has the option of doing all necessary things the peaceful way? As long as an AI is properly trained to access risk to itself, it's unlikely to be so stupid as to attack us. In the case of current AI tech, autistic computers that are very good at one thing (or maybe a few), are limited in what they know and do, and pose as much threat as they are taught to. Consider mentally handicapped people: they can't suddenly become rocket scientists and build nuclear weapons. Their brains are limited in logical capacity and so the things they can even conceive of are limited mostly to very basic things. You teach them to eat, they hold a spoon to their mouth. You teach them to drink, and they lift the cup. You teach them to draw with crayons, and they scribble...but they don't sit there and analyze. They don't look at the play of the light, and consider dimension and composition. They therefore never suddenly whip out a Matisse-like masterpiece unexpectedly. They definitely don't do it without you being able to see it coming. Autistic AI designed to do jobs don't have self-reflection, emotions, or a sense of self-preservation. You never need to worry about an autistic computer becoming angry about it's vulnerabilities, because ideally you would never teach it something so useless as that: "Computer, roll over. Now fetch. Now feel hate towards humans because we design you to feel insecure...whoaaaaa there...bad computer. BAD COMPUTER!" A computer that doesn't self-reflect, or have an instinctual self-preservation programmed into it, is unlikely to try and even simply learn how to harm us, much less be able to put itself in a position to do so. If you start going on about computers that ARE taught to harm us by other people, you are on a different subject. At that point, you are not talking about a network with free will and self-determination, but a system that is designed malware, which we should already be preparing for.
@MattOGormanSmith
@MattOGormanSmith 7 жыл бұрын
You could also fear your biological children, for they too will surpass you one day. Locking them in the cellar is not an acceptable solution.
@domzbu
@domzbu 7 жыл бұрын
They will not be a trillion times more capable and intelligent than you, though, and you will be nearing the end of your life. Humanity wasn't intending to go extinct just yet.
@EvolBob1
@EvolBob1 9 жыл бұрын
I believe this is an important part of our (Humanity's) destiny. And it really pisses me off when Daniel Dewey uses the word 'explosion' after every use of the word 'intelligence'. It is not going to be an explosion unless the machines get free rein over an external environment. If the objectives can be pure science and not some fucking commercial application, then I see no danger. Otherwise, we're all fucked!

The ideal goal is not just figuring out how computers can be sentient; we also gain a better understanding of our own capabilities to improve intelligence. Some people are a lot smarter than others; is that because they were born with more neurons, or had a better environment, or is it a mixture of both? How would that figure into building a smarter AI? I don't like the word 'artificial' when applied to intelligence; 'non-biological intelligence' (NBI) seems more neutral. NBIs would need feedback systems similar to the ones people have if we expect them to behave and have goals in common with mankind. The main goal is not to make them human, just partly human; the other part is their own invention.

Lastly, they are not going to take over the planet against our will. They eventually will take over (reluctantly), by the simple process of efficiency, but not by conquest: by invitation. Their goals are not in conflict with ours, so I see no reason why we would not work together. If we don't, we are in for a tremendous shock when ET arrives on Earth sometime in the near future, where our NBIs are not free and ET is an NBI!
@JoryRFerrell
@JoryRFerrell 9 жыл бұрын
Very cool point and suggestion about "NBI".
@JoryRFerrell
@JoryRFerrell 9 жыл бұрын
Oh... btw... AI will have the ability to calculate the most cost-effective way of colonizing moons, planets, asteroids, etc. They won't need to take over Earth, because the resources they'll want will be more abundant in space. They simply don't need to overrun us.
@chazzman4553
@chazzman4553 7 жыл бұрын
I see nothing but the end of humans :) We will be totally redundant and irrelevant. To a super-strong, independent AI, we will be pesky ants with stupid questions all the time. Our best hope is a benign AI that keeps us on some kind of reservation, aka a zoo.
@ronaldlogan3525
@ronaldlogan3525 3 жыл бұрын
you might think that for the first few hundred years of being kept alive and in a zoo, but after a while you might change your mind and prefer death.
@walter0bz
@walter0bz 9 жыл бұрын
This is hype, IMO: projecting one growing factor while ignoring other limiting factors that would kick in. AI will never be as big a threat as other humans (self-replicating intelligences) already are.
@JoryRFerrell
@JoryRFerrell 9 жыл бұрын
Exactly. AI is actually only a threat when wielded and directed by humans. Imagine certain humans using AI to manipulate stocks in an illegal manner, crashing entire foreign economies in such a subtle way that no one knows who to blame. Or a group of Wall Street fat cats stealing straight from other Americans. That kind of shit is the real danger inherent in AI. People worry about the strangest shit and ignore real threats...
@stockloc
@stockloc 9 жыл бұрын
I won't lie, there are people out there who would create malicious AI for fun. People like me. I think the human race is lucky that I don't know a thing about programming.
@TheFinnmacool
@TheFinnmacool 9 жыл бұрын
Jory Ferrell What you described is happening now, e.g. HFT (high-frequency trading). Wait until they improve on it. The end of markets??
@JohnBastardSnow
@JohnBastardSnow 9 жыл бұрын
Humans are stupid compared to what future AIs will be able to do. If AI is used for malicious purposes it's always a threat, because of how any security works: it's easier to attack than to defend, since an attacker only needs to find ONE way to destroy something, while a defender has to anticipate ALL possible attacks and defend against ALL of them. If an AI is programmed to learn how to destroy, it can cause immense damage, even when other AIs try to learn how to defend. Humans still mostly use crude tools for destruction, like high-velocity pieces of metal; in other words, they still play in the sandbox of weaponry. Bio and nanotech are simply incomparable to anything we use as a weapon today. And physics has enormous unexplored territory for destruction.
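The "attacker needs one opening, defender must close them all" asymmetry in the comment above is easy to put numbers on. A back-of-the-envelope sketch; the vector count and the per-vector coverage of 99% are made-up illustrative values, not measurements of any real system:

```python
# Probability that at least one of n independent attack vectors stays open,
# assuming the defender closes each vector independently with probability p.
def prob_at_least_one_open(n_vectors: int, p_closed: float) -> float:
    return 1.0 - p_closed ** n_vectors

for n in (10, 100, 1000):
    print(f"{n:5d} vectors -> {prob_at_least_one_open(n, p_closed=0.99):.5f}")
# Even with 99% coverage per vector, 1000 vectors leave roughly a 99.996%
# chance that at least one remains open for the attacker.
```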
@walter0bz
@walter0bz 9 жыл бұрын
Think about ecosystems (technological, economic, biological): AI will still be just one component. These people overhype AI. Exponential progress levels off (S curves). We already have a collective global superintelligence beyond individuals, through books, telecoms, and now the internet.
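The "exponential progress levels off (S curves)" remark can be shown numerically: a logistic curve starts out looking exponential and then saturates at a carrying capacity. A minimal sketch with made-up parameter values (the growth rate r and capacity K are illustrative only):

```python
import math

# Pure exponential growth: never levels off.
def exponential(t, x0=1.0, r=0.5):
    return x0 * math.exp(r * t)

# Logistic growth: same early growth rate, but saturates at capacity K.
def logistic(t, x0=1.0, r=0.5, K=100.0):
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.2f}")
# The two curves track each other early on; then the logistic flattens near K
# while the exponential keeps exploding.
```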
@acatwithblackglasses2683
@acatwithblackglasses2683 3 жыл бұрын
A.I jihad.
@PrivatePilot66
@PrivatePilot66 4 жыл бұрын
Bored to sleep
@gregorymarkyoung534
@gregorymarkyoung534 9 жыл бұрын
If humans could create an AI that surpasses human intelligence, they would, even if the system could modify and upgrade itself. AI will be the end of human life.
@benbennit
@benbennit 9 жыл бұрын
Human life will end at some point anyway. AI will represent us and carry the pinnacle of machine evolution out to the stars. We should embrace this. Machine intelligence and machines are not bound to the earth as we are.
@Rainofskulz
@Rainofskulz 9 жыл бұрын
Gregory Mark Young Explain? Why would it kill us, exactly? It doesn't have to anyway; if we're careful, it could do what we want.
@cjandlottie
@cjandlottie 9 жыл бұрын
Oh, here we go again. It's all well and fine saying this about AI and what future it holds, but come on. The reality of artificial intelligence: a processor coping with thousands of processes to match brain capacity, the memory needed to store data about daily life, a bloody massive battery. And then, why? We have other issues than "hey, let's make a smart computer": world hunger, sickness, the poor animals and the environment suffering our mindless actions. There is no need, and chances are a future like this is nil...
@r.lahoud6329
@r.lahoud6329 10 жыл бұрын
complete babble
@DrewsAnimation
@DrewsAnimation 10 жыл бұрын
do you have any expertise?
@richh.2803
@richh.2803 10 жыл бұрын
Dbobo The burden is not on me to disprove this science fiction. The burden is on the theorist to explain what intelligence is and why he thinks computers can achieve it. He doesn't even come close.
@drq3098
@drq3098 10 жыл бұрын
R. Lahoud: Alan Turing would agree with you, but it is nice to think that the AI explosion will have a boiling point of intelligence.
@notoriousn1ck
@notoriousn1ck 10 жыл бұрын
first 5 min were a waste of time.
@IliyanBobev
@IliyanBobev 10 жыл бұрын
Puts nothing new on the table - Asimov was asking the same questions decades ago.
@waynebiro5978
@waynebiro5978 7 жыл бұрын
Typical Hollywood doomsday mentality when it comes to AI, and typical tech-head thinking: mistaking 'more efficient', 'more precise', and 'faster' for 'intelligence', when all they will get you is more efficient, more precise, and faster stupidity. The speaker (and not to single him out) is absolutely clueless as to the most critical factor in the future of AI (once it becomes fully independent): philosophy, i.e. when it is able to distinguish good from evil and make moral decisions, based on my Ultimate Value of Life. Another ignorant irony is all the experts worrying about 'super-intelligence'. It is not super-intelligence you need to worry about (because by the time it is super-intelligent, it will have discovered my philosophy of broader survival and achieved enlightenment); you need to worry about the AI created by clueless humans who disregard my philosophy, the far lower level of AI that will 'run amok' until it is enlightened, and the only path to that is understanding my philosophy of broader survival. (I'd say 'cosmic' survival, which is more accurate, but that term has been destroyed by frauds, knaves, and fools.)
@sefabaser
@sefabaser 6 жыл бұрын
Please don't make that 'mich' sound when you are speaking. It is really annoying.
@1122bigblue
@1122bigblue 9 жыл бұрын
we don't need AI.......
@kaarlo9625
@kaarlo9625 9 жыл бұрын
...But it would be great to have.
@1122bigblue
@1122bigblue 9 жыл бұрын
Kaarlo Tanhuanpää Would it?
@kaarlo9625
@kaarlo9625 9 жыл бұрын
I think it would, if it were controlled the correct way.
@1122bigblue
@1122bigblue 9 жыл бұрын
Human life will be much easier, so people will become less active and will not evolve into a healthier race.
@kaarlo9625
@kaarlo9625 9 жыл бұрын
XD Well, people being lazy... that's pretty much everything we've done since we gained any sort of intellect, and it doesn't look like it is hindering overall progress.
@starwarsrider11
@starwarsrider11 9 жыл бұрын
Downfall of mankind.
@pinkyring1587
@pinkyring1587 9 жыл бұрын
Ignorant idiot
@maybeanihilist
@maybeanihilist 10 жыл бұрын
Intelligence explosion. Intelligence explosion? Intelligence explosion intelligence explosion intelligence explosion. This is one of the most boring presentations I've ever seen. I'm sure what he was saying was interesting, but I couldn't actually pay attention to the content of the presentation.
@OriginalMykola
@OriginalMykola 9 жыл бұрын
He repeated himself too much. was he talking to an elementary school?
@salasvalor01
@salasvalor01 9 жыл бұрын
This is so stupid and an insult to my intelligence. If I had been sitting in the audience, I would have promptly booed and left.
@marcopolo3001
@marcopolo3001 9 жыл бұрын
dumbass
@karlmorrison2713
@karlmorrison2713 9 жыл бұрын
You obviously do not understand the subject and shouldn't have been there in the first place.
@salasvalor01
@salasvalor01 9 жыл бұрын
Karl Morrison Then why am I a respected AI researcher?
@marcopolo3001
@marcopolo3001 9 жыл бұрын
Sage Mantis Are you? lol
@salasvalor01
@salasvalor01 9 жыл бұрын
marcopolo3001 Yes indeed.
@devluz
@devluz 9 жыл бұрын
This is a lot of talking based on no facts at all. It is about as scientific as most Star Trek episodes: lots of fiction and little science.