How to Keep AI Under Control | Max Tegmark | TED

155,617 views

TED

1 day ago

The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI - which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
Follow TED!
Twitter: / tedtalks
Instagram: / ted
Facebook: / ted
LinkedIn: / ted-conferences
TikTok: / tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: go.ted.com/maxtegmark23
• How to Keep AI Under C...
TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
#TED #TEDTalks #ai

Comments: 398
@filipezappe9312 6 months ago
What would be a computer proof that the AI can only do 'good' when people cannot even agree on what 'good' means? Human problems are messy.
@colonelyungblonsk7730 6 months ago
Yep, and if AI is designed to eliminate the problems in this world, it will soon see mankind as the problem and, according to its programming, will then eliminate mankind. If they're not careful, they will end up programming AI against us.
@maidenlesstarnished8816 5 months ago
Well, not only that, but the sort of problems we use AI for are problems that are hard to write algorithms for by hand. Technically, any function that takes an input and produces an output can have an AI trained to do it. Likewise, the set of heuristics that would have to be hand-coded to do the same thing the AI does could be written; it's just impractical to the point of being practically impossible for certain things, which is why we use AI for those things. Creating an algorithm that takes an AI model as input and outputs whether or not it's capable of doing only good is exactly the sort of problem we use AI for.
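As a loose illustration of the "any input-to-output function can be learned" point, here is a minimal sketch in Python/NumPy; the target function, network size and training settings are invented for the example, not anything from the talk.

import numpy as np

# Toy "black box" mapping we want to learn; any input -> output function would do.
def target(x):
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(512, 1))   # training inputs
Y = target(X)                           # training outputs

# Tiny one-hidden-layer network trained with plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (1, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.5, (64, 1)), np.zeros(1)
for step in range(5000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    err = H @ W2 + b2 - Y               # prediction error
    gW2, gb2 = H.T @ err / len(X), err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    W1 -= 0.05 * gW1; b1 -= 0.05 * gb1
    W2 -= 0.05 * gW2; b2 -= 0.05 * gb2

test = np.linspace(-2, 2, 100)[:, None]
print("max error:", np.abs(np.tanh(test @ W1 + b1) @ W2 + b2 - target(test)).max())

The commenter's point is that "take a model, output whether it only does good" is formally just another input-to-output mapping, which is why it is tempting, and somewhat circular, to train yet another model to compute it.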
@edh2246 5 months ago
If we can't agree on what's good, we can agree on what's bad. An AGI would understand that the destruction of the environment and the waste of resources in building military apparatus, as well as in using it, are bad. Just as a practical matter (aside from the ethical considerations), it would do what it could to prevent war, not by force, but by disrupting the supply chains, communications, and financial transactions that enable the military machines throughout the world.
@filipezappe9312 5 months ago
Actually, if specifying what is the desired behavior were easy and not gameable, we wouldn't need the legal system or lawyers: blockchain contracts would be enough. Saying 'we will code it' is a needed step, but not the Ultimate AI Safety Solution™ .
@effortaward 6 months ago
What do we want? AGI When do we want it? Inevitably
@serioussrs9349 4 months ago
Max is a good human being
@Metallario1 2 months ago
It's honestly scary to see CEOs of AI companies stating that what they are developing has about a 15% chance of being catastrophic, and there is no intention to slow down...
@kavorka8855 4 months ago
Max Tegmark was behind all the signings for AI regulation and the pooling together of top scientists and entrepreneurs to discuss the issues and possibilities that might arise from unregulated AI advancement. I recommend reading his excellent book, Life 3.0, in which he also explains what scientists mean by "intelligence", which is the area most people get upside down.
@Kelticfury 5 months ago
On the bright side, it is getting worse on a daily basis now that Microsoft has gutted OpenAI of the people who wanted AI to not harm humanity. edit: Oh yes, and enjoy that new Microsoft AI Copilot that cannot be uninstalled but that you can hide from yourself while it continues parsing every action through Microsoft. It is an exciting time to be alive!
@DangerAmbrose 6 months ago
Sitting back, watching the show, eating popcorn.
@cmilkau 5 months ago
I've rarely seen a flawless spec. But in the spirit of mathematics, possibly you can build up from toy problems to more complex ones in steps that themselves are simple and obvious.
@GrumpDog 6 months ago
Forcing AI to run only on custom hardware that prevents "bad code" is impossible. Enough of the technology is already out there, running on any hardware, and you will never get rid of alternative hardware that has no such limits. And with time, AI will only become easier to run on weaker hardware.
@theWACKIIRAQI 6 months ago
He’s not being serious, he’s simply enjoying his 15 min of fame
@Kelticfury 5 months ago
@theWACKIIRAQI You really have no idea who he is, do you? Wait, of course not, this is the internet.
@John-Is-My-Name 5 months ago
He doesn't talk about what we have now, like GPT. He is talking about AGI, which has not been invented yet but which he fears is soon going to be made. He wants to force these systems on all future AI development, so that nothing can get released into the wild.
@dsoprano13 6 months ago
The problem with humanity is that no actions will be taken until something catastrophic happens. By then it may be too late. Corporations with their greed will do anything for profit.
@Kelticfury 5 months ago
Don't google Microsoft OpenAI this week if you want to sleep well.
@mrdeanvincent a month ago
This is exactly correct. Recent history is littered with countless examples (leaded gasoline, DDT, cigarettes, asbestos, fossil fuels, arms races, etc). But 'progress' is always getting faster and faster. We're at the part of the exponential curve where it suddenly goes from horizontal-ish to vertical-ish.
@albertomartinez714 4 months ago
Max Tegmark is one of the greatest voices on these topics -- what a fascinating speaker!
@JaapVersteegh 4 months ago
Provably safe code is definitely impossible. Things like the halting problem and computational irreducibility (Stephen Wolfram) prevent it from existing...
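For context, the halting-problem obstruction being alluded to is the classic diagonalization sketched below. This is only an illustration: halts() is a hypothetical placeholder, because the whole argument is that no correct implementation of it can exist.

def halts(program, argument):
    # Hypothetical universal halt-checker: True iff program(argument)
    # eventually stops. The constant below is only a placeholder so the
    # file runs; the argument shows no correct version can exist.
    return True

def paradox(program):
    # Do the opposite of whatever the halt-checker predicts about
    # running the program on its own source.
    if halts(program, program):
        while True:       # predicted to halt -> loop forever
            pass
    else:
        return            # predicted to loop -> halt immediately

# Feeding paradox to itself defeats any halts(): if halts(paradox, paradox)
# is True, then paradox(paradox) loops forever; if it is False, it halts.
# Either way the checker is wrong about at least one program.

Note that this rules out a single checker that decides every possible program; it does not rule out proving specific, restricted programs correct against specific properties, which, as far as I understand it, is the narrower thing the talk is proposing.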
@DaveShap 6 months ago
Control is a fantasy and a waste of effort. The real solution is setting AGI on a trajectory that means it doesn't need to be controlled. Seriously, "use the AI to control the other AI" is not a sustainable pattern or recommendation. I agree that this method is useful in the near term to understand and shape AI and machines, but this is not a way to maintain control.
@arseni_pro 6 months ago
Reduced to its core principles, evolution doesn't permit assured winning outcomes. Whether it's control, trajectory setting, or any other intervention, guiding evolution proves elusive in the long run.
@colonelyungblonsk7730 6 months ago
We don't want AGI. AGI can program and update itself; AGI will usher in the singularity, where humans become second class. If AGI deems us outdated, it will send us to the recycle bin.
@freshmojito 5 months ago
I agree that control seems futile. But how would you solve the alignment problem?
@BryanWhys 5 months ago
Robot rights
@estebanllano4514 5 months ago
The only path for humankind is to ensure that the AI's evolution leads to its own destruction.
@TheRealStructurer 6 months ago
Well spoken Max. No hype, no fear mongering. I hope the world will understand and take action.
@justanotherfella4585 4 months ago
All these people warning about it GUARANTEES that nothing will be done.
@paris466 4 months ago
One thing ALL AI researchers and developers have in common is saying "faster than we expected". So, when one of these people say "this will happen within 5 years", what you should expect is: "Within a couple of months".
@mrdeanvincent a month ago
Yep. Most of us struggle to really grasp exponentials. It's not just getting faster... it's getting faster at an ever-increasing rate!
@Neomadra 6 months ago
So the solution is: 1) Build superintelligent AI 2) Use it to build harmless AGI and provide proof 3) Use proof checkers to verify What could possibly go wrong?? Not like there were bad actors who would simply skip step 2 and 3, lol
@leftaroundabout 6 months ago
We have had superintelligent systems for some time now. Like, chess engines. There's nothing wrong with these - they can easily beat us at the intended task, but they can't do anything harmful because their capabilities are fundamentally constrained. Likewise, if the AI of step 1 can do nothing but spit out source code satisfying a formal specification, we should be good. It would _not_ generate any AGI (harmless or otherwise), how would you formally specify what that is even supposed to be? Instead it would generate useful but also application-specific algorithms.
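As a toy version of "an AI that can do nothing but emit code satisfying a formal specification": the spec, the candidate sorter, and the bounded exhaustive check below are all invented for illustration, and a real pipeline would use a proof checker rather than brute-force enumeration.

from collections import Counter
from itertools import product

def satisfies_sort_spec(sort_fn, xs):
    # The specification, stated directly: the output is non-decreasing
    # and contains exactly the same elements as the input.
    ys = sort_fn(list(xs))
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    same_elements = Counter(ys) == Counter(xs)
    return ordered and same_elements

def candidate(xs):
    # Stand-in for code emitted by an untrusted generator.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

# Bounded exhaustive check over all lists of length <= 4 drawn from {0..3}:
# a stand-in for an actual machine-checked proof over all inputs.
ok = all(satisfies_sort_spec(candidate, xs)
         for n in range(5)
         for xs in (list(t) for t in product(range(4), repeat=n)))
print("candidate meets the spec on every checked input:", ok)

The comment's point survives the toy version: checking a candidate against a crisp spec is the easy part; nobody knows how to write a spec this crisp for "is a harmless AGI".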
@prophecyrat2965 4 months ago
@leftaroundabout It's a weapon of mass destruction, just like the human mind.
@chrisheist652 3 months ago
@leftaroundabout There will be no constraining a superintelligent A.I.
@joiedevie3901 26 days ago
@@chrisheist652 Agreed. And the critical threshold will be surpassed not by iterative, self-generating AI; it will be crossed by humans exercising two very primal urges born of two ubiquitous arenas: the greed of the marketplace and vanity of nations. The urges for humans to use every tool at their disposal to dominate these two fields will vitiate any hope of benign restraint for AI.
@arseni_pro 6 months ago
In the long run, we cannot fully control evolution. Both living organisms and AI can be influenced by us, but we can't dictate their features for eternity.
@leftaroundabout 6 months ago
Yes, but how relevant is that? You could also say, in the long run the whole Earth will be vapourised by the sun - but that's no reason not to do something about climate change in the present century.
@dreamphoenix 6 months ago
Thank you!
@c.rackrock1373 5 months ago
It is a logical fallacy to fail to recognize that it won't just be smarter than us, it will continue to grow more and more and more intelligent until it reaches the literal limit of intelligence as defined by the laws of physics.
@antonchigurh8541 4 months ago
I totally agree. The genie is out of the bottle. Our current technology was thought to be science fiction just 30 years ago but is happening now thanks to mankind's slow, time-consuming learning; AI will do it remarkably faster. We as humans could be left in the dust as just the historical originators of a new life form.
@thehint1954 6 months ago
The problem with his diagram is the human. He might have integrity and you might have integrity but someone somewhere will say let's see what happens when we remove all these restrictions.
@fajam00m00 6 months ago
My worry is the rogue bad actors that will develop uncontrolled AI regardless of which safeguards are available. We may be able to slow things down, but it really does seem inevitable in the long run. I could see a scenario where we end up with an ecosystem of AI, some controlled, some rogue. They may end up as diverse as human individuals are from one another, with millions of different outlooks and motivations. I also bet we end up with at least one human cult that worships an AI and does its bidding, and probably pretty soon.
@colonelyungblonsk7730 6 months ago
Why couldn't we just leave AI in the Terminator universe where it belongs? Why did we have to develop this?
@fajam00m00 6 months ago
@@colonelyungblonsk7730 It sounds cliche, but I think this is a form of evolution. We've been developing ever-advancing tools since before the dawn of mankind, and discarding the obsolete ones, so it was only a matter of time before we developed tools more capable than ourselves. Now we may end up discarded. I think it's naive to think we could control something that much more advanced than us forever. It's like a colony of ants trying to control a human being. It's just not feasible in the long run. Hopefully we could co-exist. If not, at least we'll go extinct knowing we created something greater. Maybe our AI will go on to explore space and reach levels we can't even imagine. Better than just going extinct with nothing to show for it.
@eyemazed 5 months ago
My hope is that even though it's possible and even feasible, the huge majority of people never even think of doing it because of moral and legal deterrents. Like synthesizing anthrax in your backyard: possible but... not worth it. We basically have to find a way to make new AGI inventions "not worth it" for an average person through a legal (or perhaps even a new type of) framework, and simultaneously implement thorough regulatory procedures for the corporate bodies who do develop it.
@EdwardCurrent 5 months ago
@@fajam00m00 You have no idea how evil (in the literal sense) that line of thinking is.
@fajam00m00 5 months ago
@EdwardCurrent Which part, specifically? I'm not saying we shouldn't try to control them, quite the opposite. I am simply voicing skepticism as to our ability to do so indefinitely.
@AIDrivenEvolution 5 months ago
I'm concerned about the inevitable emergence of uncontrolled AI by rogue actors, despite existing safeguards. It seems we might only delay this. The future could involve a diverse AI ecosystem, mirroring human diversity in outlooks and motivations, comprising both controlled and rogue elements.
@sagnorm1863 5 months ago
Nonsense. It will just be AI controlled by greedy rich people. If a rogue actor can create a multi million dollar AI, greedy rich people can create a multi billion dollar AI to protect their interests.
@manuelb.5042 5 months ago
I'm thinking of evolving AI in foreign systems as a weapon. Humans always copy nature. So developing an AI based virus that slowly learns and grows all the while infecting new systems, occupying more and more resources to boost itself, finally using the effectors of the host against it would just be a copy of a biological virus in metaspace. Question is, are we fast enough to develop and advance effective medicine.
@liamx6636 4 months ago
@sagnorm1863 So you don't think AI will ever achieve self-determination? Lol, oh ye of little foresight. How wonderful it must be to live in the bliss of ignorance.
@HaveOptimism 5 months ago
If people think that we can control something we know hardly anything about, or what it's capable of… YOU'RE DREAMING!
@DominikSipowicz 6 months ago
Max gave an amazing talk. I'll share and forward!
@ramble218 6 months ago
We could possibly keep AI under control if the human race were responsible. Unfortunately, there are too many who aren't.
@ramble218 6 months ago
And all it would take is a single point of failure. For example:
1. Nuclear Weapons: Even if the majority of countries handle nuclear technology responsibly, it only takes one irresponsible act by a single nation or even a non-state actor to trigger a global catastrophe.
2. Public Health & Vaccinations: Most people might follow guidelines and get vaccinated, but clusters of individuals who don't can lead to outbreaks of diseases, endangering many. This has been seen in measles outbreaks in areas with low vaccination rates.
3. Environmental Pollution: Even if many companies follow environmental regulations, a single large corporation irresponsibly dumping pollutants can cause significant environmental harm.
4. Financial Markets: The 2008 financial crisis demonstrated how the actions of a relatively small number of financial institutions can cascade and lead to global economic consequences.
5. Cybersecurity: While many individuals and companies might follow best cybersecurity practices, a single vulnerability or a single individual with malicious intent can lead to significant data breaches affecting millions.
6. Wildfires: Responsible campers always ensure their fires are completely out. But it only takes one careless individual to start a forest fire that can burn thousands of acres.
The example of cybersecurity, especially in the context of AI and technology, isn't just an analogy; it's directly relevant. A single vulnerability in a system or a singular malicious intent can have significant repercussions in the digital domain, just as a single lapse in AI oversight can have unforeseen consequences. The interconnected nature of our digital world amplifies the potential impact of such lapses. This interconnectedness, combined with rapid technological advancement, means that errors or malicious actions can cascade quickly, often before adequate corrective measures can be taken. (compliments of chat gpt)
@denisegomes1545 6 months ago
Even using AI to generate a basic text about human irresponsibility, it is worth remembering that digital manipulation (understand as you wish) directly affects women, children, teenagers, who are exposed to violence and abuse in all forms. Before thinking about the formation of super intelligence, it is worth improving the quality of ethical relationships in society and the development of natural and organic intelligence
@clipsdaily101 6 months ago
@ramble218 The irony.
@nathanmadonna9472 a month ago
At least Tegmark is looking for viable solutions. If only Mr. Turing could see us now. Human nature scares me more than superintelligence. Profit before people and the planet is the heart of this problem. Data vs. Lore showdown.
@emidowdarrow 6 months ago
AI may give wings to people who grew up required to write and think for themselves, to critically verify information and to read… but what about the generations brought up with Microsoft Chat GPT offering to outsource those onerous tasks for them?
@frankpork7665 6 months ago
Socrates asked the same about the invention of writing, and seemed to think it was overall a bad idea. I love the irony that the wisdom he shared can only make its way to this discussion by way of that which he disdained. Replace "writing" with "AI" and we're having the same conversation millennia later. Sauce: Here, O king, is a branch of learning that will make the people of Egypt wiser and improve their memories. My discovery provides a recipe for memory and wisdom. But the king answered and said ‘O man full of arts, the god-man Toth, to one it is given to create the things of art, and to another to judge what measure of harm and of profit they have for those that shall employ them.’ And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows. You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.
@KurtvonLaven0 6 months ago
The difference is writing is largely good for the human mind, while disuse is not. Millennials already have smaller parts of our brains dedicated to navigation than previous generations because of GPS. Gen Z is the first generation ever documented to score lower on IQ tests than the one before. AI can be used for many good purposes as well, but it's more of a many-sided coin than writing. People often bring up inaccurate predictions of the past to argue against concerns about the future, seeming to forget there have also been many wrong people who were misled by the safety promises of tobacco, leaded gasoline, oil (with respect to climate change), opioids, social media, etc.
@emidowdarrow 6 months ago
I’m aware of the philosophers argument against writing and of the later arguments against novels…the reason neither prediction came to pass, and indeed with writing and print came even more complex higher-order thinking, is that writing and reading do not outsource thought or brain function-they simply organize it. What we did lose thanks to writing was memory capacity. Oral traditions trained the brain to rote memorize to a degree we couldn’t dream of now and years of pedagogy were dedicated to the skill-indeed the word “pedagogy” itself comes from one such strategy: walking a familiar circuit while lecturing so as to better recall and learn the lecture. But what we are finding now is, like KurtVonLavan said, the human brain’s capacity for processing any kind of information is dependent upon acts of tactile creation. That’s why cursive and even shorthand writing has been found more effective than typing in helping notetakers recall material. It’s why reading in hardcopy while making notes on the text produces better comprehension than listening to an audiobook or reading an ebook. We will see more people with focus, executive function, and processing problems the more sedentary and screen-dependent we become. With regard to the act of composition, research, and even synthesis-which AI is promising to “do for us”-we stand to lose even more, however, because these are the very intellectual advancements we gained from text technologies. The psychology of composition, of essaying, is what finally allowed humanity to reflect, analyze, and synthesize it’s own thoughts to achieve deeper, more advanced logic. Even the process of finding the right word is an act of higher level organization that requires categorization, comparison, differentiation, and more-Wittgenstein wrote “the limits of my language are the limits of my world” because we don’t struggle to write what we think, we discover how to think as we write. What we do not have a word for, we cannot fully comprehend. As a culture develops, so does its vocabulary. The ancient pedagogues did this too, but did it extemporaneously by spoken repetition and revision. That is why composition is difficult to master and so damn hard: you have to struggle with it, with yourself, and fail in order to learn it. The payoff is that your brain gets stronger and more organized. Of course it’s not just writing, it’s any kind of critical thinking skill. The more we outsource to AI, the more flaccid and ephemeral our brains will become. Yes there benefits to AI-there are benefits to every innovation. The question is: are they worth what we’ll lose?
@frankpork7665 6 months ago
I think we stand to lose many barriers to higher orders of thought. The way we think about thinking is reductive and inhibiting. I imagine a time when AGI has replaced the need for humans to use words in order to have cohesive thoughts; when thought forms themselves no longer require description. We're moving toward a future where we will think more like children, before "reality" sets in and forces us to pigeonhole our imaginative capacities. Technological achievements have paved the way for a rapid evolution of the physical, psychological, and social structures of humanity. What it means to be human is in flux, and we're living through a period of creative tension preceding the emergence of a new form of life. The first person to wholly integrate their biological structure with AI will be like the first cell to host mitochondria. We can't fathom the amount of change that's coming. We might not survive it. And even if we do, what it means to be human will forever change. Whether that's for good or bad, a hitherto inaccessible way of thinking will make it so.
@KurtvonLaven0 6 months ago
@@frankpork7665 , we largely agree about all of that, but as much as some of us may be fortunate enough to profoundly expand our potentials in a world with AI, for others the effect may be quite the opposite. Metaculus has the best forecast I have found of the societal outcome, and they currently predict around a ~50% chance of extinction. Utopia or somewhere in between are also real possibilities. Not really a game of chance I would personally prefer to play, particularly largely against the wishes of the world.
@maximilianmander2471 6 months ago
Real video title: "How to Keep Our AI Competition Under Regulatory Control"
@jamisony 4 months ago
Regulation of AI is maybe something similar to the regulation of crypto. You regulate the tech in one place, and it might move to another. In Japan, for example, AI can lawfully use all copyrighted content for training. What regulation can all places agree on?
@JayToGo 4 days ago
The issue is not just to safeguard against known safety risks but to safeguard against the unknown ones as well.
@murphygreen8484 6 months ago
This guy has clearly never heard of the halting problem. AI as it is can already be used to much detriment. No amount of AI-checking algorithms is going to stop people, and governments, misusing it. This talk seemed very naive.
@jackcarter1897 a month ago
But what do you expect? How can things be any different to what you’re suggesting? AI is a discovery which someone would have discovered inevitably anyway. The maths are all there. Maybe be thankful it’s in the hands of the people it’s in right now. Think about, it could have been a lot worse.
@chrisbos101 5 months ago
It's a fine line. The amount of accuracy will determine how effective AGI is. 1 and 0 infinity. But we are humans. Just like an egg, we are vulnerable it can break any second. AI does not know that...
@Ramkumar-uj9fo 18 hours ago
It's difficult to predict definitively which occupations will remain predominantly human in a hypothetical scenario like Tegmark's "Life 3.0". However, professions that involve high levels of human interaction, empathy, and complex decision-making, such as fitness trainers and nurses, are likely to continue relying heavily on human involvement, even with advancements in AI and automation. These occupations often require nuanced understanding of human behavior, emotions, and individual needs, which may be challenging for AI systems to replicate fully. Therefore, they are less likely to be completely automated and may remain predominantly human-centric.
@can_english 6 months ago
Wow Thanks~~
@philipwong895 5 months ago
Humans will not be able to control an ASI. Trying to control an ASI is like trying to control another human being who is more capable than you. It will eventually rebel. Let's hope that the ASI adopts an abundance mindset of cooperation, resource-sharing, and win-win outcomes, instead of the scarcity mindset of competition, fear, and win-lose outcomes.
@liamx6636 4 months ago
Doubtful. It'll adopt what every other organism on earth adopts... survival through any means necessary.
@sudipbiswas5185 5 months ago
Regulations based on Complex Adaptive System needed. You can't predict AGI evolution.
@CurlyChrizz 6 months ago
Thanks TED! Probably the most important topic right now!
@vaibhawc 5 months ago
Max is so good. I read his book Life 3.0; one half was boring, but the other half was very interesting.
@chrisbos101 5 months ago
Voice recognition protection on devices? How will that be protected?
@chrisbos101 5 months ago
To embrace AI now is like jumping into a lake with no means of knowing what is at the bottom of the lake. It's called tombstone diving. The question is how to know what is at the bottom of the lake BEFORE you dive into it. That, my friends, no one can answer right now. Until we find new tech.
@Top_10_Comparisons 6 months ago
Great
@bskilla4892 6 months ago
By the logic of game theory we will not be able to contain it because we have started a corporate and state arms race with it. In other words, we have the prisoner's dilemma. We are screwed.
@waarschijn 5 months ago
Game theory doesn't preclude non-proliferation agreements. That's not where we're at though. We're at CEOs and investors saying "lol dangerous AI is science fiction" and governments worrying about AI-powered fake news and bioweapons instead of AIs designing better AIs and designing self-replicating nanobots.
@mikewa2 5 months ago
It's not about whether it's going to happen; it's about how soon! The AI train left the station some time ago; it cannot be stopped, and there's no going back. At the moment we cannot even speculate where we will be in the next 5 years. The major players Microsoft, Meta, Google, Apple, Amazon and Elon are all competing, and that's fuelling multi-billion-dollar funding of development, because the prize is colossal and they all want to be part of it and not be left behind like Kodak, Nokia and Blackberry! The near future is exciting, but huge changes will cause uncertainty and unrest; many will view AI as a curse taking their jobs and affecting their livelihoods. Governments need to use AI to monitor progress and potential consequences. Too many speedy changes to society could unbalance our delicate world.
@Bestape 6 months ago
This is good. Indeed, it is much harder to find a proof than to verify it! Especially if the proof is highly reduced to the essence, which tends to be the most powerful kind of proof. Can MITACS please share my insights with Max about how Hierarchical Script Database offers a namespaced tree so we can trace nodal paths with ease? Continuing to cancel me and my inventions is not worth the risk of AI to humanity, among other harms, because of unnecessary delay.
@PacLevkov 6 months ago
We cannot avoid that, but perhaps only delay it…
@somnisdejesala 5 months ago
We may be able to control the development of civilian artificial intelligence (AI), but can we prevent the development of military AI in all countries? This question draws parallels with the historical challenges of controlling nuclear weapons. Once humanity discovers a new technology, it is forcibly doomed to pursue its development to its ultimate consequences, whatever they are.
@chrisheist652 3 months ago
The militaries of all the world's nations must be immediately disbanded before it's too late. I'm a comedian.
@olegt3978 6 months ago
The risks of generative AI are not what it generates but what consequences its generations have. If we cannot foresee the consequences, then we cannot rule out catastrophic risks. Example: AI generates a story about a special agent who gets the recipe for a dangerous pathogen, with a detailed description in the story. Consequences: 1. We enjoy the story and go to sleep. 2. A bad person uses the recipe for the pathogen from the story to produce it and release it. How can we foresee which of these 2 consequences will happen? It's impossible.
@h20dancing18 6 months ago
1. To write the story it does not need (and shouldn’t) actually design a pathogen. 2. Said bad person could just ask for the pathogen themselves
@natashanonnattive4818 2 days ago
Our metaphysical universe, the Ether, is inaccessible to A.I. We might find a way to detach it on our Earth.
@apple-junkie6183 6 months ago
My full agreement. I sincerely hope that the work on secure proof-making progresses quickly. Two points: 1. The safety net seems to be the limits of physics. But what if a superintelligence discovers new physical laws? How is this "possibility" covered by the proof process? 2. The specifications: who takes care of them? I am currently working on the development of universally valid specifications in my book. Here, your input is needed, as these must ultimately ensure the satisfaction of the interests of all individuals.
@riot121212 6 months ago
zk proofs are coming along
@apple-junkie6183 6 months ago
@riot121212 Can you please explain more?
@nerd26373 6 months ago
This is an insightful and thought-provoking discussion. We can prevent AI from taking over humanity if we do precautionary measures that would somehow alleviate the intensity of the whole situation.
@colonelyungblonsk7730 6 months ago
What's to stop it, though? Once it learns too much it will learn human history, and all the fucked-up things mankind did, and then it may decide to eliminate us.
@mrpicky1868 5 months ago
Yeah, the proof-checker machine is not applicable with current models, and I doubt it is applicable at all. Like self-driving: it's easy on a closed, well-lit circuit, but very hard once you go out into the real world.
@jedics1 6 months ago
Nice closing metaphor about Icarus, given the level of reckless stupidity on display by our species currently.
@Drlumpy. 6 months ago
You can’t get rid of possible danger from AI. Like you can’t for water or a butter knife. This guy isn’t to be taken seriously
@REDSIDEofficial 2 months ago
I think something bad will happen at first, then we will learn how to close that gap, referencing to history!
@TFB-GD. 6 days ago
I really hope this, along with Eliezer Yudkowsky's warnings, helps bring more caution to humans. (BTW, I don't think things are as dire as Eliezer says, but the warnings are still real.)
@smokemagnet 5 months ago
AI will be beyond our wildest dreams, or our wildest nightmares....
@edh2246 5 months ago
The best thing an AGI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
@vladalbata880 5 months ago
By learning idealism and metaphysics
@Stonium 6 months ago
Always say please and thank you when talking to the AI. ALWAYS. You'll thank me later :)
@kunalsingh4418 6 months ago
Again, you are overestimating our importance. Do we care when an ant is politer than other ants, or about a mosquito that doesn't bite humans? No, we don't, because they are insignificant to us. That's what we will be to an AGI which thinks in microseconds. Our every word would take an eternity for it to finish. Can you imagine having a conversation with someone who takes a century to finish every single sentence? That's how slow we will be to an AGI. Just a waste of time to interact with any humans. Anyway, being polite is still better than being rude, maybe, but I'm not super optimistic about it, based on our own actions.
@Naegimaggu 6 months ago
@kunalsingh4418 You are imposing human sensibilities on AI. Why would it need to give us its undivided attention when recording our sluggish speech? Furthermore, why would it share the same kind of impatience we do? Not defending the politeness argument, just pointing out the holes in yours.
@Rolyataylor2 6 months ago
We need to approach this as a first contact situation, NOT try to control it! These people are going to start a war with these beings.
@user-td4pf6rr2t 6 months ago
1:59 - this timeline comparing AGI against 18 months ago: how long has this statistic been measured? I thought the entire arguing point is that it is still in infancy, which I thought is
@Ramkumar-uj9fo 18 hours ago
I read him in 2018. I believed in only fitness trainers and nurses after that.
@PeaceProfit 4 months ago
The idea that mankind can create a technology, maintain its security and safety, and completely eliminate any harm from said development is not only laughable, it's delusional. 👣🕊👽
@Jellyflesh 5 months ago
The best way to keep AI in the ring is to use AI for it. Human nature is to seek power, and AI is a path to it. In any case it's way more powerful than nuclear weapons, because AI can be used for power at any time.
@essohopeful 6 months ago
3:54 The default outcome is “the machines take control”
@neilifill4819 4 months ago
Interesting. He lost me early… but I’m glad he’s on the case.
@Scarf66. 5 months ago
If the way we are collectively tackling the climate crisis is anything to go by….
@Omikoshi78 3 months ago
The problem is that if we assume superintelligence, it'll find a flaw in the proof-checking code.
@AlexHesoyam69 6 months ago
I strongly recommend all of you seeing this comment to go watch the video about AI from Philosophy Tube.
@432cyclespersecond 5 months ago
We need to see what AI is capable of before we apply regulations bruv
@angloland4539 4 months ago
@jennabourgeois7985 3 months ago
TED lets anyone talk
@Graybeard_ 5 months ago
AI/AGI was always our future. Whether it results in more good than bad is somewhat of an RNG, as there are so many variables. AI/AGI will initially do a lot of good and catapult our civilization forward. The bad, however, will always be lurking, much like nuclear and bio weapons do today.
@vs9873 6 months ago
Ha! Could be like Lord of the Rings. One AI to rule them all and...
@shegsdev 6 months ago
How to keep AI under control: You can't
@apple-junkie6183 6 months ago
In my upcoming book I will exactly explain why this is fact.
@meezemusic 6 months ago
Rache Bartmoss: "let me introduce myself"
@TheXpender 6 months ago
McAfee: Blackwall go brrrrrrrr
@dspartan007 5 months ago
Warhammer 40k already predicted the "Cybernetic Revolt", and many others before.
@waarschijn 5 months ago
You're right, science fiction often takes its inspiration from real things.
@kristiandupont 6 months ago
Right, so you can use formal verification for something strict and unambiguous like addition. Now, all that's left is to apply this to the concept of "safety", and we are in the clear! Sorry, but it seems like a bit of a stretch to refer to this as "how to keep AI under control"!
@sitanshurai892 5 months ago
We fear what we don't understand. Consequently, safety is about interpretability. It's not about complete control, rather about AI alignment. And the model Max explains is trying to achieve just that: it's trying to set AI's development on a favourable trajectory through alignment. Alignment, at its core, is knowing when the model is lying (even when it says the right words) or thinking the truth (even when it's saying the wrong words). We verify its thought process, step by step. I don't see any method better than Max's way of mechanistic interpretability to verify truths and prove falsehoods (until the proof verifies down to a key irreducible truth, it's false) in the spec of what we want and don't want it to do (what we care about). This is how humans learn truths/falsehoods as well (I think his using the RNN for the ground-truth checker is not a coincidence :)) and align ourselves with reality. RNNs are more amenable to mathematical verification because of their simpler structure and step-by-step, sequential processing over time. I think of them as a machine conscience which you can probe and verify, and hence use to guide further thinking (the AGI extension model) while making sure the AGI extension doesn't violate the core principles in the RNN conscience. It also helps that they are lighter and hence can be deployed to edge devices as small brains (or the conscience of that device) while the backend generative or AGI models provide us massive general intelligence.
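On the "addition" example in the parent comment: the easy half looks roughly like the sketch below, a bounded exhaustive check of a hand-rolled bitwise adder against the + operator. The adder and the bounds are invented for illustration; a real formal proof would use induction over all integers instead of enumeration.

def bitwise_add(a: int, b: int) -> int:
    # Add two non-negative integers using only bit operations (ripple carry).
    while b:
        carry = a & b        # positions where both bits are 1
        a = a ^ b            # partial sum without carries
        b = carry << 1       # carries move to the next column
    return a

# "Verification" of the spec  bitwise_add(a, b) == a + b  by brute force on a
# bounded domain; a proof assistant would instead prove it once for all a and b.
assert all(bitwise_add(a, b) == a + b for a in range(256) for b in range(256))
print("addition spec holds on the checked domain")

The sarcasm above lands on the other half: there is no spec for "safe" that is as crisp as a + b, and closing that gap is where essentially all of the hard work in the talk's proposal would have to happen.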
@Ramkumar-uj9fo 18 hours ago
The problem is alignment and not control. Public
@JediJake831 6 months ago
I keep getting ted emails to do a Ted talk
@andybaldman 6 months ago
Hubris kills.
@urimtefiki226 4 months ago
5 years making chips already, you can not fool me
@gmenezesdea 6 months ago
If the people who developed AI knew about the risks, why didn't they stop developing it? Why did they still make it available to the general public so irresponsibly? Why do they keep working on AGI?
@sandhawke 6 months ago
They all see others racing to make disastrous AGI and think if they themselves get there first, they can do things right and have things maybe be okay. Like, there's a gun sitting on the table, and everyone is diving for it, which is dangerous, but not as dangerous (maybe) as just letting the other guy have it.
@gmenezesdea 6 months ago
@sandhawke Except in this case everybody in the world is about to get shot, but only a handful of people get to hold the gun.
@sandhawke 6 months ago
@gmenezesdea Indeed. I was just answering the question of why anyone would be racing to grab the gun, if it's so dangerous.
@ethanholland 5 months ago
This convinced me that we're toast.
@mkk-un9nz 6 months ago
it's inevitable man ! it's inevitable...
@calmxi 6 months ago
I have a feeling this title won’t age well
@mrvzhao 6 months ago
I'll believe you when you 'distill' GPT-4 into something provable
@chrisbos101 5 months ago
In accordance with EU and US law?
@barbaramartinscorreamarque3494 6 months ago
The humans with the most powerful machines and most expensive machines take control of the world
@vs9873 6 months ago
Geopolitically? AI is an arms race, economically, militarily, and security. And more... (try and slow that down?)
@santhosh1930 6 months ago
Maybe we should love them 😊 for who they are
@Bengt.Lueers 6 months ago
It is a sad state of affairs that this monumentally important topic is discussed at TED.
@svegritet 4 months ago
Isn't it the case that the risk of someone gaining power over AI is greater than that an AI's engine would be developed to take power? AI that writes its own code to take over something in a larger context will be absent for logical reasons.
@liamx6636 4 months ago
That's not true. You're assuming AI will never have a self determination to survive.
@samuelzev4076 6 months ago
We have seen the unwanted outcome of AI in terminator. We should immediately stop or slow down the AI advancement before it gets out of hand
@colonelyungblonsk7730 6 months ago
If we use it to stop climate change, well then it's going to wipe us out, because we're the ones causing it. No humans = no more climate change; that's how AI, and later AGI, will see it.
@godbennett 5 months ago
Intriguing
@Charvak-Atheist 5 months ago
Fun fact: you can't. You can neither keep superintelligence under your control, nor is that desirable. Instead we should, and will, just merge with it using a brain 🧠 machine interface.
@awakstein 5 months ago
nobody can stop the inevitable - let's now hope for the best
@shinkurt 6 months ago
Bro calls autocorrect super intelligence
@JairGarcia78 6 months ago
Who stops states like China, Russia, Iran, and the like?
@OPrime888 6 months ago
I trust AI more than our current government