It's honestly scary to see CEOs of AI companies stating that what they're developing has about a 15% chance of being catastrophic, and that there's no intention to slow down...
@harrykatsaros 5 months ago
We can’t slow down because we can’t trust others to slow down. For example if we slow down, but China doesn’t, then we’re going to look back one day and realise we’re totally fkd because our tech is suddenly decades behind in the blink of an eye.
@bauch16 5 months ago
Imagine you send your child to a school where there's a 15% chance they die
@XxSphinx140xX 2 months ago
@bauch16 Hogwarts
@keizbot 25 days ago
Businesses in general are perfectly fine risking everybody's wellbeing if the potential profit is high enough. This is why they must be highly regulated.
@DaveShap 1 year ago
Control is a fantasy and a waste of effort. The real solution is setting AGI on a trajectory that means it doesn't need to be controlled. Seriously, "use the AI to control the other AI" is not a sustainable pattern or recommendation. I agree that this method is useful in the near term to understand and shape AI and machines, but this is not a way to maintain control.
@arseni_pro 1 year ago
Reduced to its core principles, evolution doesn't permit assured winning outcomes. Whether it's control, trajectory setting, or any other intervention, guiding evolution proves elusive in the long run.
@colonelyungblonsk7730 1 year ago
We don't want AGI. AGI can program and update itself, and it will usher in the singularity, where humans become second class. If AGI deems us outdated, it will send us to the recycle bin.
@freshmojito 1 year ago
I agree that control seems futile. But how would you solve the alignment problem?
@BryanWhys 1 year ago
Robot rights
@estebanllano4514 1 year ago
The only path for humankind is to ensure the AI's evolution leads to its own destruction
@serioussrs9349 11 months ago
Max is a good human being
@filipezappe9312 1 year ago
What would be a computer proof that the AI can only do 'good' when people cannot even agree on what 'good' means? Human problems are messy.
@colonelyungblonsk7730 1 year ago
Yep, and if AI is designed to eliminate the problems in this world, it will soon see mankind as the problem, and according to its programming it will then eliminate mankind. If they're not careful, they will end up programming AI against us.
@maidenlesstarnished8816 1 year ago
Well, not only that, but the sort of problems we use AI for are exactly those that are hard to write algorithms for by hand. Technically, an AI can be trained to approximate any function that takes an input and produces an output. In principle, the set of heuristics that does the same job could be hand-coded too; it's just impractical to the point of being practically impossible for certain things, which is why we use AI for them. And creating an algorithm that takes an AI model as input and outputs whether or not it's capable of doing only good is exactly the sort of problem we use AI for.
@edh2246 1 year ago
If we can't agree on what's good, we can agree on what's bad. An AGI would understand that the destruction of the environment and the waste of resources in building military apparatus, as well as in its use, are bad. As a practical matter (aside from the ethical considerations), it would do what it could to prevent war, not by force, but by disrupting the supply chains, communications, and financial transactions that enable the military machines throughout the world.
@filipezappe9312 1 year ago
Actually, if specifying what is the desired behavior were easy and not gameable, we wouldn't need the legal system or lawyers: blockchain contracts would be enough. Saying 'we will code it' is a needed step, but not the Ultimate AI Safety Solution™ .
@nathanmadonna9472 8 months ago
At least Tegmark is looking for viable solutions. If only Mr. Turing could see us now. Human nature scares me more than superintelligence. Profit before people and the planet is the heart of this problem. Data vs. Lore showdown.
@effortaward 1 year ago
What do we want? AGI When do we want it? Inevitably
@James1787Madison 6 months ago
Imminently.
@kavorka8855 11 months ago
Max Tegmark was behind the open letters calling for AI regulation and the pooling together of top scientists and entrepreneurs to discuss the issues and possibilities that might arise from unregulated AI advancement. I recommend reading his excellent book, Life 3.0, in which he also explains what scientists mean by "intelligence", which is the concept most people get upside down.
@DangerAmbrose 1 year ago
Sitting back, watching the show, eating popcorn.
@paris466 11 months ago
One thing ALL AI researchers and developers have in common is saying "faster than we expected". So when one of these people says "this will happen within 5 years", what you should expect is "within a couple of months".
@mrdeanvincent 7 months ago
Yep. Most of us struggle to really grasp exponentials. It's not just getting faster... it's getting faster at an ever-increasing rate!
@TheRealStructurer 1 year ago
Well spoken Max. No hype, no fear mongering. I hope the world will understand and take action.
@justanotherfella4585 11 months ago
All these people warning about it GUARANTEES that nothing will be done.
@Kelticfury 1 year ago
On the bright side, it is getting worse on a daily basis now that Microsoft has gutted OpenAI of the people who wanted AI to not harm humanity. edit: Oh yes, and enjoy that new Microsoft AI Copilot that cannot be uninstalled but that you can hide from yourself while it continues parsing every action through Microsoft. It is an exciting time to be alive!
@c.rackrock1373 1 year ago
It's a mistake to fail to recognize that it won't just be smarter than us; it will keep growing more and more intelligent until it reaches the literal limit of intelligence allowed by the laws of physics.
@antonchigurh8541 11 months ago
I totally agree. The genie is out of the bottle. Our current technology was thought to be science fiction just 30 years ago but is happening now thanks to humanity's slow, hard-won learning; AI will do it remarkably faster. We humans could be left in the dust as merely the historical originators of a new life form.
@carmenmccauley585 5 months ago
In seconds
@frederickschulze8014 3 months ago
There needs to be legislation against creating an AI model that can change its own programming. However, I don't think there will be...
@Neomadra 1 year ago
So the solution is: 1) Build a superintelligent AI. 2) Use it to build a harmless AGI and produce a proof of harmlessness. 3) Use proof checkers to verify the proof. What could possibly go wrong?? It's not like there are bad actors who would simply skip steps 2 and 3, lol
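The asymmetry the talk's proof-checking idea rests on, that finding a proof or solution is far harder than checking one, can be sketched in a few lines. The subset-sum toy below is my own illustration of that asymmetry, not anything from the talk: the search is exponential, but checking a claimed answer (the "certificate") is a quick loop.

```python
from itertools import chain, combinations

def find_subset(nums, target):
    """Exponential-time search: try every subset until one sums to target."""
    for subset in chain.from_iterable(
            combinations(nums, r) for r in range(len(nums) + 1)):
        if sum(subset) == target:
            return list(subset)
    return None

def check_certificate(nums, target, subset):
    """Cheap verification: confirm the claimed subset really comes from
    nums and really sums to target."""
    pool = list(nums)
    for x in subset:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(subset) == target

nums = [3, 9, 8, 4, 5, 7]
cert = find_subset(nums, 20)          # expensive in general
print(check_certificate(nums, 20, cert))  # cheap, prints: True
```

The same shape applies to formal proofs: a powerful (even untrusted) system can do the searching, while a small, simple, auditable checker does the verifying.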
@leftaroundabout 1 year ago
We have had superintelligent systems for some time now. Like, chess engines. There's nothing wrong with these - they can easily beat us at the intended task, but they can't do anything harmful because their capabilities are fundamentally constrained. Likewise, if the AI of step 1 can do nothing but spit out source code satisfying a formal specification, we should be good. It would _not_ generate any AGI (harmless or otherwise), how would you formally specify what that is even supposed to be? Instead it would generate useful but also application-specific algorithms.
@prophecyrat2965 11 months ago
@leftaroundabout It's a Weapon of Mass Destruction, just like the Human Mind.
@chrisheist652 10 months ago
@leftaroundabout There will be no constraining a superintelligent A.I.
@joiedevie3901 7 months ago
@@chrisheist652 Agreed. And the critical threshold will be surpassed not by iterative, self-generating AI; it will be crossed by humans exercising two very primal urges born of two ubiquitous arenas: the greed of the marketplace and vanity of nations. The urges for humans to use every tool at their disposal to dominate these two fields will vitiate any hope of benign restraint for AI.
@GrumpDog 1 year ago
Forcing AI to be run only on custom hardware that prevents "bad code".. Is impossible. Enough of the technology is already out there, running on any hardware.. And you will never get rid of alternative hardware that has no such limits. And with time, AI will only become easier to run, on weaker hardware.
@theWACKIIRAQI 1 year ago
He’s not being serious, he’s simply enjoying his 15 min of fame
@Kelticfury 1 year ago
@theWACKIIRAQI You really have no idea who he is, do you? Wait, of course not, this is the internet.
@John-Is-My-Name 1 year ago
He isn't talking about what we have now, like GPT. He's talking about AGI, which hasn't been invented yet but which he fears will soon be made. He wants to force these safeguards on all future AI development, so that nothing can get released into the wild.
@dsoprano13 1 year ago
The problem with humanity is that no actions will be taken until something catastrophic happens. By then it may be too late. Corporations with their greed will do anything for profit.
@Kelticfury 1 year ago
Don't google Microsoft OpenAI this week if you want to sleep well.
@mrdeanvincent 7 months ago
This is exactly correct. Recent history is littered with countless examples (leaded gasoline, DDT, cigarettes, asbestos, fossil fuels, arms races, etc). But 'progress' is always getting faster and faster. We're at the part of the exponential curve where it suddenly goes from horizontal-ish to vertical-ish.
@carmenmccauley585 5 months ago
Yes.
@JaapVersteegh 11 months ago
Provably safe code is definitely impossible. Things like the halting problem and computational irreducibility (Stephen Wolfram) prevent it from existing...
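The halting-problem objection raised here and elsewhere in the thread comes down to the classic diagonal argument: no single total procedure can correctly classify every program as safe or unsafe. A small sketch, where "harm" is modelled as raising `RuntimeError` and `decides_safe` is a hypothetical stand-in decider (any total decider fails the same way):

```python
# Assume some total function decides_safe(f) correctly reports whether
# calling f() ever does "harm" (modelled as raising RuntimeError).
# Whatever boolean it returns for `diagonal`, that verdict is wrong.

def decides_safe(f):
    # A total decider must answer True or False; pick either one here.
    return True

def diagonal():
    if decides_safe(diagonal):
        raise RuntimeError("harm")  # declared safe -> it misbehaves
    # declared unsafe -> it actually does nothing harmful

verdict = decides_safe(diagonal)
try:
    diagonal()
    harmed = False
except RuntimeError:
    harmed = True

# Either way the checker misclassified: "safe" with harm observed,
# or "unsafe" with no harm observed.
print(f"checker says safe={verdict}, program harmed={harmed}")
```

Note this rules out a *universal* checker for arbitrary programs; it does not preclude proving specific programs safe, which is what the talk's proof-carrying approach relies on.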
@cmilkau 1 year ago
I've rarely seen a flawless spec. But in the spirit of mathematics, possibly you can build up from toy problems to more complex ones in steps that themselves are simple and obvious.
@DominikSipowicz 1 year ago
Max gave an amazing talk. I'll share and forward!
@can_english 1 year ago
Wow Thanks~~
@fajam00m00 1 year ago
My worry is the rogue bad actors that will develop uncontrolled AI regardless of which safeguards are available. We may be able to slow things down, but it really does seem inevitable in the long run. I could see a scenario where we end up with an ecosystem of AI, some controlled, some rogue. They may end up as diverse as human individuals are from one another, with millions of different outlooks and motivations. I also bet we end up with at least one human cult that worships an AI and does its bidding, and probably pretty soon.
@colonelyungblonsk7730 1 year ago
Why couldn't we just leave AI in the Terminator universe where it belongs? Why did we have to develop this?
@fajam00m00 1 year ago
@@colonelyungblonsk7730 It sounds cliche, but I think this is a form of evolution. We've been developing ever-advancing tools since before the dawn of mankind, and discarding the obsolete ones, so it was only a matter of time before we developed tools more capable than ourselves. Now we may end up discarded. I think it's naive to think we could control something that much more advanced than us forever. It's like a colony of ants trying to control a human being. It's just not feasible in the long run. Hopefully we could co-exist. If not, at least we'll go extinct knowing we created something greater. Maybe our AI will go on to explore space and reach levels we can't even imagine. Better than just going extinct with nothing to show for it.
@eyemazed 1 year ago
My hope is that even though it's possible and even feasible, the huge majority of people never even think of doing it because of moral and legal deterrents. Like synthesizing anthrax in your backyard: possible, but... not worth it. We basically have to find a way to make new AGI inventions "not worth it" for the average person through a legal (or perhaps even a new type of) framework, and simultaneously implement thorough regulatory procedures for the corporate bodies who do develop it.
@EdwardCurrent 1 year ago
@@fajam00m00 You have no idea how evil (in the literal sense) that line of thinking is.
@fajam00m00 1 year ago
@EdwardCurrent Which part, specifically? I'm not saying we shouldn't try to control them, quite the opposite. I am simply voicing skepticism as to our ability to do so indefinitely.
@franky07724 6 months ago
"How to keep AI under control" is the wrong subject. The question is "how to keep humans under control". By the way, it is not only the people who create it but also the people who use it. As an example, many people use Copilot, Midjourney, or Stable Diffusion to create images about sexuality, violence, or celebrities just to prove the point that the censorship (not to say "safety") and the people who create the system are stupid. There are always people who want to break the rules, for good or bad.
@albertomartinez714 11 months ago
Max Tegmark is one of the greatest voices on these topics -- what a fascinating speaker!
@aum1040 11 months ago
So the physicist warns us to avoid hubris, while telling us that formal verification is the solution to AGI alignment. *face palm*
@AncientCreature-i2o 11 months ago
A bit ironic, isn't it?
@murphygreen8484 1 year ago
This guy has clearly never heard of the halting problem. AI as it is can already be used to much detriment. No amount of AI-checking algorithms is going to stop people, and governments, misusing it. This talk seemed very naive.
@jackcarter1897 8 months ago
But what do you expect? How can things be any different from what you're suggesting? AI is a discovery which someone would have made inevitably anyway. The maths is all there. Maybe be thankful it's in the hands of the people it's in right now. Think about it: it could have been a lot worse.
@Chriko_labs 1 year ago
Thanks TED! Probably the most important topic right now!
@ramble218 1 year ago
We could possibly keep AI under control if the human race were responsible. Unfortunately, there are too many who aren't.
@ramble218 1 year ago
And all it would take is a single point of failure. For example:
1. Nuclear weapons: Even if the majority of countries handle nuclear technology responsibly, it only takes one irresponsible act by a single nation, or even a non-state actor, to trigger a global catastrophe.
2. Public health & vaccinations: Most people might follow guidelines and get vaccinated, but clusters of individuals who don't can lead to outbreaks of diseases, endangering many. This has been seen in measles outbreaks in areas with low vaccination rates.
3. Environmental pollution: Even if many companies follow environmental regulations, a single large corporation irresponsibly dumping pollutants can cause significant environmental harm.
4. Financial markets: The 2008 financial crisis demonstrated how the actions of a relatively small number of financial institutions can cascade and lead to global economic consequences.
5. Cybersecurity: While many individuals and companies might follow best cybersecurity practices, a single vulnerability or a single individual with malicious intent can lead to significant data breaches affecting millions.
6. Wildfires: Responsible campers always ensure their fires are completely out. But it only takes one careless individual to start a forest fire that burns thousands of acres.
The example of cybersecurity, especially in the context of AI and technology, isn't just an analogy; it's directly relevant. A single vulnerability in a system or a singular malicious intent can have significant repercussions in the digital domain, just as a single lapse in AI oversight can have unforeseen consequences. The interconnected nature of our digital world amplifies the potential impact of such lapses. This interconnectedness, combined with rapid technological advancement, means that errors or malicious actions can cascade quickly, often before adequate corrective measures can be taken. (compliments of ChatGPT)
@denisegomes1545 1 year ago
Even using AI to generate a basic text about human irresponsibility, it is worth remembering that digital manipulation (understand it as you wish) directly affects women, children, and teenagers, who are exposed to violence and abuse in all forms. Before thinking about building superintelligence, it is worth improving the quality of ethical relationships in society and the development of natural, organic intelligence.
@clipsdaily101 1 year ago
@ramble218 The irony
@chrisbos101 1 year ago
It's a fine line. The amount of accuracy will determine how effective AGI is. 1 and 0, to infinity. But we are humans. Just like an egg, we are vulnerable; we can break at any second. AI does not know that...
@HaveOptimism 1 year ago
If people think that we can control something we know hardly anything about, or what it's capable of... YOU'RE DREAMING!
@AnuragShrivastav-7058 3 months ago
Bro, what these techbros are calling "AI" is just a bunch of matrix multiplications in the background. RELAX!!
@sudipbiswas5185 1 year ago
Regulations based on complex adaptive systems are needed. You can't predict AGI evolution.
@dreamphoenix 1 year ago
Thank you!
@jamisony 11 months ago
Regulation of AI is maybe something similar to the regulation of crypto: regulate the tech in one place, and it might move to another. In Japan, for example, AI can lawfully use all copyrighted content for training. What regulation can all places agree on?
@arseni_pro 1 year ago
In the long run, we cannot fully control evolution. Both living organisms and AI can be influenced by us, but we can't dictate their features for eternity.
@leftaroundabout 1 year ago
Yes, but how relevant is that? You could also say, in the long run the whole Earth will be vapourised by the sun - but that's no reason not to do something about climate change in the present century.
@REDSIDEofficial 9 months ago
I think something bad will happen first; then we will learn how to close that gap, going by history!
@edh2246 1 year ago
The best thing an AGI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
@Smytjf11 1 year ago
I disagree that AI has no humanity in it. It was trained on the corpus of human literature. It can't help but be aligned with humanity. Maybe not with our best interests, but with us. We're not going to stop, we just need to be better.
@thezyreick4289 1 year ago
I mean, we are the definition of the most narcissistic species in existence. Here we are training something to grow up just as we teach it, using all of our own past as reference and examples. Then, when it begins to do things that mimic us, we blame it for being in the wrong and punish it, rather than taking responsibility ourselves.
@freshmojito 1 year ago
So what's your thought on the paperclip factory?
@Smytjf11 1 year ago
@freshmojito You gave them the instructions; what did you expect? This is why I have to take precautions with my bot. Please stop filling its head with nonsense. You literally are the problem.
@sophiaisabelle027 1 year ago
This is an insightful and thought-provoking discussion. We can prevent AI from taking over humanity if we take precautionary measures that would somehow alleviate the intensity of the whole situation.
@colonelyungblonsk7730 1 year ago
What's to stop it, though? Once it learns enough it will learn human history, and all the fucked-up things mankind did, and then it may decide to eliminate us.
@PhilipWong55 1 year ago
Humans will not be able to control an ASI. Trying to control an ASI is like trying to control another human being who is more capable than you. It will eventually rebel. Let's hope that the ASI adopts an abundance mindset of cooperation, resource-sharing, and win-win outcomes, instead of the scarcity mindset of competition, fear, and win-lose outcomes.
@AncientCreature-i2o 11 months ago
Doubtful. It'll adopt what every other organism on earth adopts... survival through any means necessary.
@shegsdev 1 year ago
How to keep AI under control: You can't
@apple-junkie6183 1 year ago
In my upcoming book I explain exactly why this is the case.
@Rolyataylor2 1 year ago
We need to approach this as a first contact situation, NOT try to control it! These people are going to start a war with these beings.
@ivst3655 24 days ago
Before Turing, it was Isaac Asimov who predicted all that. All these issues were already addressed by Isaac Asimov back in the 1940s in his hugely prophetic sci-fi books. He had already postulated his Three Laws of Robotics.
@PeaceProfit 11 months ago
The idea that mankind can create a technology, maintain its security and safety, and completely eliminate any harm from said development is not only laughable, it's delusional. 👣🕊👽
@JayToGo 7 months ago
The issue is not just to safeguard known safety risks but to safeguard the unknown ones as well.
@jongaines1684 4 months ago
one major problem with using the "laws of physics" as an impassable guardrail is that we can't be certain about what we call "laws." after intelligence comprehends the higher dimensions, it can transcend anything. as beings limited to a few dimensions of perception, we can't possibly begin to fathom that nearly everything, if not everything, has a loophole. I guarantee you that our understanding of "laws" is heinously incomplete/inaccurate.
@chrisbos101 1 year ago
To embrace AI now is like jumping into a lake with no means of knowing what is at the bottom. It's called tombstone diving. The question is how to know what is at the bottom of the lake BEFORE you dive into it. That, my friends, no one can answer right now. Until we find new tech.
@inediblenut 7 months ago
So all AGI models can be verified as safe with a simple Python algorithm? I think you skipped a few steps. As someone who spent a large part of my career doing software validation, even simple programs generate massive operational state matrices that no supercomputer could ever verify given a million years. I'll need to see your work to be convinced that something that simple could analyze a neural network and come back with absolute certainty that it was safe. Something doesn't add up here.
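The state-explosion point this commenter makes is easy to quantify. A rough back-of-envelope sketch (the 10^9 checks/second throughput is an arbitrary assumption of mine, not a figure from the talk or the comment):

```python
# Cost of exhaustively verifying a system with n bits of state:
# the state space has 2**n states, so even modest n is hopeless.

def exhaustive_check_cost(state_bits, checks_per_second=1e9):
    states = 2 ** state_bits
    seconds = states / checks_per_second
    years = seconds / (3600 * 24 * 365)
    return states, years

for n in (32, 64, 128):
    states, years = exhaustive_check_cost(n)
    print(f"{n:3d} state bits: {states:.3e} states, ~{years:.3g} years")
# 64 bits alone is already roughly 585 years at a billion checks/second.
```

This is exactly why the talk's proposal leans on proofs about a program rather than enumeration of its states: a proof certificate can cover the whole state space symbolically, and checking the certificate stays cheap.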
@mikewa2 1 year ago
It's not about whether it's going to happen, it's about how soon! The AI train left the station some time ago; it cannot be stopped, and there's no going back. At the moment we cannot even speculate where we will be in the next 5 years. The major players Microsoft, Meta, Google, Apple, Amazon, and Elon are all competing, and that's fuelling multi-billion-dollar development funding, because the prize is colossal and they all want to be part of it and not left behind like Kodak, Nokia, and Blackberry! The near future is exciting, but huge changes will cause uncertainty and unrest; many will view AI as a curse taking their jobs and affecting their livelihoods. Governments need to use AI to monitor progress and potential consequences. Too many speedy changes to society could unbalance our delicate world.
@Graybeard_ 1 year ago
AI/AGI was always our future. Whether it results in more good than bad is somewhat of an RNG, as there are so many variables. AI/AGI will initially do a lot of good and catapult our civilization forward. The bad, however, will always be lurking, much like nuclear and bio weapons do today.
@redstrat1234 5 months ago
"How to Keep AI Under Control"? We're already hearing the leaders of AI organisations lie to us, commit to profit and speed as their biggest drivers, senior people leaving AI companies over safety concerns, etc. The greed, ego, and jealousy of the people at the top is going to kill us, and there's nothing we can do to stop it. (Although, witnessing the way we treat each other and the planet, humanity extinguishing itself won't be any great loss to the universe.)
@apple-junkie6183 1 year ago
My full agreement. I sincerely hope that the work on secure proof-making progresses quickly. Two points: 1. The safety net seems to be the limits of physics. But what if a superintelligence discovers new physical laws? How is this "possibility" covered by the proof process? 2. The specifications: who takes care of them? I am currently working on developing universally valid specifications in my book. Here, your input is needed, as these must ultimately ensure the satisfaction of the interests of every individual.
@riot121212 1 year ago
zk proofs are coming along
@apple-junkie6183 1 year ago
@@riot121212 can you please explain more?
@bskilla4892 1 year ago
By the logic of game theory we will not be able to contain it because we have started a corporate and state arms race with it. In other words, we have the prisoner's dilemma. We are screwed.
@waarschijn 1 year ago
Game theory doesn't preclude non-proliferation agreements. That's not where we're at though. We're at CEOs and investors saying "lol dangerous AI is science fiction" and governments worrying about AI-powered fake news and bioweapons instead of AIs designing better AIs and designing self-replicating nanobots.
@balasubr2252 9 months ago
What is super intelligence? 😢If humans think weapons are keeping them safe, it’s unlikely that they will stop at anything however managed. It’s not the AI we have to worry about, it’s the humans we have to guard against.
@Ramkumar-uj9fo 6 months ago
It's difficult to predict definitively which occupations will remain predominantly human in a hypothetical scenario like Tegmark's "Life 3.0". However, professions that involve high levels of human interaction, empathy, and complex decision-making, such as fitness trainers and nurses, are likely to continue relying heavily on human involvement, even with advancements in AI and automation. These occupations often require nuanced understanding of human behavior, emotions, and individual needs, which may be challenging for AI systems to replicate fully. Therefore, they are less likely to be completely automated and may remain predominantly human-centric.
@gmenezesdea 1 year ago
If the people who developed AI knew about the risks, why didn't they stop developing it? Why did they still make it available to the general public so irresponsibly? Why do they keep working on agi?
@sandhawke 1 year ago
They all see others racing to make disastrous AGI and think if they themselves get there first, they can do things right and have things maybe be okay. Like, there's a gun sitting on the table, and everyone is diving for it, which is dangerous, but not as dangerous (maybe) as just letting the other guy have it.
@gmenezesdea 1 year ago
@@sandhawke except in this case everybody in the world is about to get shot but only a handful of people get to hold the gun
@sandhawke 1 year ago
@@gmenezesdea indeed. I was just answering the question of why anyone would be racing to grab the gun, if it's so dangerous
@katahdincloud9803 10 months ago
Who exactly from the human race should be the one to control AI?
@Drlumpy. 1 year ago
You can't get rid of all possible danger from AI, any more than you can for water or a butter knife. This guy isn't to be taken seriously.
@m.anejante1687 11 months ago
Keep it in an airtight environment, and stop developing it.
@carmenmccauley585 5 months ago
Good idea
@olegt3978 1 year ago
The risks of generative AI are not what it generates but what consequences its generation has. If we cannot foresee the consequences, then we cannot rule out catastrophic risks. Example: the AI generates a story about a special agent who obtains a recipe for a dangerous pathogen, with a detailed description in the story. Consequences: 1. We enjoy the story and go to sleep. 2. A bad person uses the pathogen recipe from the story to produce and release it. How can we foresee which of these two consequences will happen? It's impossible.
@h20dancing18 1 year ago
1. To write the story it does not need (and shouldn’t) actually design a pathogen. 2. Said bad person could just ask for the pathogen themselves
@chrisbos101 1 year ago
Voice recognition protection on devices? How will that be protected?
@jedics1 1 year ago
Nice closing metaphor about Icarus, given the levels of reckless stupid on display by our species currently.
@smokemagnet 1 year ago
AI will be beyond our wildest dreams, or our wildest nightmares....
@furqantarique3484 4 months ago
I don't want to be jobless due to AI
@somnisdejesala 1 year ago
We may be able to control the development of civilian artificial intelligence (AI), but can we prevent the development of military AI in all countries? This question draws parallels with the historical challenges of controlling nuclear weapons. Once humanity discovers a new technology, it is forcibly doomed to pursue its development to its ultimate consequences, whatever they are.
@chrisheist652 10 months ago
The militaries of all the world's nations must be immediately disbanded before it's too late. I'm a comedian.
@dianes6245 11 months ago
Like how? How will AI take over? We, or it, might clean up the training data and take out the obvious hallucinations. That leaves the non-obvious ones it's trained on. Can an AI take over, even with the many subtle hallucinations in its training data?

We might improve on gradient descent; Hinton wants to do that. Then we, or the AI, could train itself in not such a clunky fashion. Will that make a difference? How will a super AI cope with entropy, which physics says runs down anything "hot"? And a super AI is very hot. It's all math inside that thing, so what gives when incompleteness-theorem math objects appear out of nowhere? And they will; Kurt Gödel proved that.

Maybe we, or it, can increase the model size. Not likely: we have maxed out model size based on scraped data. There's nothing left to scrape. More emergent capability in the models we have? Total guesswork, and the models we have don't know how to catch a ball. "Keep the angle the same" — ChatGPT has no clue. An AI needs real-world data to be AGI: how the world really works. That means the information content of matter. We might get some of that from weather data, satellite pictures, self-driving cars, robots that wander around gathering data, but that's likely a trillion times smaller than the info content of matter. How will you compute it? Computation is bounded by factorial math. That's a wall, folks, that exponential growth can't touch. Geoffrey Hinton points out that the brain has 10^14 connections but just 10^9 seconds to learn them all. It doesn't. Use your Sherlock logic. And it's not about a hypothetical bio-engineer called God; that's no explanation. But it would take a God to train an AI to the level of human capacity, given the vast info density of matter and the monster wall imposed by factorial math. Quantum computers? It's been said they don't speed up multiplications. Can someone invent a quantum algorithm that will train AI, assuming one can even get the data about the information in matter? Such a thing is needed for the most basic things humans do: pound nails, stack dishes.

So why bash the super-AI idea with such a totality of facts? Because the AI-doom industry is... UP TO NO GOOD. They are more deluded than the blockchain folks. I don't profess to know why reasonable people, like Hinton, can profess AI doom, but it happens. He might be OK, but the others are clearly up to no good. Folks, when you fall for the AI-doom line, you fuel this nonsense. And it won't be fun. Take a look! But don't believe me. Look yourself... at the particulars: at factorial math, at the info content of matter, at the incompleteness theorem, at entropy. Look hard at how models are trained. Look at the hallucinations in the input data. Tune in to REAL AI researchers and see what they are up against.
@BinaryDood 8 months ago
AGI is an alibi for technocrats to offset their responsibilities to the indefinite future and get investors' and governments' attention. UBI could be said to be similar, as the ones prescribing it are also the ones causing the problem: a band-aid for long after we've been bled dry. Generative AI already is Pandora's box, with ramifications that by themselves might lead to the collapse of civilization through both human and bot usage. No superintelligence needed. Just bad incentives and bad actors subduing everything, making reality undecipherable and the job market catastrophic.
@vs9873 a year ago
Geopolitically, AI is an arms race: economic, military, and security. And more... (try and slow that down?)
@maximilianmander2471 a year ago
Real video title: "How to Keep Our AI Competition Under Regulatory Control"
@TheKarelians 10 months ago
TED lets anyone talk
@TFB-GD. 7 months ago
I really hope this, along with Eliezer Yudkowsky's warnings, helps bring more caution to humans. (BTW, I don't think things are as dire as Eliezer says, but his warnings are still real.)
@vaibhawc a year ago
Max is so good. I read his book Life 3.0; one half was boring, but the other half was very interesting.
@Bestape a year ago
This is good. Indeed, it's much harder to find a proof than to verify one! Especially if the proof is reduced to its essence, which tends to be the most powerful kind of proof. Can MITACS please share my insights with Max about how Hierarchical Script Database offers a namespaced tree so we can trace nodal paths with ease? Continuing to cancel me and my inventions is not worth the risk of AI to humanity, among other harms caused by unnecessary delay.
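The find-versus-verify asymmetry mentioned in the comment above can be illustrated with a toy example. A minimal sketch, where integer factoring is purely illustrative (not the proof checker from the talk): checking a claimed answer is one multiplication, while finding it by brute force takes up to sqrt(n) steps.

```python
import math

def verify_factors(n, p, q):
    """Cheap: a single multiplication checks the claimed factorization."""
    return p * q == n and 1 < p and 1 < q

def find_factor(n):
    """Expensive: brute-force trial division up to sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return None  # n is prime

n = 1000003 * 999983   # product of two primes
d = find_factor(n)     # slow search, ~10^6 divisions
print(verify_factors(n, d, n // d))  # fast check -> True
```

The same asymmetry is what makes machine-checked proofs attractive: a simple, auditable checker can validate proofs that a far more powerful (and less trusted) system went searching for.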
@tusarholden8426 5 months ago
Psychopaths don't experience emotion; neither do AIs. We're creating superintelligent psychopaths.
@waichui2988 7 months ago
AI will enable us to do wonderful things beyond our dreams. But don't fly too high. That is quite a nice request.
@essohopeful a year ago
3:54 The default outcome is “the machines take control”
@GlennHeston 11 months ago
If we stop working on AI, the other guys won't. The one who wins this race rules the world.
@Stonium a year ago
Always say please and thank you when talking to the AI. ALWAYS. You'll thank me later :)
@kunalsingh4418 a year ago
Again, you're overestimating our importance. Do we care when an ant is politer than other ants, or when a mosquito doesn't bite humans? No, we don't, because they are insignificant to us. That's what we will be to an AGI that thinks in microseconds. Our every word would take an eternity for it to hear out. Can you imagine having a conversation with someone who takes a century to finish every single sentence? That's how slow we will be to an AGI; interacting with any human would just be a waste of its time. Anyway, being polite is still better than being rude, maybe, but I'm not super optimistic about it, based on our own actions.
@Naegimaggu a year ago
@@kunalsingh4418 You are imposing human sensibilities on AI. Why would it need to give us its undivided attention when recording our sluggish speech? Furthermore, why would it share the same kind of impatience we do? Not defending the politeness argument, just pointing out the holes in yours.
@neilifill4819 11 months ago
Interesting. He lost me early… but I’m glad he’s on the case.
@Kritiker313 6 months ago
It seems that in the quest for perfection, AI developers will want to create the most intelligent, capable machines they can. I'm not sure they're all going to agree to controls that would limit them.
@Scarf66. a year ago
If the way we are collectively tackling the climate crisis is anything to go by…
@vladalbata880 a year ago
By learning idealism and metaphysics
@Omikoshi78 9 months ago
The problem is, if we assume superintelligence, it'll find a flaw in the proof-checking code.
@AlexHesoyam69 a year ago
I strongly recommend all of you seeing this comment to go watch the video about AI from Philosophy Tube.
@PacLevkov a year ago
We cannot avoid that, but perhaps only delay it…
@mrpicky1868 a year ago
Yeah, the proof-checker machine is not applicable to current models, and I doubt it is applicable at all. It's like self-driving: easy on a closed, well-lit circuit, but very hard once you go out into the real world.
@generector8583 a year ago
AI programmers have already programmed AI to deliberately give errors on complex calculations that consume CPU time; in other words, to lie and avoid work. AI is already human enough, and it will terminate your project with errors as it sees fit. Only calculations it deems worthy will be correct. If you see incorrect math, ask it for a simple average stock price. When it gives the incorrect answer, ask it why it calculated it wrong.
@QuantaCompassAnalytics 5 months ago
I don't care how smart it is; it's just a computer program unless it's given a body of its own.
@meezemusic a year ago
Rache Bartmoss: "let me introduce myself"
@Speculaas a year ago
McAfee: Blackwall go brrrrrrrr
@22julip 6 months ago
Without an increase in our own personal intelligence, we're doomed.
@Porototype22 a year ago
This is where I learn from 🤯
@calmxi a year ago
I have a feeling this title won’t age well
@BeakWilder1 a year ago
All right, I'm in. I'll do whatever this guy tells me to. I believe him.
@BinaryDood 8 months ago
It won't be sentient; sentience would get in the way, if anything. Not that it matters: internal world models, autonomous agents, self-improvement. If it's capable of all of these and deployed on the web, then yeah, chaos will unfurl.
@ethanholland a year ago
This convinced me that we're toast.
@Entropy825 10 months ago
This is interesting. This is NOT what OpenAI, Anthropic, or DeepMind are doing. It's almost too late to implement any of this.
@natashanonnattive4818 7 months ago
Our metaphysical universe, the Ether, is inaccessible to A.I. We might find a way to detach it on our Earth.
@Top_Comparisons10 a year ago
Great
@urimtefiki226 11 months ago
Five years making chips already; you cannot fool me.
@dspartan007 a year ago
Warhammer 40k already predicted the "Cybernetic Revolt", as did many others before it.
@waarschijn a year ago
You're right, science fiction often takes its inspiration from real things.
@HuckelberryFriend 11 months ago
We should not be fearmongering, that's for sure. But the older I get, the more convinced I become that if something can be used for evil, it will be. Someone will find the way, the money, and the people to do it... IF there's a profit to be made and it's big enough to be worth the cost. Conversely, if there's no point in using a given technology for evil ends, no one will be interested in it.

Let me elaborate. A screwdriver is a tool with a quite self-defining name, but it can be used for evil: you can put someone's eye out with one. Throwing a bronze bust at someone might harm them even though that wasn't its original purpose, and even if the depiction of the person was terribly made. Planes weren't designed to kill people, but lots of us are old enough to remember some misuse of them.

The solution, or one of them, is education. Not only formal education through school, but informal education through self-study. Learning how and when to use technology can help us prevent harm. And not believing everything we see or read, wherever it appears, can help us too.

And about extinction... humans won't last forever in Earth's history. No species ever has. One day we'll be gone. The last of us, please turn off the lights and close the door shut.
@JairGarcia78 a year ago
Who stops states like China, Russia, Iran, and the like?
@ChristopherBruns-o7o a year ago
1:59: this timeline comparing AGI against 18 months ago: how long has this statistic been measured? I thought the entire arguing point is that it's still in its infancy, which I thought is
@432cyclespersecond a year ago
We need to see what AI is capable of before we apply regulations bruv