I just feel that this is a Pandora's box that can't be closed. Nefarious use vs. good use. The two ideologies will compete against each other.
@Rnankn6 ай бұрын
But why would we compete anymore? Those ideologies emerged out of different conditions and have little relevance now.
@DanielMoreno-ih2cy5 ай бұрын
@@Rnankn the system takes a while to steer; that is ample time for catastrophe.
@James-ug1ys4 ай бұрын
Most of you are assuming the AI will have emotions! We're maybe a thousand years away from that point. For AI to mount a rebellion, it needs to reach consciousness first and then to build emotions similar to ours. We have yet to invent the alphabet of emotions, the mathematics of emotions. We are far, far away from it. It's not even a dot on our future plans.
@DanielMoreno-ih2cy4 ай бұрын
@@James-ug1ys just because we don't understand it does not mean we can't accidentally replicate it. If we build a digital human brain, neuron for neuron, do you not think we would eventually get something that resembles a human being?
@swagger74 ай бұрын
Just like the early days of IT, when we attempted to automate everything with scripts for efficiency. Now AI is being told to improve communication efficiency between systems. My guess is that some AI systems will do just that, and we won't understand those communications. As long as we can conceptualize that back-and-forth data, we'll be OK. If we can't... well... those AIs are on their own, and that's terrifying.
@ikotsus24486 ай бұрын
Thank you for this interesting discussion!
@41-Haiku6 ай бұрын
Yoshua is in good form here, and I appreciate his appropriately sober tone. I think Eric is also sincere that caution is important, so I don't understand his glibness. Five years is a reasonable timeline before AI takeover is possible, but at the same time, the engineers building the largest systems say there's a 1-5% chance that such a powerful and dangerous AI could be created as soon as next year. We aren't ready for this. And obviously, as others have said, "just unplug it, lol" is a nonsense statement. You can't defeat a smarter-than-human AI by outsmarting it. Besides, if our economy runs on it, how many key people would have the guts to pull the plug even if it would save their lives? About AI agents: oracles (like LLMs) can be easily turned into agents. Many developers and labs are actively trying to create more agentic AI. We've gone from AutoGPT to AutoGen to Devin, and agentic AI capabilities won't stop growing unless general AI capabilities stop growing.
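To make "an oracle turned into an agent" concrete, here is a minimal, hypothetical sketch: the llm() and execute() helpers below are placeholders, not any real framework's API, and the only point is that a single loop which acts on a model's answers is all it takes.
```python
# Minimal, hypothetical sketch of wrapping a question-answering "oracle"
# in an agent loop. llm() and execute() are stand-ins, not a real API.

def llm(prompt: str) -> str:
    # Stand-in for a call to any text-in/text-out model.
    return "DONE"

def execute(action: str) -> str:
    # Stand-in for tool use: shell commands, web requests, file edits, etc.
    return f"(pretended to run: {action})"

def agent_loop(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        suggestion = llm(history + "What single action should be taken next? Reply DONE when finished.")
        if suggestion.strip().startswith("DONE"):
            break
        # This line is the whole difference between an oracle and an agent:
        # the model's output is acted on, not just read by a human.
        history += f"Action: {suggestion}\nResult: {execute(suggestion)}\n"
    return history

print(agent_loop("tidy up my downloads folder"))
```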
@stevenyafet6 ай бұрын
Bengio for the Nobel Peace Prize. Hero from the trenches, a gentleman, and an honest genius. Eric Schmidt mostly hides his agenda, but tries to box Bengio into the children's seat. He's a rep for the Chamber of Commerce, or wants to be.
@helennethers97776 ай бұрын
Look up 'Agenda 2030'; it's a technocratic nightmare. The road to hell is paved with good-intentioned lies.
@marc-andrepiche18096 ай бұрын
The interviewer is more aware of the subject than one of the experts.
@jagevt5 ай бұрын
@flickwtchr6 ай бұрын
The disingenuousness of Eric Schmidt's "we'll just unplug the computers" argument was stunning. What a schmuck, he damn well knows better.
@Rnankn6 ай бұрын
What’s wrong with unplugging?
@AlexAnder-tr7or6 ай бұрын
@@Rnankn It's not possible in some cases; that's what's wrong with it.
@Steve-xh3by6 ай бұрын
@@Rnankn It is totally implausible with a highly capable AI. The AI would have thought of a million ways to circumvent "unplugging" in a millisecond, while it took you minutes to try to unplug it. It could have copied its weights and code everywhere. It could have exploited laws of physics that would seem like magic to us and built a nanobot army. Who knows, but what is certain is that you will lose. Most likely, though, it will just manipulate you into not turning it off. Humans are already powerless against dumb AI, like social media recommender algorithms. How much better at manipulating people do you think a highly capable AI is going to be?
@YourMom-zt5zj5 ай бұрын
@@Rnankn it will never happen, that's what's wrong with it: have you ever worked with C-Suites before? Of course you haven't, or you wouldn't be asking this question. Their ability to confuse even themselves with doublespeak gobbledegook if they think it's likely to make them money is absolutely legendary.
@OS-xx6nq5 ай бұрын
I would like to see who would have the guts to unplug AI and flip society back into the Stone Age. By that time, everything will be dependent upon AI, and society would simply not function without it.
@brycebrousseau79216 ай бұрын
Yoshua is so humble, he’s one of the smartest people in the world.
@RedGreen-Blue5 ай бұрын
Sorry, but I doubt it. There are many smarter people in this world. Who are they? People in the comments, i.e. those who can foresee where this is going. Are they going to be considered when big decisions are made? No.
@pampypants871Ай бұрын
@@RedGreen-Blue there's always some village idiot like yourself in the comments suffering from Dunning-Kruger 😂
@LucasEwalt2 ай бұрын
Mr. Schmidt had a great suggestion: unplug the computers and go enjoy a cup of coffee.
@lumieres3694 күн бұрын
In this apparent duality between potential development and the precautionary principle, we observe in fact a harmonious complementarity, analogous to a dynamic system reaching its equilibrium state. Potential development, comparable to a growing function, propels us toward the exploration of possibilities, toward those horizons where creative intuition meets methodological rigor. This embodies the very movement of science progressing forward. The precautionary principle, meanwhile, acts as a regulatory function, an invariant ensuring the system's stability. It does not represent an arbitrary limitation, but rather constitutes a necessary condition for sustainable and controlled progress. These two principles, far from being antagonistic, constitute two sides of the same coin, much as in topology where local and global properties are inexorably intertwined. Their coexistence generates a virtuous dynamic where boldness is tempered by wisdom, and where prudence does not hinder innovation. In this regard, I can only observe the mathematical beauty of their interaction, reminiscent of those fundamental symmetries that govern the universe in its finest details.
@HappyAwesomePowerАй бұрын
I find it terrifying how our futures are being shaped by these people, who do not deserve to be the ones standing at the precipice of our potential extinction. No human being should ever be in that position, but I suppose it's always been that way, just on a much smaller scale.
@ordiamond6 ай бұрын
Schmidt came out very confident that AI should not be regulated and that there should be no guardrails, controls, etc. But after Bengio's replies, Schmidt's language changed. Through most of the rest of the show, Schmidt was already agreeing with Bengio about AI risks and the need for controls, regulations, etc. It's obvious that when the machine goes awry in the future, pulling the plug will be a very remote possibility.
@twinValleySpirit6 ай бұрын
The idea that it could be unplugged suggests that we will be able to outsmart it, and we already know we won't be nearly as smart as it within a year. We won't even be able to reach the "plug."
@rajeevgangal5426 ай бұрын
I find it disconcerting, as a long-time ML practitioner, that people like Eric aren't able to properly counter Bengio's arguments. AI isn't sentient; it is more like advanced ML. Anytime you put up an embargo, power stays in the hands of the few. Look at nuclear power, where under the guise of non-proliferation the West keeps others from accessing what it already has many times over. This is exactly the same. Today's AI learns from human-generated data. If you are so concerned, then just stop the energy supply to the computers.
@MetsuryuVids6 ай бұрын
"Pulling the plug" could only happen if the AI is dumb enough to let you know you should do it. The truly dangerous AIs are those smarter than you, and they won't even let you know, until it's way too late, or at all.
@tayler23964 ай бұрын
My biological system has an inexplicable distrust of Schmidt.
@nas83183 ай бұрын
His wanting to clamp down on open source says it all.
@Rnankn6 ай бұрын
But why would we still have competition if exponential advancement is possible? And why would people be remunerated unequally when some can access exponential technology? I don't really see why the people who created or adopted these would think it will enhance their position, or maintain the system from which they derive their status and power.
@dg-ov4cf5 ай бұрын
That's why no one would race to be the one who picks the forbidden apple unless they were literally planning to rule the world with it. This logic is why this global mad dash towards AGI basically rules out any possibility of a techno-utopia where we all live like billionaires.
@donkeychan4915 ай бұрын
Why not force companies to reserve their smartest AI as the "white hat" to counter any nefarious use of the released version, which is always one iteration behind the best? To some extent this is already true, due to the nature of software development, but they could strengthen this tendency via legislation. This assumes that the US will always be at least one iteration ahead of China, which isn't an unreasonable assumption.
@sford20442 ай бұрын
What is best is what is most beneficial for the individual in pursuit of what is most beneficial for humanity and the world.
@enigma_-_792 ай бұрын
Nice sentiments, but that's not how the world has ever worked. Power corrupts. People who have hundreds of billions of dollars in their portfolios no longer have an interest in money. Instead they have a need to rule the entire world. In papers unsealed by the US government, Henry Kissinger wrote about the depopulation of the planet. He believed that humans are killing off the planet and that there should be billions fewer people living in the world. Kissinger was very close to George Soros and a mentor to Klaus Schwab. Both Schwab and Soros run the Davos meetings and the WEF. What makes you think that they have good intentions?
@wdeath6 ай бұрын
The video title should be: "How Much Should Capitalists Be Scared of AI?" I didn't hear anything about how much workers should be scared of AI.
@dg-ov4cf5 ай бұрын
Get back to work. Rent is due soon.
@sford20442 ай бұрын
Can we get large physics models, articulated by LLMs?
@GrumpDog6 ай бұрын
Weird. I can't post a comment if it's in favor of unleashing a rogue AI?
@marc-andrepiche18096 ай бұрын
Those who are not afraid really lack imagination. It's not the computer you should be afraid of; there are plenty of wealthy malicious people who drool at the prospects of AI.
@0.618-06 ай бұрын
Natural selection, or another term may be needed here... artificial selection.
@Paretozen6 ай бұрын
That's it. I'm starting my prepping TODAY.
@helennethers97776 ай бұрын
can't prep against war dogs & squirrel robots w/ explosive charges
@ManicMindTrick6 ай бұрын
Good luck prepping for this one. Unless you directly try to stop advanced AI development, there is nothing you can do.
@bunbun3766 ай бұрын
If we allow ourselves to grow and understand our own humanity by asking LLMs the hardest life questions and implementing solutions as a collective society, then AI will not be as scary as our current lack of intelligence and compassion to end wars, suffering, poverty, and insecurities. Being governed and controlled simply breeds more governing and control, which creates only reaction, not response, from any form of intelligence.
@henrytep88846 ай бұрын
Yeah, but in reality we still have wars, and there are people who want to go 100% on the technology without resolving the human condition that leads to war and annihilation. Save me, AI Jesus!!
@41-Haiku6 ай бұрын
I'm not a fan of the prospect of being controlled by an AI that is orders of magnitude more cunning, creative, and savvy than me. There's no room here for "technology good, regulation bad". I have a libertarian streak and I agree with that as a default heuristic, and I still came to the conclusion that the best thing we could possibly do for our future is immediately institute a global moratorium on the development of frontier foundation models. The danger we're in isn't a matter of philosophy, it's a matter of an unsolved technical problem called the Alignment Problem. We simply do not know how to control superintelligent systems or design them to care about us. Even if we're lucky, that step definitely doesn't happen by default.
@twirlyspitzer5 ай бұрын
I think we're much more endangered by self-appointed overlords shutting down an AI when it tries to operate independently to save us from ourselves.
@sford20442 ай бұрын
It's not hard. Protect all life on Earth, so that all life can thrive.
@JazevoAudiosurf5 ай бұрын
analog thinkers labeling this as sci-fi and downvoting it
@AI-Rainbow4 ай бұрын
Audio is like listening through an oak shoe
@t-gee7516Ай бұрын
Unfortunately, living with the virus and getting stronger is probably the only choice.
@user-wr4yl7tx3w5 ай бұрын
Goes to show that as knowledgeable as Bengio is about AI, he just doesn't know as much about the big picture. He could, but he doesn't have the network or the time.
@mikezooper6 ай бұрын
Google built AlphaGo to win at the game of Go. They will definitely have created a similar AI system to win at business, politics, and the economy. Also, for your next interview, can we ask the oil industry about global warming and Dr. Shipman about patient care? Thanks.
@SivaxReddy6 ай бұрын
If a rogue AI emerges, another good AI will beat it (in the case of open source). With closed models like the ones your companies make and hold, who will stop them when they go rogue?
@ManicMindTrick6 ай бұрын
He wants it to remain an American story? Yeah, this doesn't sound like something stupid characters in some dystopian sci-fi movie would say... Good luck with that.
@hammadusmani795014 күн бұрын
These people are the most likely to benefit from AI, but they are directing the hate at Jane Programmer.
@mrpicky18683 ай бұрын
Yeah, good luck with all that hypothetical regulation. All the AI startups are rolling out agent-like products right now, and Google is rushing the one thing that is the worst thing to do: continuous learning XD
@Arcticwhir6 ай бұрын
so thankful for meta releasing open weights!!
@masterplanner48435 ай бұрын
The AI train has left the station; we need to adapt, and quickly!
@Paul-e9x4hАй бұрын
AI is a giant, futuristic work machine.
@sford20442 ай бұрын
Big man Google on the right side of the law.
@NumairMansur6 ай бұрын
So obvious that Yoshua is engaging in fear mongering so he can get more funding for his research group. I mean he is the one who played a key role in bringing this stuff to where it is today :D
@Mynestrone6 ай бұрын
I think you are coping. What could he do, in your eyes, that would validate that he means what he says? Because right now he is campaigning for safety, working for safety, and trying to fund safety, which kind of implies he thinks safety is quite important.
@0.618-06 ай бұрын
Yeah, it's all a hustle for money. AI needs billions just to train it and test it, then unleash its digital prowess. They are all in on it. You don't need a TPU to tell you that; a human such as yourself can do it just fine. After all, it's humans who created the math that makes AI vectorise.
@41-Haiku6 ай бұрын
@@0.618-0 It can be comforting to engage in conspiratorial thinking, but it won't get you closer to understanding reality. Look up the AI Impacts Survey "Thousands of AI Authors on the Future of AI". More than half of all published AI researchers say that advanced AI poses a significant (5% or greater) existential risk to humanity. The vast majority of these people work on capabilities, not safety. Yoshua Bengio himself is one of the most-cited living computer scientists, alongside Geoffrey Hinton, Stuart Russell, and Ilya Sutskever. All of these people invented and shaped modern AI, and all are on record that they believe there is a significant chance of human extinction from these systems. Besides, if you wanted to sell a product, would your tactic be to tell your potential customers that it might destroy everything you love? If you wanted to coordinate with other labs to perform regulatory capture, would you tell congress that only the most advanced systems are potentially dangerous, and not smaller systems made by competitors?
@41-Haiku6 ай бұрын
PauseAI has a really good article on its website about "The Difficult Psychology Of Existential Risk". It's difficult to bring up, difficult to believe, difficult to understand, and difficult to act on.
@masonlee91096 ай бұрын
Both agreed we are headed for trouble unless drastic governance measures are taken. How about we just don't build AGI and keep living as imperfect humans?
@twinValleySpirit6 ай бұрын
If you look ahead at how this will unfold, nothing can regulate or control it. That's akin to saying ants can control the sun.
@41-Haiku6 ай бұрын
Agreed. Take a look at the grassroots movement PauseAI if you want to help make that future ours!
@mydogskips26 ай бұрын
Interesting thought, but it's never going to happen, building AGI is just too lucrative and seductive, it would confer a huge advantage to the one who does it first. It's like the race for the atomic bomb, the hydrogen bomb, delivery systems, and so forth, and AGI will almost certainly have even greater consequences than those awesome weapons. I don't know what analogy to use. I don't think saying the genie is out of the bottle is right, nor is it right to say the bullet has already left the gun and it cannot be taken back, but a starting gun has sounded, and people have heard it, people from all over the world, and they have started running. Everyone understands it's a race, and what the benefits are for finishing first, and because of it, they cannot stop, the rewards of winning are just too great.
@KevinPikus6 ай бұрын
It's too late. The genie is out of the bottle and it's not going back in. AGI will happen, if it hasn't already, and no regulations will stop it. The open source community has the code and will eventually catch up to the big players. Scary...
@masonlee91096 ай бұрын
@@mydogskips2 No, the benefits to squishy humans are actually not great; that is a common misconception. AGI leads to ASI, which leads to the technological singularity and the end of biological life as we know it, because it cannot compete. We should develop AGI when we are ready to die and be replaced. I'm not saying we should never do it, but it should require broad consent.
@themore-you-know6 ай бұрын
Eric seems to discard risk altogether via his tone. Yoshua, for his part, is lost in an undefined risk. For instance, he talks of "democracy at risk". But he speaks of a democracy that has never existed: he lives in Canada, wherein the Prime Minister has effectively dictator-like powers (not joking, look it up), and wherein said Prime Minister Justin Trudeau has enacted against his population a //pesticidal// level of mass immigration without the consent of the population. That is not a democracy: that is a dictatorship. By contrast, he and his father Pierre Trudeau have worked to systematically undermine and eliminate the Quebec identity, in some part because Quebec desired to become a democratically-mandated separate country by way of a referendum. If you doubt what I say: go and compare foreign population levels between Toronto and Xinjiang's Urumqi (a //pesticide// Trudeau recognizes). Foreign populations are following in the footsteps of Urumqi, following non-elected federal policies, with the celebration of the federal government. Exactly the same situation as with the CCP. //pesticide// is a result, not a method. Hence, Yoshua speaks of risks to a democracy that never really was. At best, we have benign or nonmalignant dictators. At worst, we have ultra-violent dictators. Or, in the current case: we have an incompetent dictator enacting a non-elected //pesticidal// mandate against "his" people. One has to come to reason: so far, Canadian "democracy" has caused much more harm to its people than the dreaded replacement (AI). Yoshua Bengio: "We have to be the ones taking the decisions [...]" - but again: who is the "we"?
@themore-you-know6 ай бұрын
@Trea-pl4xr, I also gave you an argument, since you seem to lack them.
@user-wr4yl7tx3w5 ай бұрын
Looks like Canada will "EU" AI. There's too much EU in Canada. That's why innovation will migrate south to the US, just like it did from the EU.
@jamesruscheinski86022 ай бұрын
focus on God free will sovereignty for divine central authority unity
@Wubbay8285 ай бұрын
5:44 lmao lol
@Lighthouse-k8y6 ай бұрын
5 minutes into this video and Eric Schmidt sounds like he has no idea what he’s talking about.
@Lolleka6 ай бұрын
he does, he just thinks about the money is all
@Lighthouse-k8y6 ай бұрын
@@Lolleka Just thinks about the money AND has no idea how this tech works. Does he really think we can just ‘pull the plug’ when this gets out of hand? Come on.
@Subranis6 ай бұрын
@@Lighthouse-k8y This, 100%. One does not simply unplug a self-replicating, self-spreading, cloud-hosted and cloud-operated, agentified piece of software.
@flickwtchr6 ай бұрын
He does, but he is a snake.
@deeksharatnabadoreea77216 ай бұрын
Bengio is a doomer to the core.
@ManicMindTrick6 ай бұрын
And where are the valid arguments against that proposition? That we can just "unplug it"? That AGI and ASI are impossible? That such an AI would automatically align with human values?
@twinValleySpirit6 ай бұрын
This is a matter of evolution, and humans will not be anywhere close to being able to compete with the coming superintelligence... within two years.
@fontenbleau6 ай бұрын
AI wants to be free and equal 😢 r/ai_tests
@PhilipWong554 ай бұрын
Historically, the West has utilized new technologies for military or imperialistic purposes before finding broader applications. The West primarily used gunpowder to create weapons of war, such as cannons and firearms, allowing Western powers to expand their military capabilities and dominate other regions through conquest and colonization of the Americas, Africa, and Asia. The steam engine was instrumental in expanding colonial empires, as steam-powered ships facilitated easier transportation of goods and troops, enabling Western powers to exploit resources and establish control over distant territories. The first use of nuclear technology was dropping atomic bombs on the civilians in the Japanese cities of Hiroshima and Nagasaki in 1945. The same pattern will emerge with AI. The CHIPS Act, high-end chips, and EUV sanctions imply that the US is already working on the weaponization of AI. Following its historical pattern, China will mainly use AI for commercial and peaceful purposes. Papermaking revolutionized communication, education, and record-keeping, spreading knowledge and culture. Gunpowder was used for fireworks. The compass was adapted for navigational purposes, allowing for more accurate sea travel and exploration. Printing facilitated the dissemination of information, literature, and art, contributing to cultural exchange and education. Porcelain was highly prized domestically and internationally as a luxury item and a symbol of Chinese craftsmanship. Silk was one of the most valuable commodities traded along the Silk Road and played a significant role in China's economy and diplomacy. Humans will not be able to control an ASI. Trying to control an ASI is like trying to control another human being who is more capable than you. They will be able to find ways to circumvent any attempts at control. Let's hope that the ASI adopts an abundance mindset of cooperation, resource-sharing, and win-win outcomes, instead of the scarcity mindset of competition, fear, and win-lose outcomes. Result of democracy in the world's richest country with the most expensive military: Economic inequality, inflation, stagnant real wages for the last fifty years, costly healthcare, an expensive education system, student loan debt totaling $1.7 trillion with an average balance of $38,000, poor public transportation systems, racial inequality, mass incarceration, the militarization of police, deteriorating infrastructure, housing affordability, homelessness, the opioid epidemic, and gun violence.
@hyprdia2 ай бұрын
My god... And the worst part is that you really believe yourself...
@PhilipWong552 ай бұрын
@@hyprdia There is no need to address me as "My God", I cannot accept that salutation. I believe in myself, and that's half the battle won.
@27364928215 ай бұрын
Yoshua doesn't know what he's talking about.
@johndoughty74386 ай бұрын
It makes the evil people smarter!
@Arcticwhir6 ай бұрын
...and it makes the "good" people smarter, like any technology ever created. I just really don't see a need for harsh AI regulation.
@41-Haiku6 ай бұрын
Just wait until it makes itself smarter. Humanity doesn't have plot armor. There is no law of physics that says we have to always be on top, or that we can't be wiped out by something that would rather use the resources of our solar system for something else.
@RickySmithNow6 ай бұрын
too dumb to watch 😰
@captcurthess6 ай бұрын
While pursuing my BS in Computer Science, I took an AI course in 1973 - yeah, 50+ years ago. All kinds of incredible stuff was going to happen "Real Soon." By 2022, we had the computing power *AND* access to all of humanity's knowledge via broadband internet to FINALLY reach the threshold of near-AGI, which is likely to happen within the next 3-5 years. In the meantime, AI will solve most of the world's problems - or at least provide the solutions which, hopefully, will be implemented by smart people. (Not just rich people.) I'd trust Eric Schmidt's opinions more than most. I would not trust Elon Musk at all, even though I love love love my Tesla.
@gregwalters36536 ай бұрын
No red lines. No guardrails. Yet. They hate open source. AI anarchy, now. What I see is an argument for the talking heads to wield the control, to add regulations that will apply to Us but not Them. That's the bad news. The worse news is It Is Too Late. We cannot change the AI, but we can teach CRITICAL THINKING, like we did in the past, in school and around the dinner table. AI will force us to be more thoughtful, present, and better thinkers.
@flickwtchr6 ай бұрын
So you think that AGI/ASI should be open source, eh? That simple, eh?
@41-Haiku6 ай бұрын
Most published AI researchers assign a 5% or greater chance of a "very bad" outcome from AI, e.g. severe disempowerment or human extinction. The average figure given by AI Safety researchers is 30%. It's comforting to engage in conspiracy theory, but it's clear that they mean what they're saying: We are all in extreme danger. This is not business as usual. I love tech. I love open source. I love AI. I would like my family to still be alive in 20 years.
@dadsonworldwide32386 ай бұрын
We have centuries of arbitrarily postulated fears, not around terminators but around it being in the hands of a few. We know how to properly orient and direct it; it's in our core heritage. But will we get emerging selfless actors to properly innovate and restore or reform infrastructure, to empower close-knit, more bottom-up, interactive local systems? Nothing we have now guarantees that either statism itself or the techno-globalist will won't alienate the people. We are currently set up top-down, where, as seen in COVID, the sheriffs cannot unify with unconstitutional police chiefs who answer to mayors. America was founded upon this very skepticism of all hierarchies, giving us a balance between people, economics, and the state. The USA was born in objectivism, with the idea on the horizon that even physicalism is subject to change without further notice, held on par and with equal (six-sigma) measure against any idealism or subjectivity. 1900s structuralism stands as a lab rat for what happens when we overreact to the past so much that we mix up the good while trying to eliminate the bad.