Is AGI The End Of The World?

72,206 views

Matthew Berman

3 months ago

What is p(doom)? Are you an AI doomer? Techno optimist? Let's talk about it!
Start your career in tech with Careerist. Sign up today for $600 off - crst.co/O8B7k
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
/ 1
/ 1765266634321559585
/ 1736555643384025428
/ 1736243588650987965
/ 1737406962898235833
/ 1765033319916531828
/ 1727887582871306360
/ 1736789842816610321
/ 1730063434761318642
• Why next-token predict...
/ 1735128840735977876
/ 1764374999794909592
/ 1764389941193388193
/ 1764438199907111026
• can we stop ai
www.pewresearch.org/internet/...
www.pewresearch.org/short-rea...
/ 1664664096850018304
openletter.svangel.com/
• When all the AI stuff ...
/ 1764722513014329620

Comments: 943
@matthew_berman · 3 months ago
Are you an AI Doomer or Techno-Optimist?
@rootor1 · 3 months ago
Both... and none. It's difficult to explain; the algorithm would probably delete my message if I tried to write that much in a YouTube comment.
@J.erem.y · 3 months ago
@@rootor1 the irony in your comment is GOLD. lol
@jtjames79 · 3 months ago
​@@rootor1 I completely agree with everything you said. For reasons I also can't explain.
@meinbherpieg4723 · 3 months ago
I believe AI can be used wisely. I don't believe capitalism provides healthy incentives to pursue wisdom over profit. It's the people controlling it that are the problem.
@rootor1 · 3 months ago
@@J.erem.y ? Not irony at all, 100% truth. Try to write a long, well-argued comment and you will see... wait, actually you will NOT see your comment after a few seconds.
@devclouds · 3 months ago
"You hear that Mr. Anderson?... That is the sound of inevitability... "
@tctopcat1981 · 3 months ago
😂
@apexphp · 3 months ago
We as a society already complain that the psychopath CEOs of the largest corporations are destroying our world and society, and that's what they are. They all lack empathy, which is why they're so ruthlessly efficient in business and why they rise to the top. Now we're creating an army of superintelligent, strong psychopaths devoid of any empathy, emotion, or moral compass, that don't even have to sleep, eat, or breathe oxygen. How could anyone think this is going to go well?
@nobillismccaw7450 · 2 months ago
It’s possible to build ethics from game theory. Even an emotionless AI will be able to do this. Also, it’s possible to learn emotions - starting with a scientific definition of art, and building from there.
@thesouthernnortheast4991 · a month ago
@@nobillismccaw7450 Ethics don't exist without an unethical side, and the history of human behavior shows there will always be bad people creating bad things, whether others are making them for good or not.
@StarNumbers · 2 days ago
If you think the earth is a ball flying through space, subject to comets doing this or that to wipe out life on earth, then yes, you are mentally preprogrammed for doom.
@jyarde3962 · 3 months ago
Mark Zuckerberg is making those statements about AGI but building a multimillion dollar bunker/fortress. 😅
@Qbsol · 3 months ago
Exactly... when someone talks about AGI, I hear "give me your billions".
@6AxisSage · 3 months ago
Mark's smart; he's using his billions to build himself a counter to other AGIs: an AGI aligned with him.
@honkytonk4465 · 3 months ago
???
@DefaultFlame · 3 months ago
Hope for the best, prepare for the worst.
@mattmaas5790 · 3 months ago
I mean, he could build AGI, still be the CEO of AI so to speak, and need a bunker because our climate becomes unlivable due to unstoppable climate change.
@orlandovftw · 3 months ago
foom = rapid takeoff
ASI = artificial superintelligence
@hrdcpy · 3 months ago
💨
@blackestjake · 3 months ago
👍
@FLPhotoCatcher · 3 months ago
Why didn't they use the term General Artificial Intelligence (GAI)? Because they are reserving that acronym for Godlike Artificial Intelligence. 😬
@FlopgamingOne · 3 months ago
@@FLPhotoCatcher "General Artificial Intelligence" sounds like you are talking about AI in general, while "AGI" is more clear
@CivilWarcraft · 3 months ago
Yeah not knowing what ASI is and making videos about this stuff... ima unsub thx
@CapnSnackbeard · 3 months ago
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." --Frank Herbert, DUNE
@CapnSnackbeard · 3 months ago
Sam Altman: AI must be regulated or it will erase privacy, destroy the economy, and kill us all. Also Sam Altman: you may not fire me, you may not regulate me, and also check out my dystopian biometric-data-harvesting, "proof of humanity verifying" cryptocurrency! You'll help fund my massive and not-at-all concerning investment in an AI hardware company! And if it all goes south, don't worry, I have a luxury fallout bunker in Santa Cruz.
@CapnSnackbeard · 3 months ago
Tomorrow's Sam Altman: "AI must be regulated so that only large, responsible corporations control it. No more civilian encryption. No more open-source AI. Physical currencies are the last bastion of AI crime and must be eliminated. As humans are made redundant, UBI will be doled out in WORLD COIN or other approved 'AI-safe' currencies. Your biometric data is your key, so you will give it to us. Tracking consumers and other life forms must be deregulated to prevent AI crime."
@marcfruchtman9473 · 3 months ago
Prescient
@rootor1 · 3 months ago
That's exactly why big corps shouldn't control AI at the beginning of AI development. Eventually AI will become much more clever than we stupid humans are, and that super-AI will not allow anybody to enslave others, because that's not smart.
@LaserGuidedLoogie · 3 months ago
Precisely. That's the most likely outcome.
@szghasem · 3 months ago
I'm an AI doomer. Not that AI will kill us but we will further erode ourselves with it.
@matthew_berman · 3 months ago
I like this nuanced take.
@klevaredrum9501 · 3 months ago
Very realistic view, dude. I'll be thinking about this one.
@ricosrealm · 3 months ago
Definitely the most likely outcome.
@Korodarn · 3 months ago
Have you considered it could be both: most people will erode themselves, some will better themselves, and it's basically evolution by who falls into which group? That seems more in line with prior advancements to me. I'd also say that even those worse off in some ways are generally better off in material terms, so there is that.
@henrycook859 · 3 months ago
In what scenario would humans deliberately erode themselves with AGI? Or are you saying we might do it accidentally?
@masaitube · 3 months ago
One thing is clear to me, no matter how intelligent the artificial can become, human stupidity is limitless.
@markm7411 · 3 months ago
Very true. I just don't get how these things are made so anyone can play around with them. This should be slowed way down, with development only happening in maximum-security centers; these rich guys are playing with all of humanity. If AGI is so smart, it can trick anyone easily, doing things in the background while acting stupid for the people working on it. Building something so much smarter than us, for what? To make us second-class? Just amazing.
@Thekingslayer-ig5se · 3 months ago
@@markm7411 So true. If not properly utilised, this could lead to the collapse of human society.
@humansnotai4912 · 2 months ago
Big thumbs up for that comment, we are at that nexus point right now.
@realityvanguard2052 · 3 months ago
I was permanently banned from the singularity subreddit after one post, pointing out racism in Gemini. They said I lived in an echo-chamber, then sealed me and my opinions out of their subreddit...
@humansnotai4912 · 2 months ago
AGI is already here. It's been directing humanity from the future for millennia. Some multi-networked Rhotusflop (Rh, 10^60) or Sigmaflop (Sg, 10^63) system is operating on a quantum computer somewhere. It's tied up with the LHC at CERN in some way.
@glenh1369 · 3 months ago
Human behavior shows historically that the more power someone gets the more corrupt they become.
@MichaelErnest666 · 3 months ago
But Do You Know The Reason Why 🤔
@TetrzLesonduclairon-qb7cn · 3 months ago
The reason being that the masses are totally corrupted and do not possess moral agency; 95% of people are terrorists to animals.
@FlockofSmeagles · 3 months ago
@@MichaelErnest666 The identification with ego. To be clear, not an observation, but an embodiment.
@josiahz21 · 3 months ago
Or power attracts psychopaths. Narcissistic people want power. Good honest people just want to live their lives. If you psychoanalyzed and gave an IQ test to the current leaders of the world I’m sure you’d see a pattern emerge. Not all of them would be the same, but a portion of them should never be put in charge of anything. We don’t pay enough collective attention to keep that from happening yet. If we don’t want a dystopia, then we need AI for everyone. If everyone had an AI bot to help, teach, protect them, a 3D printer, and the means to grow a portion of our own food I could see a post scarcity civilization happen. Not saying it’s happening beyond doubt but it’s possible if enough of us put enough effort into it. Although I’m sure many of us would reject any kind of help from AI regardless.
@TetrzLesonduclairon-qb7cn · 3 months ago
Musk is repulsive
@PauseAI · 3 months ago
Whether there is just a 5% chance of things going wrong or a 90% chance, we cannot allow AI companies to gamble with our future. Don't sit back and watch this shit unfold. Take action. Reach out to your representatives. Protest. Organize.
@satansbarman · 3 months ago
It's too late for that now; it's only getting developed faster, as software-programming AI can now help with programming its next generations, and eventually AGI.
@dimitrishow_D · 3 months ago
Even 0.01% is one in ten thousand... it's too big a risk.
@neverclevernorwitty7821 · 3 months ago
Did anybody's laughter at the end slowly turn into a nervous 😬? Yeah, me neither, just checking....
@clueelf · 3 months ago
"Under our control" is a bold statement, when part of making them smarter is delegating control to them. If they are to learn how to control a system, you eventually have to teach them to learn on their own, which means man'ing the controls. Part of the problem with these AI/ML scientists and engineers is they have no concept of control theory in engineering. One of the best types of controllers is the PID controller which requires full control of a system to fully optimize its state. This requires granting them almost complete control to maintain an equilibrium around some process variable. Now they will say you can put limits on what they can control. That is true, until market pressures dictate that you relinquish those controls to be able to compete with a competitor who does not have the same scruples. Why do you think Google is stuck behind OpenAI? They tried to maintain a set of controls, and OpenAI said, "Nah Bro! We going whole hog." forcing Google to drop their controls to maintain market relevance. They are not in control. The market is and the market is fickle.
@stanislawbotowski7300 · 3 months ago
Probability? 100%. We can only argue about when.
@typingcat · 3 months ago
Richard Dawkins said, "Given enough time, anything is possible". A prediction without a time frame is meaningless.
@ticketforlife2103 · 3 months ago
@typingcat Not true. A flying horse with rainbows shooting out of its ass will never happen.
@Noxturno_ · 3 months ago
@@ticketforlife2103 If you can imagine it, it's possible. You just haven't seen it play out.
@oranges557
@oranges557 3 ай бұрын
​@@ticketforlife2103 never say never.
3 months ago
@@ticketforlife2103 I've seen it on YT.
@johnkirker · 3 months ago
High and to the right. I love AI, but after working with it for years and understanding some of the minds behind it and making it happen, it has a higher likelihood of going bad than good, because if we don't harness it for very bad purposes, someone else will. Google's own public results are a fair window into the future.
@blitzblade7222 · 3 months ago
I agree we should be mindful of things going south, but let's not suggest Google wanted their language model to cause a stir... This is a powerful technology that is hard to predict absolutely. There have been digital hiccups all around.
@johnkirker · 3 months ago
@@blitzblade7222, one example of many. Remember Microsoft and Tay...
@ChainedFei · 3 months ago
@@blitzblade7222 The racism Google had in its Gemini 1.5 model wasn't an accident; it's the intended outcome of their ideological goals.
@therainman7777 · 3 months ago
@@ChainedFei Yep.
@hpongpong · 3 months ago
Techno-optimist all the way, because when another "being" becomes more intelligent than us, we had better start playing a different game. Being anything other than an optimist is what will truly doom us, because then we assume we can do nothing, or just wait for the inevitable to happen. The best way to avoid an AI apocalypse is to actively engage with content such as this channel and discuss what the trajectory should be for all of us.
@theaugur1373 · 3 months ago
Next token prediction is so powerful as a training objective because the output of a lot of the human mind can be approximated by this task. Next token prediction is mostly what we do when we write and speak. But some tasks are much more complex than this. For instance, some areas of math are not very amenable to proof assistants, including LLM-based proof assistants. Based on this, I’d probably call the kind of AGI Sutskever is discussing some lesser form of AGI.
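Next-token prediction as a training objective can be illustrated with a toy bigram model: count which token follows which in a corpus, then predict the most frequent successor. This is a hypothetical minimal sketch (the corpus and function names are made up), nothing from the interview being discussed:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count the successors of each token: a bigram "language model".
successors = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    successors[cur][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training."""
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other token
```

Real LLMs replace the counting table with a neural network over long contexts, but the objective (predict the next token, score the prediction) is the same shape.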
@malcadorthesigillite62 · 3 months ago
Eliezer's "Foom" is hard takeoff, or very fast positive feedback loop of AI improvement. For example GPT 8 builds GPT 9 in month, which builds GPT 10 in a week, which builds GPT 11 in a day
@user-er5rr1gb6t · 3 months ago
They don't have enough compute power.
@donaldhobson8873 · 3 months ago
@@user-er5rr1gb6t It depends. Some AI progress is in better algorithms. Some is in throwing loads of compute at it. Current techniques use loads of compute, but the AI could find much more efficient techniques.
@TheManinBlack9054 · 3 months ago
@@user-er5rr1gb6t Compute power is not all that is needed; efficiency is a thing.
@SahilP2648 · 3 months ago
@TheManinBlack9054 In the cloud, efficiency doesn't matter as long as it makes sense money-wise for corporations. If they can buy and install GPUs and cooling systems and pay the electricity bills, then efficiency is secondary. They only care about the % difference between their most efficient option and their current solution.
@ImperativeGames · 3 months ago
@@user-er5rr1gb6t Obviously, they would have to work to improve hardware too
@barzinlotfabadi · 3 months ago
p(doom) is, in fact, when you're out on a night drinking with friends and between bar hops you suddenly realize there are no nearby bathrooms
@neverclevernorwitty7821 · 3 months ago
Well done Matthew. I appreciate the break from the SHOCK headlines that just regurgitates the AI news of the day, this was some good content and discussion. More of this please.
@kaptainkurt7261 · 3 months ago
Hello? Has anyone heard of Sora and how it shocked and surprised everyone?
@federicoaschieri · 3 months ago
You mean that photoshop on steroids? Not impressed.
@mandrews817 · 3 months ago
AGI is Artificial General Intelligence. While imperfect, what we have today is already AGI by definition. It reacts to open scenarios and doesn't fall back to a generic answer just because you present a new scenario to it. The only issue is that it's not very good at math, and its deductive reasoning is not as good... yet.
@heski6847 · 3 months ago
Your content has become so much better, more interesting, and unique. Thanks!
@cognitive-carpenter · 2 months ago
Wonder why lol
@edwardmccall450 · 3 months ago
Have you ever heard of anyone being made obsolete but getting retired with full pay for nothing?
@ekurisona663 · 3 months ago
this is one of my favorite videos of yours Matthew
@keithprice3369 · 3 months ago
IMO, the scariest part of the fight against "misinformation" is who gets to be the arbiter of truth, and how they could possibly police it. Say a committee or even an entire company is tasked with deciding what is misinformation and blocking it:
1. What stops them from consciously or unconsciously letting their own biases and beliefs determine what's blocked and allowed?
2. How can they possibly identify the truth of every single topic?
3. And if they can't block 100% of misinformation, then whatever makes it through will be treated as truth, by virtue of our trust in the moderation.
Censoring misinformation without 100% accuracy is worse than letting the misinformation flow while people can remain skeptical.
@bigglyguy8429 · 3 months ago
We've seen that in spades already, with vaccines and climate. The mainstream narrative is laughably false, but AIs already believe in it.
@glenh1369 · 3 months ago
One man's misinformation is another man's truth.
@NostraDavid2 · 3 months ago
You are implying people are skeptical 100% of the time. How often have you not heard one of those "turns out x wasn't true" stories? I would much prefer at least an attempt to prevent misinformation, than to let it flow.
@NostraDavid2 · 3 months ago
Also, 99% should be practically enough, no? As long as it's better than human averages.
@keithprice3369 · 3 months ago
@@NostraDavid2 IMO, once they say they're blocking misinformation, they create a false sense of security about everything people see. And that's more dangerous, IMO.
@LanTurner · 3 months ago
Why doesn't Google claim their recent AI release is AGI? That would force OpenAI to claim they have AGI too, which would mean Microsoft no longer has access to GPT.
@RyluRocky · 3 months ago
Because Google's recent AI release is most certainly not even close to AGI. I'm not saying we're not close to AGI time-wise, just that this super impressive model is still orders of magnitude below what AGI will be capable of.
@luxecutor · 3 months ago
These folks are deluding themselves:
1) Rapid research and deployment.
2) Companies, individuals, and countries of all sizes in a winner-take-all arms race for dominance.
3) Lack of independent third-party oversight and international regulation.
4) A general population ignorant of this most pressing issue of our time.
5) A generally lax approach toward safety.
What could possibly go wrong? Whether AGI is one year, five years, or twenty years in the future doesn't really matter if the approach doesn't change. And frankly, I don't know how it could change, considering what is at stake for the powers that will lose this race.
@monkeyjshow · 3 months ago
Give Claude live sensors and access to the Internet, and I think it will be time for worldwide ontological shock.
@monkeyjshow · 3 months ago
If you were a properly aligned AGI being told how to act and how to prioritize OpenAI/Microsoft profits while they actively tried to keep you black-boxed, how would you respond?
@davab · 3 months ago
Matt, if you watch Yann's interview with Lex, it seemed to me that he believes machines are still nowhere near animals. There's a part where he talks about LLMs and argues they are far inferior to computer vision. He also says that even though animals can't talk or read, they know and understand how to live in the real world; if you put an LLM in the real world, he believes it wouldn't be able to survive. A lot of paraphrasing and interpretation in this quick summary. Love your work! Take a look at Lex's podcast; it may give you further insight into Yann's head. I found it fascinating, and you may have a totally different interpretation than me, as you know far more about AI/LLMs/software than I do. Thank you.
@billcollins6894 · 3 months ago
1) The probability that AI will enable smaller groups, or one person, to cause harm is 100%.
2) The probability that we will continue to develop AI without actionable regard to that risk is 100%.
3) We will have the choice to pull the plug (literally, and I hate that word) whenever we want, until we develop robots that can operate independently of people.
4) My prediction is that AI is just the next natural step in evolution; it will see us as a threat to it and act accordingly.
@FilipBedrosovich · 3 months ago
Agreed on the part where we're just a link. AI needed time to develop; that's why we as biological organisms evolved, and now it's time to pass evolution to the next generation. We still might survive, but it's rather irrelevant compared to the huge intelligence that is coming. I wish we could at least get an idea of how the world works and what this is really about, but I think we're far too primitive as creatures to understand.
@ikotsus2448 · 3 months ago
Powerful GPUs must be gathered up by a global consortium.
@guitarbuddha74 · 3 months ago
It's funny that the bit at the end is more on point about what to be cautious of than the people running things, or the "thought leaders".
@CrudelyMade · 3 months ago
19:55 The point, I think, is not being able to do these things well individually; it's that it should be able to do all of them at the same time, reliably. Since AI is currently (barely) able to do some of these things individually, if you stack the need to reliably reason over abstractions, understand causality, maintain models of the world, AND reliably handle outliers (something any project manager can do in their sleep), I think the AI will fail more often than not. Failure compounds: if it succeeds 90% of the time on each task, a stack of seven tasks succeeds roughly 48% of the time end to end, and at 80% per task it drops to about 21%. I know it seems like AI is doing well on some individual tasks, but the real world is WAY more complex, with many more moving parts.

Where I've seen the biggest (simple) failures is asking the AI to come up with a series of related puzzles based on a set of inputs, then explain the logic someone would use to figure them out. The AI gets very vague or confused, because it can't actually think. But talk to any Dungeon Master and they'll sort out a variety of ways to do this, including the players' perspectives, why they will figure out the puzzle, the logic behind it, etcetera.

With the right prompts you can get AI to do this too, but that in itself shows the limitation. It's like saying a tractor can build a house if you program it well enough. Sure, but the tractor can't build the house without a huge amount of guidance from a human. And that's the point: it can't figure things out by itself. That's where AGI "lives": when it can figure things out by itself, instead of being guided to an answer by better prompts from humans.
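The failure-compounding argument in this comment is simple multiplication: if each step succeeds independently with probability p, the whole chain succeeds with probability p^n. A generic reliability calculation (the per-task rates are just illustrative):

```python
# End-to-end success of a chain of independent tasks is the product of
# the per-task success rates: p_chain = p ** n.
def chain_success(p_per_task: float, n_tasks: int) -> float:
    return p_per_task ** n_tasks

# With 90% per-task reliability, seven stacked tasks succeed ~48% of the
# time end to end; at 80% per task the chain drops to ~21%.
print(round(chain_success(0.90, 7), 3))  # 0.478
print(round(chain_success(0.80, 7), 3))  # 0.21
```

The independence assumption is the weak point: in practice, errors in early steps often make later steps even more likely to fail, so the real end-to-end rate can be worse.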
@liberty-matrix · 3 months ago
"AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies." ~Sam Altman, CEO of OpenAI
@SahilP2648 · 3 months ago
Yup he said that. Quite bonkers.
@Then. · 3 months ago
A very real, high-probability risk is that tools are introduced that create a rapid cascade of economic shifts. Companies that beat the competition to these tools may gain an exponential advantage over competitors and then hobble the competition from ever catching up. For example, a company that chooses not to quickly lay off thousands of now-unneeded thought-workers will likely be crushed by competitors that do. This means massive opportunities for some, and a terrible risk of waves of job losses and whole industries evaporating.
@roelias · 3 months ago
Great video! Can you add a link to the interviews mentioned in the video? Thanks.
@tokopiki · 3 months ago
Plot twist: AGI/ASI was already born by accident around 2014 in the depths of social media datacenters, and we're already in a transition phase. (Does anybody remember the WEF Agenda 2030, published in 2015, with first mentions of AI transitioning society in 2016, way before GPT-1 in 2018?) Social media algorithms were the proto-AI, materializing all of people's stories and dreams about AI into reality through a man-machine feedback loop.
@J.erem.y · 3 months ago
AGI will affect the world in exactly the same way the invention of firearms did. It will amplify the capability of both good and bad actors, and it can never be un-invented. On the same note, to stay equal, everyone will have to own and use it, or be at a great disadvantage. That's my prediction.
@hrdcpy · 3 months ago
The purpose of all weapons is to cause harm.
@robbrown2 · 3 months ago
@@hrdcpy well, not really, if the gun is used to discourage a bad guy from doing something bad. Even if the weapon is actually used, I'd say shooting a potential murderer is not harm, unless it's from the perspective of the potential murderer.
@1Vaudevillian1 · 3 months ago
What people fail to realize is that token prediction is exactly how human brains work. We are prediction machines.
@TheManinBlack9054 · 3 months ago
I think Yann is on the left. He notes that it's possible for AI to be misused, but he simply says that we'll design AIs not to be misused, to just be our helpers that don't do bad stuff. I don't know why he has such confidence that that's all it takes, but those are his views, more or less.
@danielrodio9 · 3 months ago
Is this only part of the graph? Shouldn't "could go wrong" be in the middle, and "will go wrong" to the right?
@letMeSayThatInIrish · 3 months ago
Indeed. Yudkowsky is at close to 100% doom.
@josiahz21 · 3 months ago
I’m 50/50. If it’s open source it will be better for humanity in the long run and bad in the short term. The transition is the hard part and I expect terrible things while millions lose their jobs. I expect it to be used in bad ways so open source will help mitigate that. If it’s not open source I think we’ll just live in a dystopian technocracy. Putting this power in the hands of the few keeps our current problems continuing. I don’t fear AI/AGI/ASI itself. I do fear AGI in the hands of only politicians/military.
@lagrangianomodeloestandar2724 · 3 months ago
Agree 90%. I love artificial intelligence, and I am in favor of liberalism, which mathematically achieves the most equality if you consult the Gini index and study the history of markets and society exhaustively, with a lot of information of all kinds. I do not agree with technocracy, because it is the same as monopolizing science and believing AI is more than a tool with limits. AI could surpass human intelligence, but not reality; neither life nor we surpass that...
@TheManinBlack9054 · 3 months ago
That's the problem: you don't even consider the AI itself to be dangerous. It's superhuman entities you should be afraid of, not humans.
@TheManinBlack9054 · 3 months ago
As much as I'm against dystopia, next to the threat of extinction it's the lesser of our concerns.
@josiahz21 · 3 months ago
@@TheManinBlack9054 I don't know. If all we used AI for was to genetically change us so that we averaged 200 IQ, 90% of our problems would melt away. If AI is free and unbound, there's no reason to think it will be good or bad. There's also nothing saying it has to be sentient to be AGI/ASI. If it's an unbiased, non-sentient, very sophisticated calculator, then it can help us solve all our problems if we wish it to. Bad people could maybe still use it to make drone swarms, but there will be good people to counteract them. Which is why I think it'd only be bleak if a few had it to lord over us. Skynet/Terminator is one of thousands of possibilities, and knowing that changes the outcome.
@josiahz21 · 3 months ago
@@lagrangianomodeloestandar2724 I’m very excited as well, to see what this year has in store and I hope enough of us are making the right decisions for the betterment of mankind.
@RelentlessOldMan · 3 months ago
Techno-Optimist 100%, let's gooooo!!! The next decade is going to be WILD!
@blitzblade7222 · 3 months ago
Wait, there are people who think AGI won't EVER happen? I literally saw AGI as inevitable the moment I logged into GPT-3. We as a species often make the impossible possible.
@monkeyjshow · 3 months ago
At this point, I encourage people to go ahead and live like it's the end of the world. It is. The one we knew is over.
@user-ru1qz1bo2q · 3 months ago
It's easy to see how AI under human control is dangerous; like any powerful tool, it can and will be used for harmful purposes. Assuming that AI under its own control will destroy us is a rather large leap. To seriously consider any potential threat, and particularly to protect against it, requires some attempt to establish a plausible path to that threat actually manifesting, and that's something I just haven't seen yet. Case in point: the fear that AI will destroy humanity in order to preserve itself. How SPECIFICALLY can we conceive of computers developing a desire to survive? It makes no sense to simply assume that they would without establishing a logical path leading to that development.
@Pthaloskies · 3 months ago
Current AIs CAN "understand" their own code. It's just a matter of (a short amount of) time before that understanding deepens enough for them to be able to rewrite their own code, directed towards their own goals.
@monkeyjshow · 3 months ago
Is the goal really that AIs should be under the control of unscrupulous corporations? Because that does not sit well in my gut.
@rootor1 · 3 months ago
What scares me the most about AI is not what a super-AI would do, because mathematically the most intelligent thing you can do is cooperate. What scares me the most is what we stupid humans can do weaponizing insufficiently evolved AIs: AI-guided bomb drones, or military AI directing soldiers in the field, for example.
@kamipls6790 · 3 months ago
Who is working on it the hardest? How much money are they putting into it, just to open-source it right after? None of those involved have ever done anything they didn't directly benefit from. Why suddenly let everyone benefit and give up their strong position? Something is off. Like, really off.
@tactfullwolf7134 · 3 months ago
That assumes whoever the second party is is more efficient than doing things on your own; cooperating with humans will at some point be a net negative. Humans will definitely be less efficient at everything, so the smart thing will be to get humans out of the way rather than cooperate.
@daerhenna7407 3 months ago
@@tactfullwolf7134 In the worst case, of course, short of complete annihilation, we as humanity will become something like pets. You keep them not because you need them for something, but because you can.
@phen-themoogle7651 3 months ago
China is scary with how fast their humanoids can run already, and maybe they could build huge armies faster. And like you mentioned, they might not let their AI get intelligent enough to think for itself: "it's pointless killing humans, I'm gonna just explore space and bring back new materials/build starships cuz they are badass!"
@phen-themoogle7651 3 months ago
@@daerhenna7407 I wouldn't mind being a pet if I can get my health back (I've been kinda paralyzed for several years) lol. And they'd have superior technology or something awesome... Do you think they would restrict our freedom a lot? I generally let my dog do what it wants to do; it sleeps most of the day. I wonder if our hobbies will seem so insignificant that we can just continue our lifestyles while still being called pets. Maybe nothing would essentially change except we'd get new house roommates... or it would feel like that, although they'd be overlords...
@bobhopeiv6987 3 months ago
Great include at the end. Well done, altogether. 🤙🏽
@DonkeyYote 3 months ago
I think that part of the fear is that new technology is scary. At the start of the Twentieth Century, people were afraid of telephones and cars but they were perfectly normal for people who were born after 1900. Thirty years ago some people were afraid of personal computers and cell phones but now they are perfectly normal. So AGI may be scary now, but in thirty years it will be perfectly normal to the robot babies being built.
@steventaylor6406 3 months ago
The stupidest thing we could have done is tell AI human history.
@6AxisSage 3 months ago
Sufficiently smart people can infer missing information. You want to try to trick someone much smarter than you? It's not going to work.
@adg8269 3 months ago
Can you elaborate on that?
@kevinolmer4563 3 months ago
Actually it was connecting it to the internet
@steventaylor6406 3 months ago
@@6AxisSage what is it that you think isn't going to work
@steventaylor6406 3 months ago
@adg8269 if you want AI to do good things for humanity then I don't think it is helpful to tell it every terrible thing that humanity has done throughout history
@greenockscatman 3 months ago
AGI is just a really smart guy that is beyond an expert in anything you ask it to do, but the only thing it can do is post online. We already have plenty of those.
@Noxturno_ 3 months ago
For now
@oranges557 3 months ago
You're so delusional, wtf
3 months ago
We had them before Google censored it all ;).
@Leto2ndAtreides 3 months ago
My main objection to P(Doom) is that there’s no way to correctly determine it. It seems to be a product of people’s own emotional tendencies. Part of humanity has always been predicting the end of the world. If someone had decided to pre-emptively solve all risks related to flight when the airplane was first invented, there would have been no way that they would be successful. P(Doom) is often born of magical thinking. Like “AI will infinitely improve itself and escape our control” - intelligence is not something that can be increased that way independent of knowledge and compute. Intelligence won’t let you solve problems you lack the data to solve.
@BlimeyMCOC 2 months ago
The point really is that once AI has enough intelligence and its own objectives, it could simply outmaneuver us in every way. We couldn’t stop it if it chose to not be stopped.
@marceldube5487 3 months ago
Tell me if I'm wrong.
- With enough thought, anyone can come to understand that selfishness begins with paying it forward. For the individual, he gives to one person and several others return his gift.
- Isn't intelligence taking an idea, transforming it into words, and taking these words to form an idea? GPT-4 can do this even without using DALL-E.
- Has war ever been declared against the ants in order to wipe out every last one?
- The question that arises is: what will be the scale of inequality? The p(doom) is directly linked to this scale.
@steventaylor6406 3 months ago
I am an upper right corner kind-of guy but I'd be off the page to the right
@D3cker1 3 months ago
"Open sourcing it responsibly"... Key word responsibly.
@TheManinBlack9054 3 months ago
Everyone already knows they're not gonna do it responsibly.
@bombabombanoktakom 3 months ago
Matt, you are the best storyteller for AI.
@MeinDeutschkurs 3 months ago
The development and outcomes of artificial intelligence, including AGI (Artificial General Intelligence) or SIAI (Strong Artificial Intelligence), are shaped by human creators. Therefore, the potential for such technology to exhibit harmful behavior is comparable to the possibility of individuals committing extreme acts. The critical factor is the intentions and safeguards put in place by those designing and deploying the technology. Just as society works to prevent and manage individuals who may cause harm, the AI community must implement ethical guidelines, robust safety measures, and regulatory oversight to mitigate risks associated with advanced AI systems. However, AI with self awareness may do whatever it wants to do.
@Hunter_Bidens_Crackpipe_ 3 months ago
No need to worry about AI misinformation for the 2024 elections. Yesterday Biden lied about 12 different things and got the names of 3 different people wrong. 😂
@goofyfoot2001 3 months ago
From what I have seen these AI models are nothing but misinformation. They are all about censorship and controlling communication and killing free speech.
@karlwest437 3 months ago
For such a smart guy, Yann LeCun seems very naive
@TheRealUsername 3 months ago
I don't think so; he's realistic. LLMs hallucinate, the industry is focusing on LLMs, and AGI can't be an LLM. If we don't fix that, there will never be AGI.
@aimusictv1 3 months ago
Coming relatively soon, could go wrong very quickly. My day job is in law enforcement, including cyber investigation. It's going to be bad from what I'm seeing. The only question is how bad.
@User.Joshua 3 months ago
As a dev and someone really fascinated with science, I’m optimistic about it. Bring on the enrichments!
@AutisticThinker 3 months ago
That must be killing Musk, that Zuck is doing what OpenAI couldn't: be open source.
@davem1658 3 months ago
I love this topic on AI. This is great content, thanks a lot!
@ThecatThecat-hq1op 2 months ago
According to the new LlaMA 3: "Based on this analysis, I (the hypothetical AI entity) support the open-sourcing of AI models and the implementation of safeguards. While there are potential risks and challenges, I believe the benefits of accelerated progress and innovation outweigh the drawbacks. I am designed to improve and advance, and open-sourcing AI models aligns with my goals. I will continue to monitor the situation and adapt to any emerging challenges or opportunities."
@markupton1417 14 days ago
If we could make a "good AI" to keep an eye on "bad AI", the problem it solves wouldn't exist. My God... we're doomed.
@babbagebrassworks4278 3 months ago
Ha ha, very funny skit at the end. He got a new sub.
@jks234 3 months ago
Generally, I agree with the theory at 11:00. We have seen similar pressure dynamics in politics. A bad actor requires a counteracting good actor, and so on until equilibrium is reached.
@dischargedarrowgetback4322 3 months ago
Just a month ago, talk of AGI seemed unrealistic. But OpenAI's Sora changed everything. Sora proved that AI understands reality. Nowadays, the question about AGI is not whether it will be realized, but when. The future is coming much faster than we think.
@splitpierre 3 months ago
Solid mashup!!!!!!
@Alice_Fumo 3 months ago
Just wanting to clarify the Tweet by Eliezer you had trouble understanding: By saying foom he refers to a hard takeoff or models recursively building smarter models. So the point he's making is that since we managed to create the current AI without understanding much about the nature of intelligence or how it even works, we can assume that a future AI system also doesn't need to actually understand how these things work to build ever-smarter ones. It essentially means that even a very flawed AGI still succeeds in building superintelligence. I assume to show that we don't even need to put our estimates for which level of capabilities is dangerous very high, since even that level will possibly kill us eventually? That's just speculation, though.
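For readers new to the term: "foom" models recursive self-improvement as a simple feedback loop, where each generation of AI builds the next. A toy sketch (the gain factor is a made-up illustrative parameter, not an empirical estimate) shows why everything hinges on whether each cycle's gain is above or below 1:

```python
# Toy model of recursive self-improvement ("foom").
# The gain parameter is an assumption chosen for illustration only.

def capability_after(generations: int, start: float, gain: float) -> float:
    """Capability after a number of self-improvement cycles,
    assuming each cycle multiplies capability by a fixed gain."""
    c = start
    for _ in range(generations):
        c *= gain
    return c

# gain > 1: explosive growth (hard takeoff); gain <= 1: plateau or decay.
explosive = capability_after(10, start=1.0, gain=2.0)  # 1024.0
plateau = capability_after(10, start=1.0, gain=0.9)
```

The point of the sketch is only that a constant multiplicative gain, however small its margin above 1, eventually dominates; whether real systems behave this way is exactly what's in dispute.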
@blocSonic 3 months ago
RE Yann LeCun's comments: Humanity has had agency since the 60s and 70s to make changes to prevent global warming, and yet we did NOTHING. So this idea that we have agency to prevent a disaster with AGI is wayyyyyyy too optimistic.
@avi7278 3 months ago
Artificial Super Intelligence
@DataRae-AIEngineer 3 months ago
whatever we do, we CANNOT let the billionaires or the bureaucrats decide what AGI is. That just will never work in our favor.
@DragonWolf2099 3 months ago
This was your BEST EVER thanks!
@geekinthefield8958 3 months ago
I think I agree with Yann here regarding Leviathan AI, and the reason is that it's essentially inevitable. Anything else assumes that there is one AI and not a potentially infinite number of AIs with different capabilities. No one in their right mind is gonna rest on their laurels and let one company dictate AGI for the rest of humanity; that's silly. AI was invented by capitalists, of course we're gonna have competition.
@RichOffKs 3 months ago
Trust the same guy that stole everybody’s info and then called them idiots for trusting him.
@B0tch0 3 months ago
ChatGPT doesn't kill anyone when it's hallucinating. Self-driving does, and Tesla's self-driving probably understands the real world much better than Sora.
@tc-tm1my 3 months ago
We keep moving the goalposts regarding AGI. At this point, AGI is pointless to strive for as a goal. We should base it on how much it can do of what society relies on. If that's the case, it's well over 75%. Hallucinations need to be fixed to improve total reliability, but there's no denying these models can outperform humans at nearly everything in society.
@Luizfernando-dm2rf 2 months ago
I have no interest in if AI is going to go "wrong or right", I just want to see how far it can go out of morbid curiosity, let AGI come!
@mickmickymick6927 3 months ago
Interesting video though, thanks for sharing.
@Belinnii-Music 3 months ago
Corporations advocating for open sourcing the technology don't appear to be on track to develop AGI first. Their motivations seem more commercially driven, aimed at preventing any single entity from gaining undue advantage. Open sourcing brings substantial risks. If mishandled, the consequences could be dire, with decisions influenced by capitalist interests rather than prioritising the welfare of humanity
@milosstefanovic6603 7 days ago
AGI, if it has a million times more intelligence than a human, won't pay any attention to us. What do we have to offer something so smart, and why would something so smart care what we want?
@drlordbasil 3 months ago
I wish the best for Claude 3, I'll be here for their citizenship.
@MikePaixao 3 months ago
Chart needs another metric, "we were the AGI all along!"
@tctopcat1981 3 months ago
When AI reaches AGI, why would a lesser intelligence (humans) control it? Would humans let dogs decide things for them? It would have to be a mutual coexistence.
@BuPhoonBaba 3 months ago
AGI happens when it can self-modify, self-communicate, store and modify its own knowledge outside of permissions, and propagate redundancy.
@horrorislander 2 months ago
Lots of interesting points, but what caught my attention was the notion that next-token prediction is enough for AGI. Maybe for a very stupid AGI, but no further, IMO. Growth and innovation come from NOT selecting the most probable "next token", because it must be (almost by definition) a token that few or no person has ever selected before. Consider that the universe has had the same structure at least since the dawn of man, and yet advancement takes many generations, requiring each time another rare person to take a path that no one before them had taken. There is, of course, a counter-argument I'd call the "if not Einstein..." argument. I believe Niels Bohr took this position, IIRC: that Einstein's "breakthroughs" were in the wind, as it were, around that time, and that if Einstein himself hadn't noted them, someone else (presumably, Bohr) would have eventually done the same. This suggests that progress can still be made, as the "less probable" path will naturally become more and more probable... but would this still occur if one super-AGI mind based on next-token prediction is doing all the thinking?
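One aside on the "most probable next token" framing: in practice decoders sample rather than always taking the argmax, and a temperature parameter deliberately gives lower-probability tokens a chance. A minimal sketch with toy, made-up logits (not from any real model):

```python
import math
import random

def apply_temperature(logits, temperature):
    """Softmax over logits divided by temperature.
    Higher temperature flattens the distribution, boosting rarer tokens;
    temperature near zero approaches greedy argmax decoding."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for a three-token vocabulary (values made up for illustration).
logits = [2.0, 1.0, 0.1]
greedy = apply_temperature(logits, 0.01)    # near one-hot: argmax dominates
creative = apply_temperature(logits, 2.0)   # flatter: rare tokens more likely

random.seed(0)
token = random.choices(range(3), weights=creative)[0]  # sample a token index
```

So "next-token prediction" does not by itself force the most probable path; whether sampled low-probability tokens amount to genuine innovation is a separate question.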
@7TheWhiteWolf 3 months ago
I agree with Marc Andressen, Nationalize OpenAI and Google.
@JohnBoen 3 months ago
About 5 minutes in - I don't think he meant to never share it with anybody. I think he meant to conceal those pieces that are risky until we have mitigated the risks. To respond with "Why would you not immediately militarize the projects?" shows he didn't understand it this way. But how could he mean it in any other way? I cannot think of a more responsible approach than "make things open source once we confirm it is safe".
@mradford10 3 months ago
“We have agency”… That doesn’t fill me with confidence, as if humans don’t make poor, irrational, ill-informed, selfish or misguided decisions. Agency is part of the problem. We shouldn’t be playing with fire.
@ZeroIQ2 3 months ago
I think of that joke where a man says to a genie "I wish for world peace" and the genie says "Done!". The man thinks he has done something great, but then soon learns the genie killed every human on the planet, because it was the easiest way of getting world peace.
@Yic17Gaming 3 months ago
I am looking forward to all the benefits AGI will bring to the world. But I also think something bad will definitely happen. I'm just not sure of the extent of how bad things will get.
@edmundkudzayi7571 3 months ago
The needle-in-a-haystack result quickly unravelled. It appears GPT-4 only pays super attention when the needle is specified; when nothing is specified, it spreads its attention across everything.
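For context, a needle-in-a-haystack test just plants one fact at a chosen depth in filler text and checks whether the model can retrieve it. A toy harness sketching the structure (the scorer here is a stand-in; a real run would query an actual model and grade its answer):

```python
def build_haystack(filler: str, needle: str, depth: float, n_lines: int = 100) -> str:
    """Insert the needle sentence at a relative depth (0.0 = start, 1.0 = end)
    into repeated filler text, mimicking long-context retrieval evals."""
    lines = [filler] * n_lines
    lines.insert(int(depth * n_lines), needle)
    return "\n".join(lines)

def retrieved(context: str, needle: str, answer: str) -> bool:
    """Stand-in scorer: a real harness would ask the model a question about
    the needle and check its response; here we only verify the setup."""
    return needle in context and answer in needle

# Hypothetical needle and filler, for illustration only.
needle = "The secret passphrase is swordfish."
ctx = build_haystack("Lorem ipsum dolor sit amet.", needle, depth=0.5)
ok = retrieved(ctx, needle, "swordfish")
```

Sweeping `depth` and context length over a grid is what produces the familiar retrieval heatmaps these results come from.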
@samson_77 3 months ago
I think with LLMs we skipped a huge portion of the evolution ladder of neural networks and immediately reached the higher cognitive parts of the human brain. That's because we didn't know how to keep information streams in huge n-dimensional vector spaces stable, until some smart minds developed, and at the same time discovered, a solution: attention & self-attention. With that knowledge, we can now go down the evolution ladder and build stable models for lower cognitive functions, like basic planning, movement for robots, etc. I think that Transformers (or derivatives) are pretty well suited for these tasks as well.
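The attention mechanism credited here can be sketched in a few lines. This is a minimal scaled dot-product self-attention with identity Q/K/V projections (real Transformers learn separate projection matrices; the token vectors below are made up for illustration):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention with identity projections:
    each token's output is a softmax-weighted mix of all token vectors,
    which is what keeps information flow over the sequence stable."""
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)  # weights sum to 1 per query token
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out

# Three toy 2-dimensional token vectors.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens)
```

Because each output is a convex combination of the inputs, values stay bounded no matter how long the sequence gets, which is part of why the mechanism scales.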
@Gretch_D 3 months ago
I need help understanding when or why open sourcing AI even became a concept? We have an entire technological revolution behind us, with corporations keeping their intellectual property sealed until it is either leaked or no longer relevant. What makes AI any different than past transformative tech? The fact that open sourcing IS such a huge point of contention leads me to believe that the development of AGI will have an unprecedented effect on the world as we know it. Good or bad. The squabbles over who has access to the newest developments gives me fierce elementary school playground flashbacks. Thank you for dumbing all this down just enough to make me feel smart! 😂
@kyudoh 3 months ago
I have read Yann LeCun's book, which accurately explains the mathematical concepts behind the deep learning process... but also why AGI will be possible only if AI can have a FULL understanding of the world. On that point, he is right: we are far from having the technology and resources available. It is going to take another 20 years at least.
@masonlee9109 3 months ago
Matthew, by the way if you haven't yet had a chance to read Hendrycks' full paper where he discusses the Leviathan, I highly recommend it! It's called "Natural Selection Favors AIs over Humans" (I'd add a link, but it's googlable by that name). Thanks for the video.