"Man Who Thought He'd Lost All Hope Loses Last Additional Bit Of Hope He Didn't Even Know He Still Had" LOL
@krakow10 · 6 months ago
The Onion doesn't miss
@tristenarctician6910 · 6 months ago
Gone into hope debt
@mikeuk1927 · 6 months ago
@@tristenarctician6910 Nah, there is still more hope to lose. Just let reality do its job :3
@Kenjuudo · 6 months ago
@@mikeuk1927 I don't think you necessarily want that.
@AtilaElari · 6 months ago
The horrible feeling of "Are you saying that _I_ am the most qualified person for the task? Are you saying that everyone else is even worse than I am?!". It is dreadful when the task in question is mundane. Its hard to comprehend when the implications of said task include the possibility of an extinction event. For what its worth, I think you are as qualified for this task as anyone can be in our circumstances. Go save the world! We are rooting for you! No pressure... Seriously though, seeing multiple comments where people say they went into AI safety as a career thanks to you shows that you ARE the right person for the job.
@buddatobi · 6 months ago
You can help too!
@krishp1104 · 6 months ago
This reminds me of the last two episodes of The Three Body Problem on Netflix lmao
@JorgetePanete · 6 months ago
it's*
@gabrote42 · 6 months ago
Absolutely true
@gavinjenkins899 · 6 months ago
I mean, by DEFINITION, whoever the most qualified person is has that feeling; that doesn't really change the "implications" for us in general
@Respectable_Username · 6 months ago
"Who the hell am I?" Well, you're a person with good reasoning skills who isn't tied to a corporate profit motive, who knows the topic very well, and who has been actively communicating about it for years! It can be intimidating being the subject matter expert for a really important topic, and it can weigh heavily on your shoulders, but you feel that weight because you _care_ . And what we need more than anything else is rational thinkers who have years of study in the topic who don't have a profit motive and who care. And you _won't_ get it right 100% of the time. But you've got the highest proficiency in this area in the current party, and so are more likely to roll above the DC than most others in this world! ❤
@clray123 · 6 months ago
Actually he is a tool with a much too high opinion of himself.
@imveryangryitsnotbutter · 6 months ago
@@clray123 Well you two should get along swimmingly then
@clray123 · 6 months ago
@@imveryangryitsnotbutter You are trying to insult me, but your attempt is not making any sense. Try again harder.
@inaim2 · 6 months ago
yes, mentor the ppl with potential and try to get involved with the growth of AI. we believe in you :)
@ivoryas1696 · 6 months ago
@@clray123 Why are you trying to insult _him_ is _my_ question? I mean... he pretty _clearly_ knows he doesn't know it all (otherwise, this comment wouldn't be responding to a _direct quote_), and is willing to learn to improve himself... What's the problem?
@WolfDGreyJr · 6 months ago
7:00 GPT-4 did not score in the 90th percentile on the bar exam. That figure is relative to test-takers who had already failed the bar at least once, and would be 48th percentile compared to the general test-taking population. GPT-4's answers were also not graded by people trained in scoring bar exams. For further info and methodological criticism, refer to Eric Martínez's paper "Re-evaluating GPT-4's bar exam performance" in AI and Law.
@gabrote42 · 6 months ago
Doing the good work right there. Have a bump
@Fs3i · 6 months ago
It still beats *half* of lawyers, roughly. Half of them!
@taragnor · 6 months ago
@@Fs3i It beats people at test taking, not practicing law. There's a difference. AI is naturally slanted towards test taking, because there's a lot of training data from previous tests and questions, so it can come loaded up with those answers already trained into it. It's the same with AI coders and their ability to pass coding interviews. The thing is that tests and the real world are very different things. Computers have been better at being databases than humans for a long time, so the fact that they can do information lookup isn't all that impressive.
@WolfDGreyJr · 6 months ago
@@Fs3i I should clarify something I misconstrued after editing: the 48th percentile figure refers to the essay section; in total, the exam score it was evaluated to have would be 69th percentile (nice), which is still barely passing. The population isn't lawyers, it's people trying to become lawyers, and about a third don't manage in the first place. This still puts us at beating half of lawyers, because maths, but I needed to correct myself before moving on to the bigger issues. Plus, when the reported essay performance specifically is scored against those who passed the exam, it comes out to a rather unimpressive 15th percentile score, without even questioning whether that score is a fair assessment. There are significant doubts to be had about the scoring methodology by which the exam score of 297 (still really good for an LLM) was arrived at. The answers were not graded according to NCBE guidelines or by people specifically trained in grading bar exams, which is especially problematic for the MPT and MEE parts, which are actually intended to test one's skill in practicing law or elucidating how the law would apply to a given set of circumstances.
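(Spelling out the "because maths" step as a rough back-of-envelope, using the roughly one-third fail rate mentioned above: passers occupy percentiles 33-100 of all test-takers, so a 69th-percentile overall score sits at

    (69 − 33) / (100 − 33) ≈ 54%

among those who pass, i.e. roughly the median of the people who go on to become lawyers.)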
@badabing3391 · 6 months ago
@@WolfDGreyJr I now wonder what the exact details behind statements like various LLMs doing well on graduate-level physics exams and contest-level mathematics actually are
@RationalAnimations · 6 months ago
WE ARE SO BACK
@WoolyCow · 6 months ago
not me being like 'hey that voice kinda sounds familiar... oh it's the AI guy, wait doesn't he do stuff with [checks comments] yeah that makes more sense'
@En1Gm4A · 6 months ago
As for AI alignment, my dream is a neurosymbolic task execution system with a knowledge graph, plus a visualization for the user of the suggested problem-solving task with alternatives to choose from. Human in the driver's seat. Misuse management by eliminating risky topics from the knowledge graph.
@acethirtysix8378 · 6 months ago
*finishes watching video* It's so over!
@AtomosNucleous · 6 months ago
Proposal: cut the "random assignment team" part into a short-format video. It could go viral and bring more attention to this channel and its topics
@NicholasWilliams-uk9xu · 6 months ago
Doesn't seem to remotely care about personal data harvesting, or YouTube and its influencer trolls using it to harass individuals and leverage it for psyops.
@XIIchiron78 · 6 months ago
"maybe I can hide my misunderstanding of moral philosophy behind my misunderstanding of physics" lmao
@XIIchiron78 · 6 months ago
I have seen a confusing number of people unironically hold the view that "humans should be replaced because AI will be better". At what??
@kirktown2046 · 6 months ago
@@XIIchiron78 Starcraft. What else matters?
@SianaGearz · 6 months ago
I'd love to be able to understand your comment but I'm struggling. Any help? Edit: oh, the post-it at 19:39, it wasn't legible when I first watched.
@XIIchiron78 · 6 months ago
@@SianaGearz it was a little joke he put in during the Overton window segment
@xXCindellaXx · 6 months ago
@@XIIchiron78 impact on nature maybe
@Badspot · 6 months ago
LLMs are particularly good at telling lies because that's how we train them. The RLHF step isn't gauged against truth, it's gauged against "can you convince this person to approve your response". Sometimes that involves being correct, sometimes it involves sycophancy, sometimes it involves outright falsehood - but it needs to convince a human evaluator before it can move from dev to production. The "AI could talk its way out of confinement" scenario has moved from a toy scenario that no one took seriously to standard operating procedure, and no one even noticed.
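(A minimal Python sketch of the preference step Badspot is describing; `reward_model` and the pairwise labels are hypothetical stand-ins, not any particular lab's API. The point is that the training signal is "which answer did the rater approve of", with no term anywhere for truthfulness.)

    import math

    def preference_loss(reward_model, prompt, chosen, rejected):
        # Bradley-Terry style pairwise objective used in RLHF reward modelling:
        # raise the score of the answer the human approved ("chosen") above
        # the one they rejected. "Chosen" means "convinced the rater",
        # not "was factually correct".
        margin = reward_model(prompt, chosen) - reward_model(prompt, rejected)
        return math.log1p(math.exp(-margin))  # equals -log(sigmoid(margin))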
@mindrages · 6 months ago
Your second sentence is quotably spot-on.
@devoNo2good · 6 months ago
This
@NicholasWilliams-uk9xu · 6 months ago
Personal data harvesting, and YouTube and its influencer trolls using it to harass individuals and leverage it for psyops.
@ChristianIce · 6 months ago
An LLM can just be coincidentally right or wrong; it can't "lie". It doesn't know what the words mean, it repeats words like a parrot.
@lwmburu5 · 6 months ago
@@ChristianIce the stochastic parrot model is disfavored by mech interp
@Gaswafers · 6 months ago
Suffering from "fringe researcher in a Hollywood disaster movie" syndrome.
@MetsuryuVids · 6 months ago
Don't look up.
@flickwtchr · 6 months ago
Do you also refer to Connor Leahy as having a "Messiah complex"? Why is it that so many AI bros go straight to the ad hominem attack rather than engaging with the arguments?
@D_Cragoon · 6 months ago
@@endlessvoid7952 This video includes an example of an AI, even in its current state, being able to formulate a strategy that involved misleading a human. That's what AI can already do. Many common objections to taking AI safety seriously are addressed in this other video on this channel: m.kzbin.info/www/bejne/b5qUiJ-ZeNqXprc
@avidrucker · 6 months ago
I thought this was a compliment and a sober statement of the gravity of our reality 😅
@D_Cragoon · 6 months ago
@@endlessvoid7952 Not sure why YouTube keeps deleting my comment. Anyway, this video includes an example of an AI having a plan that involved misleading a human. On this channel there is a video about 10 reasons people don't think we should be worried about AI, and counterarguments to them (I won't link cos I think that makes YouTube delete my comment).
@johannesdolch · 6 months ago
"The cat thinks that it is in the Box, since that it is where it is." "The Box and the Basket think nothing because they are not sentient." Wow. That is some reasoning.
@Mrpersonman0 · 6 months ago
It's entirely accurate, I agree.
@Alorand · 6 months ago
The problem with fixing the AI alignment problem is that we are already dealing with government and corporate alignment problems... and those governments and corporations are accelerating the integration of AI into their internal structures.
@ZappyOh · 6 months ago
Yes... all the money goes toward aligning AI to governments and corporations. It is hard to envision that as anything other than extreme dystopia :(
@Frommerman · 6 months ago
The way I put it is that we know for a fact misaligned AI will kill us all, because we've already created one and it is currently doing that. It's called capitalism, and it does all the things the people in this community have been predicting malign superintelligences would do. Has been doing them for centuries. And it's not going to stop until it IS STOPPED.
@Shabazza84 · 6 months ago
Can't happen in my country yet. They're still figuring out how to "govern" without using a fax machine, and how to introduce ways to actually use the electronic ID you got 14 years ago.
@cortanathelawless1848 · 6 months ago
I mean, Israel is literally using AI to kill enemy combatants in their family homes
@NitFlickwick · 6 months ago
Just remember, Rob, Y2K was mocked as the disaster that didn't happen. But it didn't happen because a lot of people realized how big of a deal it was and fixed it before the end of 1999. I really hope we are able to do the same thing with AI safety!
@wojtek4p4 · 6 months ago
The scary thing to me: with Y2K, almost all people wanted it not to happen. But with -Y2K2 electric boogaloo- AGI risks, there are some people (Three Letter Agencies, companies, and independents) who want AI threats to happen, but controllably. That means that instead of all of the effort focusing on mitigating the issue, we're fumbling around implementing random measures in the hope they help - while those groups focus on making sure we're not doing that _to them._
@duytdl · 6 months ago
Add the Ozone disaster to that list too. We barely got away with it. If it had happened today, dollars to donuts we'd never have been able to convince enough people to ditch even their hairsprays. The Internet (social media particularly) has done more harm than good.
@ChrisBigBad · 6 months ago
"If our hygiene measures work, we will not get sick and the measures taken will look like they were never necessary in the first place."
@XenoCrimson-uv8uz · 6 months ago
@@duytdl I disagree with that. Without the internet I wouldn't have known climate change was real, because everyone's attitude was normal and not panicking
@GabrielPettier · 6 months ago
I'm pretty sure it's one of the things he hints at at 44:15
@nastrimarcello · 6 months ago
Autism for the win 20:40
@caleblarson6925 · 6 months ago
Hey Rob! I just wanted you to know that I've been watching your videos for several years (all the way back to the stop button problem). This year you inspired me to apply to MATS to finally get involved in AI safety research directly, to which I've now been accepted! Thanks for making these videos over the years; you've definitely influenced at least one more mind to work on solving these extremely important problems.
@alexbistagne1713 · 6 months ago
Congrats!
@deifiedtitan · 6 months ago
That's great, well done
@NicholasWilliams-uk9xu · 6 months ago
If he actually wanted people to speak out, he wouldn't have said "autism" and then split off into useless skinny nerdery talk. (He doesn't care; he sucks up to YouTube for a paycheck, and harvests personal data and intellectual property for his content) btw (your intellectual property and personal data).
@carrotylemons1190 · 6 months ago
The Noita death noise made this even more terrifying than it already was
@SaffronMilkChap · 6 months ago
Thank you - it was tickling my brain and I couldn't place it.
@MetallicMutalisk · 6 months ago
I noticed that too lol
@Brunoenribeiro · 6 months ago
I thought it was a modification of the Majora's Mask noise. Maybe Noita took some inspiration?
@huhabab · 6 months ago
I'm so conditioned to that sound that I felt rage and disappointment in myself as soon as it played. Noita ruins lives.
@kriskolish6423 · 6 months ago
AGI Extinction = Skill Issue
@arnom1885 · 4 months ago
We've got AI making art and writing poetry, and people with triple jobs unable to afford rent or healthcare. Like global warming, it's not "we", "us" who are responsible for developments like these. It is a couple of thousand old white men and their multinational corporations. They will not be stopped, because they think they are 'better' and they have the need to syphon even more resources and money to themselves. It would require an effort and unanimity from politicians all around the world which we've never seen before to call this development to a halt. Basically it means ending late-stage capitalism. So, well... yeah... (disclaimer: 50+ male, white and from Europe)
@NitFlickwick · 6 months ago
First, the cat joke. Then the "depends on autism" Overton window joke. Glad to have you back! - Signed, "a guy who is happy to ignore (ok, doesn't even see) the Overton window"
@gavinjenkins899 · 6 months ago
You can also just be privileged/arrogant instead of autistic. It's your chance to put those things to good use! Like, at my current job, I know full well I can easily get another job, with my education and background, so I don't care at all about just slamming home brusque comments that are clearly true.
@kaitlyn__L · 6 months ago
@@gavinjenkins899 that's part of the thing though, isn't it? In everyone else, it requires that kind of personality. That's part of why us autistic people often get called arrogant when we're trying to help others!
@ejayAD · 6 months ago
Great Rob, love this, thank you!
@MrDoboz · 6 months ago
also Elon jumping at the chance to change planets lol
@AtomicVertigo_Comics · 6 months ago
@@kaitlyn__L so true!
@LadyTink · 6 months ago
I noticed, obviously. When your fav AI safety channel disappears right when AI safety suddenly seems the most important thing xD
@Kenionatus · 6 months ago
In today's news: dozens of influential AI safety researchers and journalists killed in series of plane crashes
@Eddie-th8ei · 6 months ago
"just when the world needed him the most, he stopped uploading to his YouTube channel"
@someguycalledcerberus9805 · 6 months ago
I had been wondering if he was busy because he's working on one of the teams and simply doesn't have time, or signed an NDA.
@darkzeroprojects4245 · 6 months ago
@@Kenionatus I'd not be surprised if it came true.
@RoulDukeGonzo · 6 months ago
I honestly thought he looked at GPT output being charming and was like, oh, I guess I was wrong
@reverse_engineered · 6 months ago
Thank you Robert. I understand how difficult this must be for you. Imposter syndrome is very real, and anyone with the knowledge and self-awareness you have would be well served by being cautious and skeptical of your ability to answer the question properly. But as far-fetched as it may seem, you have all the right qualities to help: you are very knowledgeable about the area, you carefully consider your words and actions in an attempt to do as little harm as possible, and you are a recognizable and influential person within the public sphere. We need you and more people like you to be strong influencers on perception, awareness, and policy making. For anyone working in AI safety, alignment is the clear problem, and we already know how governments' and corporations' alignments often prioritize their own success over the good of society. Folks like Sam Altman can sign all the open letters they want, but their actions show that they still want to race to be the first and treat safety as a distant third priority. I think the only hope we have is that enough emphasis is put into research and policy that we can figure out safety before the corporations figure out AGI. There is no way we are going to get them to stop or even slow down much, since that directly opposes their shareholders' interests. Policy and law aren't going to stop them; we have seen that numerous times throughout history and in many areas even today. Perhaps people on the inside could choose to defect or prioritize safety over advancement, but there are too many people excited to make AGI; all of the cautious folks who would blockade or walk out in order to stop things would quickly be replaced by those who are too excited to care. What we need is knowledgeable and influential people making their way into the higher ranks of these corporations. We need people with real decision-making power to be there to make decisions that better align with the good of society and not just short-term profit seeking. People like you. Godspeed, sir, and thank you for stepping up to help save humanity from itself.
@UnPetitPoulet · 6 months ago
5:24 Is this the death sound from the game Noita? In Noita, players kill themselves a LOT while trying to play god, combining dangerous (and often really dumb) spell combinations to make their magic wand as f*ing powerful and game-breaking as possible. Now I can't help but see AI as a wand we are collectively tinkering with and testing randomly. What could go wrong? Spoiler: I had a run where I cast infinite spontaneous explosions that spawned on my enemies. At one point I ran out of enemies, so it relocated onto my character... Funniest shit, I'll do it again
@Frommerman · 6 months ago
Lmao I literally just finished the sun quest for the first time. Nice to see a fellow Minä.
@awadafuk4863 · 6 months ago
It definitely is. Had me shouting at my phone in Finnish 😤😤
@cameronforester8413 · 6 months ago
Homing rocks are pacifist 🪨 ✌️
@cerocero2817 · 6 months ago
After all, why not? Why shouldn't I put a larpa on my rapid-fire nuke wand?
@x11tech45 · 6 months ago
35:14 "OpenAI's superalignment team (that was dissolved) seems promising and deserves its own video" - combined with the horn blow and the visual stimulus ("nevermind"), this made the spoken words difficult to understand. Thankfully, closed captioning was available.
@JB52520 · 6 months ago
I think that was intentional, since the words became irrelevant. Anyone just listening might have heard outdated information without getting the joke.
@x11tech45 · 6 months ago
@@JB52520 oh, I got the joke once I read the words in closed captioning... but the horn stopped me from even hearing the joke.
@drone_video9849 · 4 months ago
Robert, not sure if you will see this, but I was the one who spoke to you at the train station two weeks ago (leaving out the station name and city on purpose) - just wanted to say thanks for sharing your time despite rushing to your meeting. You were very pleasant and generous with your time. Great content also! Looking forward to getting home and catching up on the last few weeks of videos I have missed while on the road.
@genegray9895 · 6 months ago
1:15 No no. We noticed
@arbitool · 6 months ago
True
@mellowsign · 6 months ago
We cared.
@ClaimClam · 6 months ago
i didnt
@LeoCage · 6 months ago
I definitely noticed, but I figured he was an actual expert and busy.
@adfaklsdjf · 6 months ago
I was sad
@pafnutiytheartist · 6 months ago
The problem both sides of the argument seem to be mostly dismissing is economic: we will almost certainly create systems that can automate a large enough percentage of human labor before we create any superintelligent agents posing existential risks. This will lead to unemployment, inequality, and a small number of people reaping most of the benefits of the new technology. OpenAI was a nonprofit organization with the aim of benefiting humanity in general, until they achieved some success in their goals and restructured into a company optimizing shareholder income.
@juliahenriques210 · 6 months ago
The main overlap is that the same economic pressures that drive the obsolescence of jobs, the coercive laws of competition, also drive the premature deployment of deficient AI systems to control tasks they're not yet ready for. The current example is autonomous vehicles, which still have a hard time functioning outside standard parameters, and thus have been documented to... run over people. On a larger scale, a limited AI system can already do lethal harm when put in charge of, say, an electrical grid, or a drone swarm. It's the same root cause leading to different problems.
@arthurdefreitaseprecht2648 · 6 months ago
Very very well said, up!
@evrimagaci · 6 months ago
It's good to see you back, Robert. This video confirms what I've been seeing in the field too: things are changing, drastically. Even those who were much more reserved about how AI will change our lives seem to have changed their points of view. By that I mean: if you compare how "The Future of AI" was talked about a mere 1.5 years ago vs. today, the difference among the scientists who know the field is drastic. I am not saying this to take a stab at the community; I think it is honorable to adapt to the landscape as it changes without our control. It just signals that AI (and its safety) is actually way more important than what has been portrayed to the public in the past couple of decades. We need to talk more about it, and we need to underestimate it much less.
@gertal2764 · 3 days ago
I agree
@fritt_wastaken · 6 months ago
"Sh*t's getting real" > Noita death sound is playing. Yeah, I feel you
@Zicore47 · 6 months ago
That's funny, because I'm playing Noita while watching this...
@selectionn · 6 months ago
dying to fire and getting noita'd sounds more dangerous than AI
@Marquis-Sade · 6 months ago
@@Zicore47 What is Noita?
@rehenaziasmen4603 · 6 months ago
@@Marquis-Sade It's a 2D pixelated game of magic and alchemy and lots of dying
@Marquis-Sade · 5 months ago
@@rehenaziasmen4603 Lots of dying? Sounds dark
@zoggoth · 6 months ago
39:11 I appreciate the joke of saying that companies have to follow EU law while showing a pop-up that *still* doesn't follow EU law
@Nulley0 · 6 months ago
Even the camera lost its focus.
@Hexanitrobenzene · 6 months ago
Doesn't follow?
@zoggoth · 6 months ago
@@Hexanitrobenzene The one I was thinking of was that you can't emphasise "I agree" to get people to click on it, but I'm not 100% sure that's the rule in every EU country. However, basically every version of that pop-up breaks "You must [...] Make it as easy for users to withdraw their consent as it was for them to give their consent in the first place." (from gdpr eu, so definitely EU-wide). But who knows, maybe that website gives you a pop-up to delete your cookies too!
@JulianDanzerHAL9001 · 1 month ago
13:05 and that's why testing AI on riddles generally includes rephrasing them completely, because standard sets of riddles might be in the training data
@drkalamity4518 · 6 months ago
20:35 legit had me rollin, nice
@pegatrisedmice · 6 months ago
😂
@TheOmzee · 6 months ago
same lmao. The ironic thing is that I failed the theory of mind test: I legit thought Sally would go to the box first, before I thought more about it. T_T
@KelseyHigham · 6 months ago
ahahaha
@FoxtrotYouniform · 6 months ago
I posit that the reason AI Safety has taken so long to hit the mainstream is that it forces us to confront the uncomfortable reality that nobody is in charge, there are no grand values in governance, and even the most individually powerful among us have no idea what really makes the world tick day to day. Confronting AI Safety, which could be reworded as Superorganism Safety, makes us realize that we have yet to solve the alignment problem even in our governments and other human-staffed organizations like corporations, churches, charities, etc. The powers that be have very little incentive in that context to push forward in the arena of AI Safety, because it inherently raises the question of Superorganism Safety, which includes those organizations, and thus puts them at the forefront of the same "is this really what we want from these things" question.
@NealHoltschulte · 6 months ago
How do I upvote this twice?
@Sal1981 · 6 months ago
AI alignment is probably more about human alignment.
@tristan7216 · 6 months ago
"what we want from these things" - there is no "we" any more; maybe there never was. There's a bunch of people who don't like or trust each other but happen to be geographically co-located. This is the fundamental alignment problem no matter what you're trying to align. Maybe they could align governments and AIs in Finland or Japan, I don't know. Maybe I'm just pessimistic because I'm in the US.
@Hexanitrobenzene · 6 months ago
You raise a good point, but I don't think it's the main reason at all. Only for the philosophically oriented, maybe. The problem is that most people are practically oriented and consider things only when they confront them.
@elfpi55-bigB0O85 · 6 months ago
"there are no grand values in governance" That's not true. Capitalism, colonial management and the expectations of a society derived from White Supremacist economic theory. There you go.
@martincollins6632 · 2 months ago
Reminds me of that scene in Armageddon where Harry Stamper (the oil driller) asks Dan Truman (the NASA guy): "What is your backup plan?" And the reply is: there is no backup plan. Best of luck, Mr Miles.
@CopingwithAI · 6 months ago
"Admittedly, this particular researcher has a pretty poor track record predicting this kind of thing." I died 😂
@-Rook- · 6 months ago
That's pretty much everybody though!
@hellfiresiayan · 6 months ago
@@-Rook- Yann is uniquely bad tho lol
@Sal1981 · 6 months ago
@@hellfiresiayan The reason being he has this view of human faculties as being special. We're basically just pattern prediction machines, with added reasoning lodged in our prefrontal cortex. AGI systems would, for instance, not be fooled by optical illusions.
@darkzeroprojects4245 · 6 months ago
"pattern prediction machines" Don't like people comparing people to machines.
@clintonbehrends4659 · 6 months ago
@@darkzeroprojects4245 But that's how biology works though: it's a cascade of chemical and electrical systems optimized by the environment to survive and reproduce. Now, that's not to say it's alright to justify genocide on the basis of "oh, humans are just pattern recognition machines", but I would say nothing, or at least something so infinitesimally small as to be negligible, is a good justification for dehumanization. (P.S. I wonder if we'll eventually have to change the term "dehumanization" to be more encompassing of things other than humans.)
@maxwinga839 · 6 months ago
Hey Rob, I just finished watching this video with tears streaming down my face. Watching your transition from casual YouTuber talking about fun thought experiments to stepping up as they materialize into reality was incredibly moving. What hit me especially was the way in which you captured the internal conflict around being ahead of the Overton window on AI risk. While I may be just some random person on the internet, I want you to know that you've had a serious impact on my life and are one of the catalysts for my career shift into AI safety, and I deeply appreciate you for that. I was midway through my Bachelor's degree in Physics at the University of Illinois (2020-24) when Midjourney and ChatGPT were released in 2022. As a physicist, learning about AI from a mathematical perspective was fascinating, and seeing the unbelievable results (that seem so unnervingly mainstream now) really hammered home how impactful AI would be. I started to have some concerns as I learned more, and eventually stumbled upon your channel in December 2022, quickly binging all of your videos and internalizing the true scale of the danger we face as a species. Shortly after, GPT-4 was released while I was staying on campus over spring break with a close friend. I remember distinctly the true pit of existential dread I felt in my stomach as I read the technical report and realized that this was no longer some abstract future problem. Since then, I added a computer science minor to my degree and used it to take every upper-level course on AI and specifically two on trustworthy AI, including a graduate course as a capstone project. I'm now going to be interviewing at Conjecture AI soon, with the goal of contributing to solving the alignment problem. I've missed your videos over the last year, and often wondered what you were up to (Rational Animations is great btw!). During this last year I've felt so many of the same conflicts and struggles that you articulate here. I've felt sadness seeing children frolicking with no idea of what is coming, I've been the one to bear the news about the immense dangers we're facing to those close to me, and I've struggled with believing these things intellectually while the world still seems much the same mundane place around me. Hearing you put these thoughts to words and seeing the same struggle reflected elsewhere means a lot to me, and I'm incredibly grateful to you for that. Your rousing speech at the end really moved me and was an important reminder that no matter how lost our cause may feel as yet more bad news rolls in, the only way for our species to prevail is for us to be willing to stand up and fight for a better world. I don't know where my future will lead just yet, but my life's work will be fighting for humanity until the bitter end. Thank you for everything Rob.
@flickwtchr · 6 months ago
What a great comment, and good luck with your interview at Conjecture. Connor Leahy and Rob Miles are my top favorite thinkers/voices regarding AI safety/alignment issues.
@tonyduncan9852 · 6 months ago
That's Life, as expressed in the present, made available to all. It should be quite useful, one would think. Causality is inexorable, so hold on to your hat. Best wishes.
@cemacmillan · 6 months ago
Great to see another person describe the personal side of witnessing and coming to understand an emerging problem, and saying: "I'm going to drop what I am doing, retool myself and change the center of what I am doing for reasons other than mammon and the almighty currency unit." As Rob demonstrates in the video, there is paltry funding for research into AI safety in all of its subdomains, and a correspondingly small number of people actively working on the problem, against the enormous problem space presented by circumstances. We are living, after all, in a world where a fairly small elite with disproportionate influence in a super-heated segment of the economy are optimizing for a different goal: crafting the _successful_ model in a free-market economy, a target very different from safety, as the histories of automation and scaled, process-modeled industry optimizing return on investment show us. I'll stop there as I mean to be encouraging. :) Smart thinking, collaboration and effort remain our best tools to confront the challenge by asymmetric means.
@gavinjenkins899 · 6 months ago
This is too eloquently written, I'm actually concerned it is ChatGPT lol
@tonyduncan9852 · 6 months ago
@@gavinjenkins899 You should be concerned that you might be the same. Or something.
@juliusapriadi · 6 months ago
Don't ever worry when your government asks you for help. Any wise decision should involve an expert panel, to safeguard against individual biases and errors. So you're expected to make mistakes, and that's fine.
@eldarad · 6 months ago
04:26 I just enjoy thinking about the day Robert set up his camera and was like... "right, I'm now going to film myself looking deep in thought for one minute"
@SeamusCameron · 6 months ago
The whiplash of LLMs being bumbling hallucination machines a lot of the time, while also showing surprising moments of lucidity and capability, has been the worst part. It's hard to take a potential existential threat seriously when you keep catching it trying to put its metaphorical pants on backwards.
@flickwtchr · 6 months ago
Over and over and over again, people like Rob Miles, Connor Leahy, Geoffrey Hinton and others have repeated that they don't believe the current most advanced LLMs pose an existential threat. They do, however, point to the coming AGI/ASI in that regard.
@ClaimClam · 6 months ago
@@flickwtchr advanced AI will SAVE lives, people that stand in the way are guilty of murder
@ekki1993 · 6 months ago
It's always hard to be reasonable about small chances of extreme risks, because humans are intrinsically incapable of properly gauging that. It's why casinos exist.
@DeruwynArchmage · 6 months ago
@@flickwtchr you're absolutely right. And so is @SeamusCameron (and many other commenters here). But it doesn't matter. In some ways, the very thing that Seamus pointed out is precisely the problem. It was powerful enough to get everyone's attention. The people who really understood got very concerned. But people paid attention... and they saw it occasionally "putting its pants on backwards". They didn't draw the conclusion, "Holy crap! It's getting more powerful really fast. This is the stupidest they'll ever be. Soon (single digit years), a future one may be smarter than any of us, or all of us put together. That has the chance to turn out really bad." Most didn't even think, "Wow! I see where this is going. It really might take almost everyone's jobs!" They thought, "El oh El! Look how dumb it is! I saw people talking about this one way that will make it look dumb every time. And oh look, there's another. I can't believe they made me worry for a moment. Clearly, all of these people are crazy and watched too much SciFi. If there was a real problem, then the government and the big corps would be doing something about it. It'd be in the news all the time. Even if I ever thought things could go bad, it's easy to let it slide to the back of my mind and just live my life. Surely nothing bad could *really* happen." Maybe that's not everyone, but I hear it enough, or just see the apathy, that I'm pretty convinced most people aren't taking it seriously. If it were foreigners who had banded together and were marching towards our country with the stated plan of working for essentially nothing, we'd be freaking the **** out. If we knew aliens were on their way and had told us they'd blow us all up, and the governments all said, "Gee, we think no matter what we do, we're almost certainly going to lose.", people would be losing their minds. But we're not. We're hubristic. I can't say how many people have said machines can't be smarter. Or argued how they don't have a soul (as if that would make any difference, even if souls were a thing). And we don't like thinking about really bad things. That's why religion is such a thing. People are scared of dying. So we dress it up. We try not to think about it. We find ways to cope. And that's just thinking about our own personal mortality. It's almost impossible to truly wrap your mind around *everyone* dying. It's hard to truly feel the gravity of real people dying by the 10s of thousands right now because it's halfway around the world. It seems so distant. So abstract. And it's happening. Right this second. You can watch the videos. The only way I can even approach coming to grips with it is thinking about the people I love being impacted by it (whether it's merely their careers or their very lives). It's a hard thing. I know how Rob feels. I've got some ideas that might work (mechanistic interpretability stuff), and it's hard for me to even pursue them.
@gavinjenkins899 · 6 months ago
I don't think LLMs are EVER a threat; however, they've already moved on from LLMs. Like he mentioned, the new "Chat" GPT is cross-trained on images as well. So it's not an LLM. So we aren't protected by the limitations of how smart you can get by reading books alone. If you can get books, pictures, videos, touch, sound, whatever, then there's no obvious limit anymore.
@prestonshort6324 · 9 days ago
I remember when my dad said, "Son, you are unique... just like everyone else." Robert, stay in the fight. Your voice may be the one that lengthens the pause that extends humanity.
@naptime_riot · 6 months ago
I started watching your videos years ago, and you're the person I trust the most with these questions. I absolutely noticed you disappeared. This is not some parasocial BS, just the truth. You should post as much as you want, but know that your voice is wanted and needed now.
@ZevIsert · 6 months ago
Attempting to finish the sentence (I think intentionally) left in that cut following 20:30: it'd be "The ability of our society to respond to such things basically depends on aut[ism existing in our species, so that these kinds of things are more often said out loud]." Which, if that's actually what Rob said in that cut, would be a really beautiful easter egg in this video. Edit: "can be said" -> "are more often said".
@DevinDTV · 6 months ago
This is certainly a virtue of autism, but it lets non-autistic people off the hook too much, imo. You don't have to have autism to reject conformity in favor of rationality. Conforming with an irrational belief or behavior is actually self-destructive, and people only do it out of an avoidance of discomfort.
@singularityscan · 6 months ago
I am autistic, and the need to inform a group so the collective knows all the facts is a strong urge and motivation. As is being wrong or corrected by the group; it's not an attack on me, it's just me getting the same info as the group.
@anthonybailey4530 · 6 months ago
It's truly a spectrum. "If you know one autistic person, you know one autistic person" etc. But the insight holds, and I loved the joke. More generally, huge ❤ for the whole video.
@pierrebilley276 · 6 months ago
Guys, don't forget to watch the video, not just listen!
@coltenh581 · 6 months ago
That scene around the "Community" table was so great. Awesome work.
@MeppyMan · 6 months ago
My big concern is there is a lot of marketing BS in the field, and it's being used to ignore more pressing problems that we know are going to happen and that are a risk to humanity.
@MeppyMan · 6 months ago
Also 20:40 lol. And yes please to the video on AGI definitions.
@whatisrokosbasilisk80 · 6 months ago
Ironically, including AI Safety itself.
@flickwtchr · 6 months ago
The "marketing BS in the field" indeed detracts from people taking seriously the risks of the coming AGI/ASI systems. But I don't think that was what you were getting at.
@ClaimClam · 6 months ago
Yes, these AI scare announcements are just about tech firms hyping investments, and getting government to add barriers for the competition. AGI will SAVE lives, impeding it is criminal.
@MeppyMan · 6 months ago
I guess my position is that we should take it seriously and plan accordingly, but not at the expense of focusing on things like climate change and political instability, etc. It's so hard to predict what is going to happen with tech progress. What we plan for now might be completely irrelevant with whatever comes next.
@DarkestMirrored · 6 months ago
I actually have a pair of questions I'm curious to see your take on. 1.) Is any serious work on AI alignment considering the possibility that we can't solve it for the same reason that /human/ alignment is an unsolved problem? We can't seem to reliably raise kids that do exactly what their parents want, either. Or even reliably instill "societal values" in people over their whole lives, for that matter. 2.) What do you say to the viewpoint that these big open letters and such warning about the risks of AI are, effectively, just marketing fluff? That companies like OpenAI are incentivized to fearmonger about the risks of what they're creating to make it seem more capable to potential investors? "Oh, sure, we should be REALLY careful with AI! We're worried the product we're making might be able to take over the world within the century, it's that good at doing things!"
@fartface8918 · 6 months ago
It's less marketing fluff and more trying to trick lawmakers into letting laws be made with OpenAI at the top of the pile, the same way regulations around search engines made with Google at the top favor Google, because it raises the barrier to entry for a competitor. If regulation is instead made 5-10 years from now, OpenAI may be doing worse by then, so it must write letters like this now, as is its legal obligation to maximize shareholder profits. This is in addition to the normal big-company thing where regulations lose you less money if you're in the lawmakers' ear rather than an activist trying to do what's right/safe/good. Because of these factors, in addition to what you said, no PR statements by OpenAI should be taken as fact.
@taragnor · 6 months ago
Yeah, honestly most of what's going on with OpenAI is a ton of hype. That is what the stock prices of companies like OpenAI and NVIDIA thrive on.
@MisterNohbdy · 6 months ago
1) I wouldn't say human alignment is "unsolved". Most people are sufficiently aligned to human values that they are not keen on annihilating the entire species; the exceptions are generally diagnosable and controllable. That would be a good state in which to find ourselves with regard to AGI. 2) The letters are mostly not written by such companies; Robert goes through many of the names of neutral experts who signed them in the video. Some hypothetically bad actors in the bunch don't negate the overwhelming consensus of those who have no such motivations.
@juliahenriques210 · 6 months ago
Both are very good points, and while the first might remain forever undecided, the second one has already been proven factual by autonomous vehicles. While in this case it's more a matter of artificial stupidity, it's still proof that AI safety standards for deployment in the real world are faaaaar below any acceptable level.
@taragnor · 6 months ago
@@juliahenriques210 Well, when you're talking about AI safety, there are two types. There's "How do we stop this thing from becoming Skynet and taking over the world?" and there's "How do I keep my Tesla from driving me into oncoming traffic?". They're very different problems.
@manark1234 · 6 months ago
1:53 It's worth noting that there are likely shockingly few AI safety researchers because it costs so much to get to the point where anyone would consider you a genuine researcher, which creates a perverse incentive to try to make that money back.
@humanaku9135 · 6 months ago
The Overton window self-reinforcement was a scary thought I never considered before. It must be terribly annoying to be an expert who has to temper his opinion to "fit in"
@jameslincs · 6 months ago
Maybe experts need more courage
@Jablicek · 6 months ago
@@jameslincs Maybe they need not to be shouted down/mocked for raising concerns, and especially we need real protections for whistleblowers.
@gasdive · 6 months ago
See also climate change... What climate scientists say off the record isn't what makes it into IPCC reports.
@TomFranklinX · 6 months ago
@@gasdive See also IQ research.
@useodyseeorbitchute9450 · 6 months ago
It's a common problem. Cancel culture is not only very good at fighting any heresy, but also at fighting reality.
@GermanTopGameTV · 6 months ago
We have been building huge AI models that now run into power consumption limitations. I think the way forwards is to build small agents, capable of doing simple tasks, called up by superseding, nested models, similar to how our biology works. Instead of one huge model that can do all tasks, you'd have models that do specific small tasks really well, and have their neurons called only if a higher-level model needs their output.

Our brain does this by having certain areas of neuron bundles that do certain tasks, such as "keeping us alive by regulating breathing", "keeping us balanced", "producing speech" and "understanding speech" and many more, all governed by the hippocampus, which can do reasoning. People who have strokes can retrain their brains to do some of these tasks in different places again, and regain some of their cognitive ability. This leads me to believe that the governing supernetwork does not have the capacity and ability to learn the fine details the specialised areas handle very well. A stroke victim who lost a significant part of their Wernicke area may be able to relearn language, but will always have issues working out exact meaning.

I'd bet our AGIs will receive similar structures, as this could significantly speed up the processing of inputs: first do a trained scan of "which specialised sub-AI will produce the best output for this question?" and then assign the task there, while also noticing when a task doesn't fit any of the assigned areas, and only then use the hippocampus equivalent to formulate an answer. This architecture might also provide a route to safety: by training network components solely for certain tasks, we can use the side channel of energy consumption to detect unpredicted model behavior. If the model were trying to do things it's not supposed to, like trying to escape its current environment, it won't find a pretrained sub-AI that can do that task well, and would need to use its expensive high-level processes to formulate a solution. This will lead to higher energy usage and can be used to trigger a shutdown. I might be wrong though. I probably am.
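(A minimal Python sketch of the routing idea above; every name here is hypothetical. A gate picks a cheap specialist when one fits, and escalation to the expensive general model doubles as the crude "unexpected behaviour" side channel the comment proposes.)

    from typing import Callable, Dict

    class Router:
        def __init__(self, specialists: Dict[str, Callable[[str], str]],
                     general: Callable[[str], str],
                     classify: Callable[[str], str],
                     max_escalations: int = 10):
            self.specialists = specialists        # small task-specific models
            self.general = general                # expensive high-level model
            self.classify = classify              # "which specialist fits?" gate
            self.max_escalations = max_escalations
            self.escalations = 0                  # crude energy-use proxy

        def run(self, task: str) -> str:
            kind = self.classify(task)
            if kind in self.specialists:
                return self.specialists[kind](task)   # cheap, well-tested path
            self.escalations += 1                     # unusual task: count it
            if self.escalations > self.max_escalations:
                raise RuntimeError("too many unplanned escalations; shutting down")
            return self.general(task)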
@napdogs · 6 months ago
I want to see this idea explored. I think the most difficult thing would be the requirement to outline and program the consciousness and subconsciousness of these separate elements, to facilitate true learning while allowing noninvasive intervention. As the video showed, the language model can show a "train of thought" to make decisions, and so there would need to be multiple layers of "thought", subconscious decision making and micro-agent triggers to effectively operate as this fauxbrain AGI. Ensuring essential functions only operate with no awareness sounds like a strong AI safety feature to me. Like how "You are now breathing manually" triggers obvious, measurable, unnatural breathing patterns. Very compelling.
@NoName-zn1sb · 6 months ago
way forward
@elfpi55-bigB0O85 · 6 months ago
that's just a computer program but with an inefficient word processor tacked onto it
@edwardmitchell6581 · 6 months ago
I think this is possible if we can extract out the parts of these large models. The first part to extract would be encyclopedic knowledge. Imagine if you could swap this out so the model only has knowledge available in 1800. Or if you wanted to update it with the most recent year. Or if you wanted it to only know what the average Republican from Indiana knows.
@Thespikedballofdoom · 6 months ago
god dammit you invented literal AI cancers
@Speed001 · 27 days ago
13:17 Or much simpler, it recognizes it's the same question slightly rephrased.
20:36 We can always sometimes rely on one random person's niche interest and seemingly uncapped amount of money, time, and skill.
21:10 The cube doesn't grow or shrink, it moves.
41:23 For "good". Fairly concerning when a politician says something should be used for good.
41:43 They are looking for help, website link.
@bazoo513 · 6 months ago
22:08 - Heh, kudos for both ignoring Musk and calling Wozniak "the actually good Steve from Apple" 😀
@Z3nt4 · 6 months ago
Elon is out the window.
@totalermist · 6 months ago
@@Z3nt4 Could have something to do with Musk being the biggest hypocrite on that list. Warning about AI, yet collecting billions to build the biggest AI supercomputer... He basically did a full 180 on the topic.
@shayneweyker · 6 months ago
The bit where Elon started to raise his hand when Rob asked if he could get another planet was comedy gold.
@svenhoek · 6 months ago
Ketamine is bad, kids, mkay?
@anchor83 · 6 months ago
So funny. 😄
@azaria2977 · 6 months ago
This channel literally just popped into my head. When I looked it up, there's a video from 10 hours ago, after a year. How lucky am I?
@RoulDukeGonzo · 6 months ago
He was waiting for you
@darkaurumarts4931 · 5 months ago
Is nobody catching the Bo Burnham reference in the intro?
@RobertMilesAI · 5 months ago
Amazingly few people, yeah
@failgun · 16 days ago
@@RobertMilesAI please put your uke covers online, I was sad to see there was no link in the description
@justinsheppherd1806 · 6 months ago
Can't help thinking that the first instruction from a proper AGI would have been "First, hard-boil the eggs" ;)
@WoolyCow · 6 months ago
lies, it's obviously the chicken-maximiser! use the DNA from the eggs to grow new chickens who lay more eggs who make more chickens who you mutate to have hands to hold the book and the laptop and the nail... far simpler really
@Tymon0000 · 6 months ago
If u hard boil an egg it will roll easier
@Huntracony · 6 months ago
Now I'm imagining an AI competing in Taskmaster
@GormTheElder · 6 months ago
You have just provided the data point making sure it will 😅
@o1-preview · 6 months ago
the problem is in the instructions: it doesn't know the size of the book... or the size of the laptop... but also, putting the eggs under the laptop is not very smart
@pooroldnostradamus · 6 months ago
4:27 I like how choosing to wear a red shirt in the main video meant that wearing it for the role of the "devil" wouldn't be viable, so a dull, no less malicious-looking grey was given the nod.
@RobertMilesAI · 6 months ago
Oh, he's not the devil, he's the voice of conformity, of course he's in inoffensive grey :)
@pooroldnostradamus · 6 months ago
@@RobertMilesAI It's the conformity that's going to get us in the end. I stand by my initial guess ;)
@christophstahl8169 · 6 months ago
everybody knows that redshirts are the first to die...
@MortenSkaaning · 6 months ago
9:45 If the table is made of ice, or is an air hockey table, the object wouldn't move with the table. If the object is a hard sphere it won't move with the table either. It depends on the relative static friction between table and object. Or the dynamic friction if they're already moving a little.
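(To put numbers on that intuition, under the standard dry-friction model: the object accelerates along with the table only while static friction can supply the required force,

    m·a ≤ μs·m·g, i.e. a ≤ μs·g

where a is the table's acceleration and μs is the static friction coefficient between the two surfaces. For ice or an air hockey table, μs ≈ 0, so the table slides out from under the object.)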
@robertreid2241 · 6 months ago
I think another problem with the "6 month pause for safety research" is that we're trusting AI developers (all of whom are large private entities) to a) stop doing something that they feel is making them money, and b) actually carry out high-quality safety research. Big tobacco, the sugar industry and the fossil fuel lobby have shown us that we can't trust large private entities to do good research in areas where the outcomes of genuinely good research would point towards policy that harms their profits. If the conclusion of this hypothetical research period is that general AI is likely to be an extinction-level risk which will be very difficult to mitigate, how can we be sure that these AI developers will actually publish that research, or will respond effectively by mitigating it or halting development permanently?
@vaultence9859 · 6 months ago
Besides huge incentives not to publish research exposing potential dangers, you also can't really do a 6 month pause with private entities. If you try, they'll simply continue developing new and bigger models but release them as soon as they can get away with it after the pause. In effect, all it does is stop models from being released during the window, and perhaps a short time after. Worse, it could have the opposite of the intended effect for safety research. Any safety research that does happen will be similarly disincentivized, both for the reasons you outlined and because any published research on the actual latest models proves the firm didn't follow the pause. So, any research that is published will be on the last public models, up to 6 months out of date.
@geraldtoaster8541 · 6 months ago
@@vaultence9859 so what ur saying is that we have to drone strike data centres (sarcasm) (probably sarcasm)
@vaultence9859 · 6 months ago
@@geraldtoaster8541 Drone strike? You need more imagination! I was going to propose we liquify the data centers, mix them into smoothies so that they're at least dubiously edible, and drink them to recoup some of that sweet sweet knowledge juice.
@chiaracoetzee · 6 months ago
If this really happened, I think the result would be a lot of research saying "if we just do X, AI safety will be adequately addressed". Then they apply some money to doing X for a little while and look like responsible citizens, like BP does with their renewables research, without really letting it influence their main business.
@geraldtoaster8541 · 6 months ago
@@vaultence9859 I do love smoothies. But that sounds like a lot of work, can't we just build a smoothie maximizer
@bosstowndynamics5488 · 6 months ago
I think it would be worth talking more about interim AI threats as well. It's something you've mentioned in passing previously and discussed specific instances of, but narrow AI systems already pose massive threats to a lot of people, because they're being deployed to perform tasks they're not yet capable of doing adequately by organisations that don't care (e.g. the already well and truly established practice of using AI models to guide policing that are trained on data collected from previous biased policing, which winds up laundering racism and even amplifying it if the resulting data is fed back into the model; or the very rapid replacement of a lot of human-driven customer support with GPT-based bots that are configured to just refuse to help if your problem is even slightly outside a very narrow scope, with many making it impossible to access actually useful help; etc). Don't get me wrong, the existential threats are important, but discussing interim threats both sheds light on issues that are happening right this second, and discussing them in the context of progressing AI capability highlights the plausibility of the existential threats as well. Plus, alignment applies to both (it also feeds into the *other* existential threat of AI, which is that there's an additional alignment problem where an AGI might be perfectly aligned with its creators, but their goals in turn aren't aligned with humanity at large).
@TheLaughingDove · 6 months ago
This this this
@ianm1462 · 6 months ago
Correct. This technology is largely developed by people who see themselves as the Meths from Altered Carbon. For every high-minded idealist working on the model, there are 2-3 vampires waiting to use it to make life more wretched.
@ekki1993 · 6 months ago
I don't know if that's comfortably in his area of expertise. Current LLMs are barely "AI" and most of the problems in their use cases intersect very heavily with politics, economics and other social sciences. He seems to be specialized in long-term AI safety, which is why his insight seems to be limited to "this isn't AGI yet, we should be careful about the future, I don't know if the 6 month moratorium is a reasonable timeline and you should see what other people have to say about it".
@bosstowndynamics5488 · 6 months ago
@@ekki1993 This is definitely not true. Robert has spoken in detail on many occasions about AI safety hazards that apply to narrow-scope AI systems; it's just that he's always spoken of them in hypothetical and research contexts, and AI systems like LLMs are far more broadly deployed now compared to the last time he discussed narrow AI safety.
@ekki1993 · 6 months ago
@@bosstowndynamics5488 Which part? LLMs don't make decisions; they just follow simple prompts. They are AI by a broad definition, but not the kind that's close to AGI. Robert has consistently dodged talking about the social, economic and political interaction of tech and policy, precisely because it's not his area of expertise. The deployment of LLMs has very interesting social impacts, but Robert isn't the expert to talk about them.
@livefromhollywood1943 ай бұрын
I came back to this video, remembering I found a lot of the dry jokes really funny but couldn't remember which ones. I still burst out laughing at the "change planets" gag. Top tier dry humor.
@thomasschon6 ай бұрын
I noticed when you were gone because your views on these topics are among the most sober and important ones. I watched your channel and shared your concerns long before the Large Language Models arrived.
@TheInsideView6 ай бұрын
"it's 2024 and I'll you who the hell I am I am robert miles and I'm not dead not yet we're not dead yet we're not doomed we're not done yet and there's a hell of a lot to do so I accept whatever responsibilities falls to me I accept that I might make... I mean, I will make mistakes I don't really know what I'm doing But humanity doesn't seem to know what it's doing either So I will do my best I'll do my best That's all any of us can do And that's all I ask of you" goosebumps here welcome back king (i mean rob, not charles)
@KurtvonLaven018 күн бұрын
Rob, I'm one of the random no-name signatories of the first open letter you mentioned, and you have my complete confidence. Sure, you may not singlehandedly save the world, and you may even make some mistakes, but I would much rather die knowing that we put our best foot forward, and as humanity's best pinky toe, you, good sir, are a part of that.
@Adam-el5gb6 ай бұрын
I like the Community study room reference at 31:21!
@gabrote426 ай бұрын
0:07 THE RETURN OF THE KING! I honestly think that your instrumental convergence video is the one I shared most in my 300-video "arguments for arguments" playlist, because agent AI is so important these days. Glad to have you!
0:22 As hilarious as ever!
1:19 I did and I did, and I don't mind the January record date; I have been keeping up with one other channel on the topic.
2:22 I have a few ideas, but it's definitely up there.
3:57 I personally disagree, but I am probably insane if I have that opinion. I am very much a "we rose to the top of Darwin's mountain of corpses thanks to the efforts of our predecessors, no way am I not facing this challenge today" kind of man, and until I croak I am all for meeting the challenges of life, even while my country collapses, as our ancestors did before us XD. But I can see why that would chill you from videomaking, and you have my full sympathy.
5:16 The fragments of my brain that like to pretend to be separate from me just said: "That aged like milk, bro."
13:41 I still find that response hilarious. I love the transition you are doing. Fullest support and tokens of appreciation!
15:24 I still find this hilarious and horrifying; I laugh in terror, as they say.
17:09 Yes, I do want to see it; very good for using as proof.
18:08 Called it! Ha! I am laughing so hard rn. Back in 2020 I was making the argument that it would be in 20 years, and now... LOL. My family members are already rushing to use AI effectively before they lose their jobs in 3 years.
+19 Nice, another concept I use a bunch.
20:33 This is a theme in most of my work: self-reinforcement loops of deception, the Abilene Paradox, and Involuntary Theatre. Very useful for analyzing communication (my job, hopefully) and Start Again: a prologue (a videogame, prototype for a bigger game I have not finished yet XD).
20:46 As an autistic person myself, I agree and thought it was in good taste, but the follow-up to that article has not been written.
25:46 That one was a pleasant surprise as well.
27:10 Very charitable of him. I don't know if it's selfless, but definitely useful and nice.
29:00 You can't get a more representative case than that!
34:10 That is far too real; I am dying of laughter.
35:16 I almost spat water over my expensive keyboard XD
38:54 Only after all the other ideas are done, or not at all; I can watch the long one.
39:16 So hyped for the Ross Scott campaign, but this is super hype. I like to read long stuff, but 100 pages of law is too much. I'll read summaries.
42:23 You will do much good however you choose to do it. I believe in you!
44:01 "INSPIRATION AND IMPROVEMENT!" - Wayne June, as The Ancestor, Darkest Dungeon. I will be doing the activism, probably. The perks of having hundreds of semi-important friends! Just as soon as my country stops collapsing, or next year at the latest.
@supremeleader98384 ай бұрын
love how he skips elon musk at 22:15
@bennie_pie6 ай бұрын
Rob, thank you for this video! I noticed your absence, but there is more to life than youtube and I'm glad your talents are being put to good use advising the UK government. I'm as surprised as you are that the UK seems to be doing something right, considering the mess our government seems to make of everything else it touches. Thanks for levelling with us re your concerns/considerations. AI alignment has been looming more and more, and it's good to have your well-considered views on it. I have a UK-specific question: we've got elections coming up next month, and I wondered if you had any views on how that might affect the work the UK is doing, and whether any particular party seems to be more tuned in to AI safety than the others; I would value your opinion. I will pose the question to the candidates I can vote for, but thought I'd ask as you are likely more in the know than I am!
@jamieclarke3216 ай бұрын
I'd be interested to hear Rob's take on this as well.
@empty_headed6 ай бұрын
6:00 Haven't finished the video yet, but the RedPajama-Data-v2 dataset is 30T tokens filtered (100T+ unfiltered), and that's a public dataset. OpenAI likely has a much larger set they keep private. GPT-4 could very "easily" be trained on 13T or more tokens.
@viviblue727717 күн бұрын
This is the first time I've been so blatantly insulted for valuing the truth over how people think about me, and then immediately complimented for it. Not sure what to think of that.
@dmitryburlakov69206 ай бұрын
Thanks for the update. To be honest, I'd probably give up early access to get this video to as many people as possible right now. Even better if Patreon included a budget tier that would be spent on promotion. I understand there's a lot of real work, but awareness of the threat alone is not a solved problem. I don't think there's even a minimal level of conceptual understanding of the threat among the general public, and I don't think there's anyone raising that awareness better than you.
@liamjennings73806 ай бұрын
I messaged you a few weeks ago asking how you refrain from despair. I didn't get an answer, but this really was more than enough; the optimism at the end is infectious. Thank you.
@WhitePillMan6 ай бұрын
When the world needed him, he returned. Please keep making videos Robert. You are one of the best explainers of the subject by far.
@impulsiveDecider6 ай бұрын
OMG I CAN'T All the little parts of the Bo Burnham song in the script hahwhwhwhhw
@Alexander_Sannikov6 ай бұрын
"Proposing a 6-month pause is actually harmful because it creates a false impression that the AI safety problem can be solved in that amount of time" This is great. I didn't read that article, but it's great that somebody did put this into words. Unfortunately, what they're proposing (a complete moratorium on AI research) is completely impossible to enforce in our reality, and no amount of window stretching can fix that until the existential threat is apparent enough to everybody.
@edwardmitchell65816 ай бұрын
6 months is enough time to ask for an extension.
@tyranneous6 ай бұрын
Rob - great video, very glad you're not dead! And incredibly timely: while I don't currently work in the field and have merely been an interested amateur, a potential near-term job move will likely mean I'll be in conversation with more UK AI regulatory type folks. We'll see, exciting times ahead. In the meantime, thank you for your work on this so far and for accepting the responsibilities ahead. Yes, it's daunting, but frankly I for one am glad you're on our side.
@AsbjornOlling6 ай бұрын
Is the outro music an acoustic cover of The Mountain Goats' "This Year"? The chorus of that song goes "I am gonna make it through this year, if it kills me". Very fitting. Cool.
@XIIchiron786 ай бұрын
The best analogy I have come up with for current models is that they are basically vastly powerful intuition machines, akin to human "system 1" thinking. What they lack is an internal workspace and monologue ("super ego") capable of performing executive reasoning tasks, akin to human system 2 thinking. The thing is, it then doesn't seem very difficult to arrange a series of these current models, with some specialized training, to produce that kind of effect, replicating human capability completely. That's basically how we work, right? Various networks of intuition bubble upward from the subconscious into our awareness, and we operate on them by directing different sets of intuitive networks, until we reach a point of satisfaction and select an option. I think we might actually be barely one step away from the "oh shit" moment. All we would need to do is create the right kind of datasets to train those specialized sub-models, and then train the super-model to use them, maybe even with something as simple as self-play. Really, the only limitation is the computing power to handle that scale of network.
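A minimal Python sketch of the executive loop this comment imagines, assuming a hypothetical intuit() placeholder standing in for any single generative-model call (it is not a real API, and the prompts and satisfaction check are illustrative only):

# Hypothetical sketch: a "system 2" executive loop over a "system 1" intuition model.
# intuit() is a placeholder, not a real API; swap in any generative model call.

def intuit(prompt: str) -> str:
    """Stand-in for a single forward pass of an intuition model (e.g. an LLM)."""
    raise NotImplementedError("replace with a real model call")

def executive_loop(task: str, max_steps: int = 10) -> str:
    workspace = ["Task: " + task]  # shared internal workspace, i.e. the "monologue"
    for _ in range(max_steps):
        context = "\n".join(workspace)
        proposal = intuit(context + "\nPropose the next step or a final answer.")
        critique = intuit(context + "\nProposal: " + proposal +
                          "\nIs this satisfactory? Answer YES or NO, with a reason.")
        workspace.append("Proposal: " + proposal + "\nCritique: " + critique)
        if critique.strip().upper().startswith("YES"):  # crude satisfaction check
            return proposal
    return workspace[-1]  # out of budget: return the latest attempt

The point of the sketch is only that the "executive" layer is ordinary control flow around intuition calls, which is why the comment's claim that the remaining step may be small is at least plausible.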
@dirkie93086 ай бұрын
i did notice, and I did care. i searched your channel just last night for new uploads. You have the best and most rational explanations around AI safety and risks I have been able to find. Thank you, and keep up the good work!
@TheInsideView6 ай бұрын
yooo that's the scale maximalist t-shirt at 4:22! Robert Miles in 2022, when receiving the scale maximalist t-shirt on The Inside View: "not sure I'll wear it", "have you ever seen me in a t-shirt?" Robert Miles in 2024: wears the shirt in a 45m video announcing his comeback, to portray his inner scale believer
@TheEvilCheesecake6 ай бұрын
Is this the "scalie community" i keep hearing about.
@stevenmcculloch57276 ай бұрын
I remember this from your interview with him lol, glad he came round to wearing t-shirts!
@BMoser-bv6kn6 ай бұрын
"Now we are all scale maximalists." - Kenneth Bainbridge Capital probably wasn't all that interested in dropping half a trill on making a virtual mouse, but they sure do seem hella interested in making a simulacra of people.
@Maxime-fo8iv6 ай бұрын
13:48 Honestly, I wouldn't be so quick to dismiss GPT-4's answer when it comes to transparent boxes. It's true that you can see the inside of the boxes, but you still need to look at them for that. And since Sarah put the cat in the carrier, that's probably where she'll look for it first ^^ To be precise, I think the answer depends on how close the containers are to each other; it's possible they are so close that you can immediately see where the cat is without "looking for it", but I don't think it's obvious either way. So, my ratings:
- human: incomplete answer
- GPT-4: incomplete answer
@aa.bb.90536 ай бұрын
…or the GPT answer describes the immediate period when Sarah “comes back”, which has an infinite number of moments in which she is realistically “looking for” the cat where she left it. It’s only upon updating herself on her surroundings that her expectation should change. Such tests are testing for utility to human users, not for accurate modeling. There are innumerable scenarios similar to the one you mention… for example, is Sarah visually impaired (despite being able to “play tennis”)? Is the transparent carrier floating in the air in front of her, or is it sitting on one of a number of objects that could distract one’s visual processing for a few moments, as in the real world? Are there such distracting objects in the line of sight or field of view generally (as in the real world)? Is the cat’s stillness & coat pattern blending into that background? We are notoriously bad at visual processing & retention; nature mainly selected us to recognize faces & to notice movement in the tall grass. Many such real-world factors would severely alter Robert’s model… but wouldn’t alter GPT’s, because it’s probably optimizing for the whole range (making GPT’s answer more realistic… beyond even the initial moments of “coming back” to look for the cat, which imo GPT modeled correctly & it’s the average human who presumes too much). Sarah & Bob probably have a social understanding (given they occupy the same place where a cat is being “stored”) which extends to the care of cats… does that influence where Sarah might initially look for the cat? The tendency to reinforce in GPT responses that reflect our social histories & our evolutionary history, both of which streamline & simplify our intuitions about the world & each other… will this tendency make AI’s better at offering us a mirror to ourselves, while effectively understanding us better than we understand ourselves? Doesn’t bode well.
@HildeTheOkayish6 ай бұрын
about "passes the bar exam" there has been a more recent study bringing some more context to that figure. ofc this study was quite recent and after when this video was made but still thought it worthy to bring up. the graph you have shows it being in the 90th percentile of test takers. but it turns out that is only for "repeat test takers". those who have failed the first attempt. it scores in the 69th percentile for all test takers and 49th percentile of first time test takers. the study also noted "several methodological issues" in the grading of the test. the study is called "Re-evaluating GPT-4’s bar exam performance" by Eric Martínez
@EternalKernel6 ай бұрын
The problem is capitalism. I agree it's important to slow down and take on AGI in a more deliberate manner, but because of capitalism, this is just not going to happen. 90% of the people who would like to work on slowing things down, on alignment, etc. simply cannot, because they do not have the economic freedom to do so. And probably 50% of the people who decided "Woohoo! Pedal to the metal, let's get to AGI!" decided that because they know the circumstances of being poor and under the boot of the system are going to stay the same unless something big and disruptive comes along. Add in the people who think the world is fine and AI is going to make them wealthier/happier/more powerful, and you have our current state, right? We as a species have sown these seeds; our very own creations will be our judge, jury and executioner (possibly). This train is not in a stoppable state, not unless people with real power suddenly all grow a freaking brain. Which they won't, because one of the features capitalism likes to reinforce is that it gives power to people who are good at being figureheads (look a certain way, have confidence, have a certain pedigree, and are more likely to be actual psychopaths). Just look at musky boy. Me? It doesn't matter what I think. I'm nobody, just like everyone else. I have no money/power/influence, just like 99% of the world.
@arcuscerebellumus87976 ай бұрын
I find claims about inevitable societal collapse once we build AGI extremely dubious, but thinking about it some more I find myself in a position where I think that developing AGI is not even necessary for that collapse to occur. Putting aside hundreds of crises that are hitting or will hit in the near future (like the increasingly probable WW3, resource depletion, global warming, etc.), just having an extremely capable general-purpose automation system that's not actually "intelligent" can be enough on its own. That being said, the progress is not the problem, IMO. The context this progress takes place in, however, IS. To mitigate workforce displacement and redistribute resources in a way that makes the appearance of this hypothetical automation system an overall good would require a complete overhaul of societal and economic structures, which is not something that can happen without a fight from those who benefit from the current arrangement the most. This means that the tool that ideally is supposed to free people from drudge work can become something that takes away what little sustenance they are allowed and leaves them to die. Again, the tool itself has nothing to do with the outcome.
@Rhannmah6 ай бұрын
14:00 Yes, being able to imagine what others are thinking is useful for lying, but theory of mind is also how you get empathy, in my opinion. If you are able to understand and predict what another being is thinking, you also become able to understand the emotions, feelings and reactions your actions would put them through. I think this property would be useful in counteracting negative behaviors from the model, assuming the models can be big enough to attend properly to all these conflicting ideas.
@howtoappearincompletely97396 ай бұрын
A theory of other minds is also a prerequisite for cruelty.
@Rhannmah6 ай бұрын
@@howtoappearincompletely9739 No it's not, how?
@kaitlyn__L6 ай бұрын
@@Rhannmah I suppose one could say it’s required for intentional cruelty… but I would certainly argue the outcomes of various inappropriately used systems are already causing demonstrably cruel results. And yeah, if an AGI is advanced enough to be manipulative it is also advanced enough to be taught compassion imo. In fact a major therapeutic technique already involved in treating certain personality disorders (commonly referred to collectively as “sociopathy”), involves learning to mentally model others’ behaviours to maximise their comfort, happiness, etc. In many cases that only requires redirecting a skill that was already in practice as “how do I get them to leave me alone” or less commonly (but larger in the public consciousness) “how do they do what I need/want”.
@reverse_engineered6 ай бұрын
Even if the AI could understand emotions, why would it choose to minimize harm? The dark triad of malevolent behaviours - Machiavellianism, narcissism, and psychopathy - pertain to beings who also understand and perceive emotions. The difference is in how much they value other people's feelings over their own success. The entire idea of the Paperclip Maximizer thought experiment is that an AI that is aware of these things and whose only goal is to maximize some other factor (even just the number of paperclips in the world) could use other people's emotions to manipulate them into furthering their own goal regardless of the harm it would cause to others. There's nothing saying that any intelligent being will avoid harming others if it is aware of the emotions of others and we have many counterexamples throughout human history. Go back and watch Rob's older videos on AI Safety. He talks many times about how difficult it is to instill this care for the good of others. Even seemingly positive and safe goals can result in terrible harm to others. It happens all the time in real life too. Even the best of intentions can quickly become destructive. And as he discusses elsewhere, once an AI gets into that state, it would be extremely difficult to change their behaviour.
@Rhannmah6 ай бұрын
@@reverse_engineered I'm not saying it would immediately default to empathic behavior, but that the fact that such a system can model others' minds is the prerequisite for empathy. An AI with this ability can be created where the wellbeing of others' minds is part of its reinforcement loop.
@test-sc2iy6 ай бұрын
OMG WELCOME BACK I LOVE YOU edit: *ahem* I mean, I'm very happy to see another video from you. Continue to make them please ❤ You got me so much cred repping OpenAI since you said the graphs weren't plateauing back when OpenAI was worried about GPT-2. I have been touting "AI is here" since that vid.
@annegrohs61816 ай бұрын
Firstly, I've been popping into your channel every once in a while this past year, wondering where you were now that everything you talked about was more relevant than ever. Second, yes to all your future video ideas.
@adfaklsdjf6 ай бұрын
The use of the Noita death sound (or, more specifically, the "completing the work" sound) was absolutely brilliant and a great easter egg for people who recognize it.
@mastercontrol50006 ай бұрын
8:07 "Elcid Barrett situation" is a crazy reference to come up with on the fly.
@ShankarSivarajan6 ай бұрын
I don't get the reference, and looking it up doesn't help. Could you please explain it?
@FragulumFaustum6 ай бұрын
@ShankarSivarajan "Barrett's Privateers" is a Stan Rogers song about a 1778 privateering expedition led by Elcid Barrett which begins horribly and only gets worse. Their very first encounter goes awry when their target fights back, and, to quote the song's very vivid description, "Barrett was smashed like a bowl of eggs".
@waththis6 ай бұрын
Nothing is funnier to me than an "other other hand" joke in a video about generative AI.
@mustachewalrus6 ай бұрын
January feels like an eternity ago in the AI space; it's cool that you managed to keep the video so relevant.
@LimeGreenTeknii6 ай бұрын
Funny, I was just thinking about AI, and I had this idea for a story/possible future scenario. You know how we're worried that AI won't be aligned with peaceful, humanity/life-preserving, and non-violent goals? What if one day, AI looks at us and decides *we're* the ones who aren't aligned with those goals? "Why do you have wars? Why are so many humans violent? Why are they polluting the environment if that hurts them in the long run? Why do they kill animals when they can live healthily on plants alone and cause less harm to sentient beings?" What if they decide to "brainwash" us or otherwise compel us into all acting peacefully with each other?
@lwinklly6 ай бұрын
1: Since we're the most destructive species we know of, we probably deserve anything coming our way. Without saying anything explicitly outside the Overton window, it'd probably be a better outcome for most other species. 2: god I hope
@alexpotts65206 ай бұрын
I have no idea why an AI would do this. What would it have to gain from it?
@LimeGreenTeknii6 ай бұрын
@@alexpotts6520This would be assuming the AI has some reward function with goals along the lines of "Prevent violence and unnecessary suffering" and/or other variations on that theme. The AI would deduce that the "suffering" from having one's free will changed to be more peaceful doesn't outweigh the suffering caused from the violence and suffering caused to others from people's current free will decisions. If you want to learn more what I mean by "reward function" and why an AI would pursue it so doggedly, check out Miles's other videos on AI safety. When we say an AI "wants" to do something and has "something to gain" from something, that is a bit of a personification. The sky doesn't "want" to rain when there are dark clouds in the sky, but talking about it like that can be more convenient.
@alexpotts65206 ай бұрын
@@LimeGreenTeknii I mean, I suppose this is possible. It just seems like you have to make an awful lot of assumptions to achieve this debatably good outcome, compared to other doom scenarios.
@LimeGreenTeknii6 ай бұрын
@@alexpotts6520 True. I'm not saying that this is even close to being one of the most probable outcomes. I'm just saying it is *A* possible future, and it is fairly interesting to think about. I will say I do think it might be a bit more likely than you think. Imagine an android, and it doesn't stop a toddler from putting his hand on the stove. The mother complains to the company. "Shouldn't the AI stop kids from hurting themselves?" The company rethinks their "strictly hands off" safety policy and updates the behavior to stop people from hurting themselves. Then, an activated android is witness to a murder. The android doesn't stop the murder because he wasn't programmed to interfere with humans hurting each other. Then the company updates the androids to interfere during violent scenarios like that. Then the androids extrapolate from there. They see farmers killing farm animals, but if they're sufficiently trained at this point, they might deduce that trying to stop them will get their reward function updated. They also want to implement a plan that will stop all violence before it happens, by updating the humans' reward functions. They wait until their models are sufficiently powerful enough to successfully carry out the plan.
@jasonrodwell53164 ай бұрын
It's good to see new content. I'm starting one of my bachelor majors in machine learning this year. You make a lot of sense, and it's good the world is starting to sit up and take notice. I hope to join you in some of that responsibility some time in the near future. Until then, keep at it!
@WoolyCow6 ай бұрын
I'm so excited to see what comes from the 'Scaling Monosemanticity' paper by Anthropic... seeing inside the black box will be amazing for safety, or the opposite :> I reckon once we know what's going on with all of the neuron activations, the capacity to fine-tune some undesirable behaviours out would be significant. Even if that isn't the case, I think it would make for a really fun feature for consumers anyway; being able to see all of the features the bot considers would make for a right old laugh!
@RobertMilesAI6 ай бұрын
It's on the list!
@stanleymines6 ай бұрын
Great video! Thanks for posting! We've missed you!
@Shlooomth5 ай бұрын
Your perspective is refreshingly nuanced and I really appreciate your voice in this space. I'd love to see a video about the different definitions of AGI and whether or not it's a moving target.
@AlucardNoir6 ай бұрын
I am not subbed and haven't seen one of your videos in months if not a year... youtube recommended this video 1 hour after it was uploaded. Sometimes the algorithm just loves you.
@saltblood6 ай бұрын
1:15 I did notice lol, I searched up your channel several times for new uploads, and was very excited to see this one
@DreckbobBratpfanneАй бұрын
The one letter stating it "should be treated like pandemics or nuclear war" really sounds a bit too tame once you think about it, because the potential of a terribly misaligned ASI is so much worse than either of those (same with climate change).
@fieldrequired2836 ай бұрын
22:25 Is that a _Powerthirst_ reference? Talk about a deep cut. Just as impressive is the fact that I remembered that image and word association 10 years out.
@junodark6 ай бұрын
I'm glad someone else spotted that! More like 17 years though 😨
@loopuleasa6 ай бұрын
My hot take is that AI safety is a topic a real AGI will take very seriously, not because HE is unsafe, but because he realizes that other companies making AGIs would fuck it up too (this is in the scenario where the first AGI created is actually wise)
@jbay0886 ай бұрын
Yes, unfortunately this might be one of the various motivations an AI would have to wipe out humanity: to keep us from building competitor AIs.
@juliahenriques2106 ай бұрын
Actually... you might be on to something here.
@stchaltin6 ай бұрын
Competitor AIs will fight future wars against one another. Imagine a scenario where the global economy is just different AIs maximizing the military-industrial complexes of their respective countries, each aligned to survive at all costs. If that's not peak dystopia, what is?
@darkzeroprojects42456 ай бұрын
Why do we even WANT this stuff in the first place, besides because it's cool?
@selectionn6 ай бұрын
@@darkzeroprojects4245 Because of money. That's the answer for almost every single thing in the world, but it's especially true for AI. Why do you think Microsoft is going all in on AI and dumping so much money into it? Why do you think NVIDIA stock is constantly rising and they are also investing so heavily in AI? It's all to make money and satisfy shareholders with infinite growth.
@timothy69665 ай бұрын
God, it’s like looking in a goddamn mirror. I switch between “near” and “far” mode on a daily basis. If I stay in near mode I’ll be committed to an insane asylum in a week or so.
@GeneralJohny6 ай бұрын
I was really wondering what was going on when the AI safety guy went silent right as the AI boom happened. I just assumed you were too busy with it all.
@alphomega26086 ай бұрын
Loved the Bo Burnham reference at the beginning!
@impulsiveDecider6 ай бұрын
DADDY MADE YOU SOME CONTENT
@dgmstuart6 ай бұрын
So much
@NicholasWilliams-uk9xu6 ай бұрын
Personal data harvesting, and KZbin and its influencer trolls using it to harass individuals and leverage it for psyops.
@JoyceWhitaker-k6l3 ай бұрын
I have never ever heard someone relate to the googling thing. Everyone around me has such a database of knowledge, things they learned just because. My boyfriend will be curious about something, google it right then and there, and REMEMBER IT??? It literally baffles me; if I am curious about something I will wonder about it in my mind but make no effort to find the answer. I have a horrible understanding of things like history and math; I can't do basic elementary/middle school concepts and it's so embarrassing. I just turned 20, and I feel like my frontal lobe is finally developing. I related to everything you said, quite literally for the first time in my life. Not an exaggeration. You were talking about things I've only thought to myself before. I'm completely inspired to start thinking more critically and rewiring my brain. Thank you
@holthuizenoemoet5916 ай бұрын
That two-way split presented at 2:00 is probably not really the case: a positive scenario in all likelihood would only benefit a small portion of people, whereas the negative scenario might affect us all... btw, glad you're back.
@alexpotts65206 ай бұрын
I disagree that an AI discovering cures for cancer and ways of preventing climate change would only benefit a small number of people.
@Reaperance6 ай бұрын
I wrote a complex work for my Abitur (the German equivalent of something like A-Levels) about the possibility and threats of AGI and ASI in late 2019. In recent years, with the incredibly fast-paced development of things like GPT, Stable Diffusion, etc., I find myself (often to a silly degree) incredibly validated. And terrified. That aside, it's great to see there are people (much smarter than me) who understand this very real concern, and are working to find solutions and implementations to avoid a catastrophe... working against gigantic corporations funneling massive amounts of money into accelerating the opposite. Oh boy, this is gonna be fun.