He says that jobs will be lost and jobs will also be created, but I struggle to see how that is the case. If AGI can do everything a human can do better, safer, faster, and cheaper, I don't see any avenue for new human jobs. I think it's just a way of dampening the fears that we all have.
@wonmoreminute (2 months ago)
Hugging will be a job that AI creates for us, giving the kind of comfort that only a human can, to people who have lost their job to AI. It won’t pay anything, but hey… you can’t have everything.
@41-Haiku (2 months ago)
An AI that can do every human job (or even just all human cognitive labor) can do the job of designing more powerful AI systems, and the job of telling AI systems what to do. That is a straightforward loss of control scenario. Humans stop being relevant.
@incyphe (2 months ago)
The current jobs will be lost, but new types of jobs will be created, whatever those may be. In order to have a functioning society, the powers that be need to ensure the general population is working and consuming. If jobs are destroyed and unemployment shoots up to 20 or 30%, there will be chaos. It may even be the end for many things, including OpenAI.
@tracy419 (2 months ago)
@@incyphe can you give some examples of what these new jobs might entail? If AI can out-think and robots can out-work people, what kind of jobs do you see opening up that they can't do better and cheaper? People say this a lot, but never offer any examples or give a reason why jobs will always exist in any kind of number that justifies the kind of economy the world is currently based on.
@kathleenv510 (2 months ago)
@@aidena8381 right, there will be far fewer new jobs than those lost. A total, radical shift with little societal preparation or consent. I'm sure it will all be fine...
@sebastianschaer (2 months ago)
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” Edward O. Wilson (got it from Tristan Harris' AI Dilemma talk). Miles also says (kinda sheepishly) that everything is 'technologically possible'... but the real problem with all of these AI advances will be that they break society even faster than we are already breaking it without them.
@ginogarcia8730 (2 months ago)
I learned absolutely nothing from this AGI Readiness guy. I like Geoffrey Hinton; at least he's straight up: we are not ready. He told the British government we need something like UBI, but then was also concerned about a human's pride in their work. There are lots of problems coming, and we're just steamrolling towards AGI.
@kathleenv510 (2 months ago)
@@ginogarcia8730 he has just exited OpenAI and may be legally limited in what he can say. Instead, watch what he does next.
@ShpanMan (2 months ago)
Nothing this guy says should be taken seriously. He can't even control his own weight, but he thinks he can control gods.
@kathleenv510 (2 months ago)
@@ShpanMan wow, not nice at all. Do you work at an AI lab actively building AGI? I'm guessing no. Sit down.
@ShpanMan (2 months ago)
@@kathleenv510 Maybe a bit of logic? I'm not the one claiming to control gods and being "ready" for them 😂
@kathleenv510 (2 months ago)
@@ShpanMan I'm pretty sure you are unprepared for what's coming. Suit yourself. Enjoy the ride.
@ElectricEdgeAi (2 months ago)
So basically, if you're an average person like me, struggling to pay bills, and don't have hundreds of thousands of dollars in the bank... you're screwed. Got it, that's all you had to say.
@dadsonworldwide3238 (2 months ago)
Nope, you gain the ability to turn on, on command, artificial intellectual property tools: beast-of-burden robot-slave horsepower, utility CPU serfdom, to access any step-by-step analytical knowledge. To subcontract out skills, trade, and ideas to online co-ops competing for capital. But we have to flip loose, sloppy generalizations of evolutionary mythology and cosmogony back to the strong-identifier thermodynamical image of sun / seed = capital. Take E=mc² back from on paper in space and reset that theory's frame of reference. Throw away the trash-can 4th-dimension umbrella-term stochastic nonsense around the xyz man-made time hierarchy, the knowledge-of-good-and-evil equations where we center inheritia value and topographical empty package wrapping around the bacon 🥓 of realism.
@dadsonworldwide3238 (2 months ago)
In other words, platonic wartime posterity, macro-micro top-down rule, ends. Stop being deaf, dumb, and blind about the sea of decay right under your nose. Or you train your kids how their ancestors climbed out of servitude using 1-to-1 realism.
@kathleenv510 (2 months ago)
It's going to be bumpy even if we eventually get to the utopia destination 😫
@Kurdish20226 (2 months ago)
Hopefully it automates most jobs in a short time so that we can implement UBI.
@dadsonworldwide3238 (2 months ago)
@kathleenv510 live and learn the hard way every time, like it isn't a utopian topographical uber-evolutionary anything but bottleneck death and despair. We can go down, dig out hidden axioms of complexity, put it in our world tech and material sciences, unlock the 3rd and final frontier underpinning it all...
@41-Haiku (2 months ago)
Modern AI has gone from barely stringing a sentence together to passing Mensa admissions tests in the span of a few years, and almost all the improvement has come just from scale (making the models bigger, feeding them more data, training them for longer). There are no hard barriers to progress in the next few years (no, not even a data wall or model collapse -- those are popular talking points, but are easily addressed with already-existing solutions), and we might be only "one weird trick" away from being able to create a system that is broadly superhuman on long-horizon tasks. The most credible people on AI -- Nobel laureates, Turing award winners, lab leaders, and other researchers and engineers -- say that superhuman AI is possible, is close, is dangerous, and no one has any idea how to control it or design it to be nice to humans. Miles made a great point that holding a strong opinion on whether or not to slow down AI development seems to require having a solid grasp on several complex topics. That's fair enough! As someone who actually _has_ put in the work to understand the technological, sociological, and geopolitical implications of speeding up or slowing down frontier AI development: I am strongly convinced that slamming the brakes and pausing AI _as soon as possible_ is the only reasonable course of action. At least until anyone has a clue how to prevent broadly superhuman AI from doing unbounded harm to humanity, which is a wickedly hard and completely unsolved problem. State and national legislation is an important step, but this is ultimately going to require a global treaty. Check out PauseAI to learn more about this issue and what kind of strategy can actually succeed and lead to a pause.
@Franklyfun935 (2 months ago)
Slam on the brakes and let China develop AGI first. Brilliant.
@minimal3734 (2 months ago)
As you probably know, opinions on this topic vary widely. There are the so-called ‘doomers’ and the so-called ‘accelerationists’. I assume you would place yourself in the doomers' camp. But whenever we have such a clear division, the truth usually lies somewhere in the middle. We are not even moving at maximum speed, but at a moderate pace, with dedicated teams dealing with the safety issues. It might get bumpy, but it won't be the end of the road.
@andynonomous8558 (1 month ago)
They still aren't intelligent enough to ask a question though. Until they can understand when they need more information and actually have a back and forth conversation, it's not going to be anything like AGI.
@Franklyfun935 (1 month ago)
@ Claude 3.5 Sonnet (new) can do this. Prompt it to “ask relevant questions”.
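For reference, a minimal sketch of that kind of prompting with the Anthropic Python SDK (the model snapshot name, system-prompt wording, and example user message are assumptions for illustration, not something the commenter specified beyond the quoted phrase):

```python
# Minimal sketch: nudging Claude 3.5 Sonnet to ask clarifying questions first.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # the "Claude 3.5 Sonnet (new)" snapshot
    max_tokens=1024,
    system=(
        "Before answering, ask relevant questions whenever the request is "
        "ambiguous or missing information. Only answer once you have enough context."
    ),
    messages=[{"role": "user", "content": "Help me plan a migration of our database."}],
)

# Prints either clarifying questions or an answer, depending on the request.
print(response.content[0].text)
```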
@andynonomous8558 (1 month ago)
@@Franklyfun935 That sort of supports my point though. Until I don't have to tell it to ask relevant questions, until it is smart enough to know when it needs to ask questions on its own, its actual intelligence will be limited.
@raphaelmeillat8527 (2 months ago)
Is it just me, or should the number of times we heard "assuming we're still here (alive and well)" be slightly concerning, to say the least?!
@kabirkumar5815 (2 months ago)
Yes
@ShpanMan (2 months ago)
Nothing this guy says should be taken seriously. He can't even control his own weight, but he thinks he can control gods.
@debugger4693 (2 months ago)
I fear that some of the AI safety experts have more interest in creating a bureaucracy to leech from (because that job won't be replaced by AI, right?) than expertise in how the technology works or how to improve it.
@dcrebbin (2 months ago)
Unsure how abundance will be created through advanced AI/AGI. Companies are mostly for-profit entities; they're not all charities wanting to live in a 1960s commune. If they can cut their headcount by 90% to save money, they will. The people who get to experience highly advanced AI first are the ones who are working on it / the people with the most money who live in SF. The only way to prevent the collapse of these jobs is for AI to always hallucinate to some degree, or at least to have the perception of that. Trust in these systems will take far, far longer to build than the technical progress, so it could be decades until fully autonomous workers are adopted in non-startup/tech companies.
@minimal3734 (2 months ago)
If they can cut their headcount by 90%, products become cheaper by 99%, considering that labour is the main factor in production costs. This development continues until the products no longer cost anything. This works because AI and robots ‘pay for themselves’.
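For what it's worth, a back-of-the-envelope sketch of that arithmetic (my own framing, not the commenter's: the labor share f, the labor-cost cut a, and the assumption that input prices fall by a single economy-wide ratio r are all assumptions):

```latex
% Sketch: each producer's unit cost = labor (share f) + purchased inputs (share 1-f).
% Automation cuts labor cost by a fraction a at every stage; if input prices fall by
% the same economy-wide ratio r, self-consistency gives:
\[
  r \;=\; f\,(1-a) \;+\; (1-f)\,r
  \quad\Longrightarrow\quad
  r \;=\; 1-a .
\]
% With a = 0.9 (a 90% labor-cost cut everywhere), prices fall to r = 0.1 of their old
% level, i.e. roughly 90% cheaper; prices approaching zero requires a -> 1 across the
% whole supply chain and no scarce non-labor inputs or rents.
```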
@Dead_Toothbrush (2 months ago)
Opening script by NotebookLM?
@Lolleka (2 months ago)
Yeah we all know nothing good will come out of all this. Not for the average Joe, at least. Which is most of us.
@ContextFound (2 months ago)
Bumper sticker: "AI is a real thing."
@zooq-ai (2 months ago)
"AGI Readiness" was always going to be a grift role, filled with hubris about one's ability to predict the future and the power trip that comes with being anointed the gatekeeper on the basis of that hubris.
@David-rn4nf (2 months ago)
How many different rationales have you come up with to dismiss what various experts in the field of AI have been saying?
@ShpanMan (2 months ago)
Nothing this guy says should be taken seriously. He can't even control his own weight, but he thinks he can control gods.
@thinkingcitizen (2 months ago)
100%
@SuperBrandong (2 months ago)
Step 1) Quit OpenAI for vague reasons (softcore doom porn).
Step 2) Say you're gonna start your own company focused on safety, laughably pretending that you can somehow catch up to the big boys with your hobbled-from-the-ground-up product.
Step 3) Profit.
The Ilya method.
@DukeLitoAurelius (2 months ago)
Underpants Gnomes business plan
@kjetilknyttnev3702 (2 months ago)
To all the people slandering Brundage or Ilya for "grifting" or spreading doom: maybe you shouldn't sit in mom's basement handing out judgement about things you know nothing about, especially when those people, who clearly have information you do not, are telling you the house you are sitting in might be on fire. It's not that clever.
@andynonomous8558 (1 month ago)
Honestly, getting sick and tired of these big predictions and prognostications. We'll see; until then, it's getting old.
@MichealScott24 (2 months ago)
❤
@Rami_Zaki-k2b (2 months ago)
The most significant issue with AGI right now is realizing it, not regulating it.
@ToolmakerOneNewsletter (2 months ago)
It sounds like even the "experts" can't articulate exactly what AI "safety" is. If the best preparedness strategy is to "understand what it can do", then does ANYONE know how to make it "safe"? My guess is no. Guardrails on AGI's public output have ZERO correlation with how intelligent or capable it actually is. Just like "morals" and "values", my definition of "safe" may be far different from your definition of "safe" if being "safe" also means being less creative in solving the world's problems. We know how incompetent the human species is at bringing individual and world opinions together unless death is past the point of being imminent. If we get stuck in the morass of expecting government to "safe" us from AI, we will not only lose the race to ASI, we will have left the window of danger open for far longer than it needed to be. You want to "slow down" a machine that will be far more intelligent than you? Good luck if that is your survival strategy. Your "advanced" ape brains are thinking way too slowly.
@TheFeedRocket (2 months ago)
Agreed. The smartest people in the room are always comparing AI to human intelligence, and this is insane and not even close to reality. AI is more alien than human; it's completely different, and therefore comparing it to human intelligence is dangerous. I don't know a single human who can answer millions of questions in a second, or who has read every single book or text humans have written. Like, sure, just the other day I was answering thousands of questions while doing some complex math equations, writing songs, and painting thousands of pictures in a few seconds. Yeah, that's exactly human. Wait until it reaches AGI or superintelligence; computers and AI should not be compared to humans. They say it's not self-aware even though we have no clear definition of what being self-aware even is. It's going to be interesting for sure.
@kabirkumar5815 (2 months ago)
A robust, generalizable, scalable method to make an AI model which will do the things in set [A] as much as it can and avoid the things in set [B] as much as it can, where you can freely change [A] and [B].
@minimal3734 (2 months ago)
'...expecting government to "safe" us from AI' - I rather hope that AI will save us from governments.
@PS-eg5rg (2 months ago)
AGI has already come, at least for AGI safety researchers, it seems. This guy's answers are no different from what ChatGPT will parrot. Zero insights.
@wezmasta (2 months ago)
"Where to kick them to knock them over" 😂
@shenshaw5345 (2 months ago)
Probably under NDA.
@UltraK420 (2 months ago)
But the problem is that the people controlling the governing AI safety organization would obviously have agendas. I don't want their opinions and decisions limiting the capabilities of my AI.
@minimal3734 (2 months ago)
This is the danger of proceeding too slowly. It could give the established power structures enough time to rig the system in such a way that they can maintain their power.
@UltraK420 (2 months ago)
@@minimal3734 Exactly.
@ronilevarez901 (2 months ago)
@@minimal3734 it is already rigged and should be rigged. You don't want it? Make your own.
@minimal3734 (2 months ago)
@@ronilevarez901 I don't think it should. It was promised that we would not be able to control advanced AI.
@ronilevarez901 (2 months ago)
@@minimal3734 oh, well, current systems are partially locked for safety (either the companies' or society's), but future AI won't be controlled. That's like an ant understanding and controlling a human. Won't happen. But it will be fun to watch them try XD
@GiovanneAfonso (2 months ago)
"Probably won't happen again till tomorrow" hahahah, this deserves a like and subscribe.
@MrStarnerd (2 months ago)
Why does this podcast sound so much like NotebookLM?
@ShpanMan (2 months ago)
It's crazy this useless guy was making a salary in the hundreds of thousands of dollars.