In 1997, I was working at a university. A faculty member gave me an assignment: write a program that can negotiate as well as a human. "The test subjects shouldn't be able to tell if it's a machine or a human." Apparently, she had never heard of the Turing Test. When we told her how difficult the task was, she confidently told us, "I'll give you two more weeks." The point? There are far too many people with advanced degrees but no common sense making predictions about something never seen before.
@mikemondano3624 · 5 months ago
One bad grade shouldn't breed lasting resentment.
@darelvanderhoof6176 · 5 months ago
We call them "PhD Stupid". It afflicts about half of them. Seriously.
@2ndfloorsongs · 5 months ago
@darelvanderhoof6176 And the other half humorously.
@jaredf6205 · 5 months ago
It's just that I can't imagine why it wouldn't happen. There's just no way to get people to stop developing this technology. Even if you tried to stop it, governments would still work on it, and people in their basements would still work on it.
@ogungou9 · 5 months ago
@pirobot668beta: There is no such thing as common sense. She didn't lack common sense, that was just stupidity. She was an idiot savant ... I don't know ...
@lokop-bq3ov · 5 months ago
Artificial Intelligence is nothing compared to Natural Stupidity.
@GnosticAtheist · 5 months ago
lol - true that. While I am certain we will get there, I hope we can avoid creating AGI that has our natural capacity for stupidity.
@Ann-op5kj · 5 months ago
It's the same thing. Where is AI generated from?
@generichuman_ · 5 months ago
so edgy...
@acidjumps · 5 months ago
I use both about equally at work.
@turkeytrac1 · 5 months ago
That's t-shirt worthy.
@framwork1 · 5 months ago
Do you all remember how, before the internet, people thought the cause of stupidity was lack of access to information? Yeah. It wasn't that.
@TshegoEagles · 5 months ago
Knowledge is power!!😂😂😂
@SRMoore1178 · 5 months ago
"Think about how dumb the average person is and then realize that half of them are dumber than that." - George Carlin. AI will have no problem outsmarting the average person.
@deep.space.12 · 5 months ago
@SRMoore1178 More like the median, but sounds about right.
@389293912 · 5 months ago
LOL!!! Great observation.
@spacecowboy511 · 5 months ago
Yeah, but the internet is an excellent way to shepherd the sheep.
@calmhorizons · 5 months ago
Selling shovels has always been the best way to make money in a gold rush.
@ishaanrawat9846 · 4 months ago
That's what Nvidia has done.
@Derek_Garnham · 3 months ago
Apparently, there are longer-term profits available to those who make trousers from tents in a gold rush.
@luck484 · 2 months ago
Seems correct. The reason selling equipment to do a job is a better business model, I believe, has to do with risk. I believe risk is unknown and unknowable, despite what any person or population believes and can "demonstrate." With human decision makers, deception, including self-deception, is part of the formula for making a great fortune. Put another way, people selling shovels are not engaging in deception, and approximately one in a million gold-rush shovel buyers makes a fortune.
@pablovirus · 5 months ago
I love how Sabine is deadpan serious throughout most videos and yet she can still make one laugh with unexpected jokes.
@NamelessArchiver · 5 months ago
In all seriousness, I want to know why I have gone to the kitchen. Better yet... why I don't remember that the fridge was empty.
@hvanmegen · 5 months ago
I love this sane German attitude of hers.. the fact that she takes the time to read an essay like this to call him on his bullshit (especially with the conflict of interest) brings me so much hope for the future. We need more people like her.
@DanielMasmanian · 5 months ago
Yes, a German sense of humour is no laughing matter.
@rohitnirmal1024 · 5 months ago
@DanielMasmanian I had a German professor. Boy, did he have a sense of humor. I have not laughed since I met him.
@deBRANNETreu · 5 months ago
@hvanmegen She's the best!
@michaelbuckers · 5 months ago
There's another issue, with language models anyway. The learning database already includes virtually 100% of all text written by humans, including the internet. But now the internet is flooded with AI-generated text, so you can't use the internet anymore, because that would be the AI version of the Habsburg royal lineage.
@michaelnurse9089 · 5 months ago
"The learning database already includes virtually 100% of all text written by humans" - no, before starting training they run all the text through AI inference with the previous model. This improves quality by a significant percentage. In reality, there is always going to be another layer of AI between the current one being trained and the data.
@michaelbuckers · 5 months ago
@michaelnurse9089 It improves metrics, not quality. Sure enough, when AI is predicting its own text, the perplexity will be lower than when it predicts human text. And this is especially a huge issue for small models fine-tuned on ChatGPT. People are already sick and tired of unprompted "as a language model" and similar garbage in their anime character simulator chatbox, and it's only going to get worse when next-gen ChatGPT is fine-tuned on last-gen ChatGPT.
@bbgun061 · 5 months ago
That doesn't make sense. Garbage in, garbage out. Current AI models produce garbage a lot of the time. If you use that to train another AI model, it's going to produce more garbage.
@tannerroberts4140 · 5 months ago
I think it's good to remember that, in terms of societal contributions, the quality of human activity in general is garbage in. But society got built. We waste our time, our money, our effort; get pointlessly hooked on rage bait, romcoms, addictions, etc. One might say we're mostly enjoying life, but in terms of societal contribution, it's pretty much trash. An honest look at even the leaders in every field of study shows that each leader is either somebody with one good idea that attracted a lot of positive attention, or an exemplary personality that attracts a lot of collective intelligence.
@michaelbuckers · 5 months ago
@tannerroberts4140 Language models replicate training data. Between replicating humans and replicating itself, it's a very easy pick.
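The "Habsburg royal lineage" worry in this subthread has a name in the machine-learning literature: model collapse. The mechanism can be sketched in a few lines: each generation trains only on a finite sample of the previous generation's output, so rare outputs drop out of the corpus and can never come back. A minimal toy simulation (the 26-token vocabulary, 40-sample corpus, and seed are arbitrary illustration choices, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(42)

# 26 "tokens" with equal probability: a stand-in for human-written text.
vocab = 26
probs = np.full(vocab, 1.0 / vocab)

support = [int((probs > 0).sum())]
for generation in range(300):
    corpus = rng.choice(vocab, size=40, p=probs)   # sample a finite "training set"
    counts = np.bincount(corpus, minlength=vocab)
    probs = counts / counts.sum()                  # refit the "model" on its own output
    support.append(int((probs > 0).sum()))

# A token absent from one generation's corpus has probability zero forever after,
# so the model's diversity can only shrink.
print("tokens still in use:", support[0], "->", support[-1])
```

With these numbers the vocabulary in use shrinks generation after generation; real text models degrade the same way, losing the tails of the distribution first.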
@jeremiahlowe3268 · 5 months ago
You read a 165-page essay even though you knew its contents would be dubious at best. Sabine is heroic.
@Mikaci_the_Grand_Duke · 5 months ago
Sabine for AI in 2025!
@mikemondano3624 · 5 months ago
I hope your implication is wrong and people don't avoid reading things they don't agree with or already think they know. That is the "echo chamber" magnified.
@justaskin8523 · 5 months ago
@mikemondano3624 Oh, they already avoid reading things they don't agree with. Had it happen to me 6 times this week, and there's still another workday left!
@mikebibler6556 · 5 months ago
This is an under-appreciated comment.
@user-cw3nb8rc9e · 5 months ago
Old woman. Has no clue about the things she wants to comment on.
@RigelOrionBeta · 5 months ago
In this post-truth era, what people are searching for isn't truth, but comfort. They want someone to tell them what the answer is, regardless of whether the answer is true. There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier just to point at an algorithm and listen to it. That way, no one is responsible when it's wrong - it's the algorithm's fault. AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom isn't true anymore, because these language models are confident in their answers. Confident does not mean correct.
@modelenginerding6996 · 5 months ago
A major accuracy problem with AI is that not only does it train on information from the internet, it is also training on itself, creating a vicious feedback loop. I had a location glitch in an area with poor cell reception saying I had visited a vape shop. I got no-smoking ads from my state for two years! My social credit score has been marred 😂.
@thumpthumper9856 · 5 months ago
With the advancements in digital twins and replicators, nuanced synthetic data is becoming better and better. The garbage-in-garbage-out narrative becomes less and less salient. Why worry about finding new data when fake data is just as good? At least for tasks involving computer vision and movement, to be fair.
@danlightened · 5 months ago
We're in the post-truth era? 🤔😕
@swampsprite9 · 5 months ago
I get frustrated with ChatGPT because it doesn't respond like a real person would. In my experience, anyway. It will never have sentience or consciousness, so it can never really understand how to respond like a person. It always feels robotic to me. Of course, that could just be because I know it's artificial.
@Beremor · 5 months ago
@swampsprite9 I've had the same experience. Once I asked some questions that require some interpretation or an understanding of the subject matter beyond the wording of the question, it completely broke down and gave milquetoast, superficial, half-baked answers. Large language models are incapable of expressing the limits of their capabilities. They're unable to adequately express how confident they are in the statements they're making. Ultimately, their answers are about as useful as page one of a well-worded Google search, and unfortunately I already know how to word Google searches well. ChatGPT has been an utter waste of my time, and so has every tutorial about how to "properly word prompts."
@hmmmblyat · 5 months ago
All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while humans only need a cheese sandwich, I believe we win this round.
@b0nes95 · 5 months ago
I'm always amazed by our energy efficiency as well.
@nickv8334 · 5 months ago
Well, agriculture and food production/disposal is responsible for about 18% of the world's greenhouse emissions (excluding transport), so I think the jury is still out on who wins this round......
@TheManinBlack9054 · 5 months ago
Technology improves. Just think of how big and inefficient computers used to be and how small and efficient they are now.
@jozefwoo8079 · 5 months ago
That's only to train the model. Afterwards it becomes cheaper than humans.
@draftymamchak · 5 months ago
Our efficiency doesn't matter; the creator is superior to the creation, thus no matter what AI does, it'll be because we created it. Sure, it'll also be responsible for what it does, but for now I'm worried about generative AI being too good and being used to fake evidence etc.
@msromike123 · 5 months ago
If I will be able to ask Google Home why I went to the kitchen, I am on board!
@sebastianeckert1947 · 5 months ago
You can ask today! Answer quality may vary.
@ThatOpalGuy · 5 months ago
This is a real problem for many of us.
@HardcoreHokage · 5 months ago
You went into the kitchen to make a samich.
@HardcoreHokage · 5 months ago
Make me one too.
@lilburntcrust · 5 months ago
Skibidi
@vhyjbdfyhvjybv9614 · 5 months ago
I like to compare this to game development. Imagine someone saying in 2002 that, because we had managed to double the number of polygons we can render every 2 years, photorealistic games were 10 years away. 23 years later, it turns out that making photorealistic games is a very difficult problem that requires lots of sub-problems to be solved, some easy, some super hard. E.g. today we can render lots of polygons and calculate realistic lighting, but destructible environments are not solved. Realistic real-time water simulation is far away. And we know that rendering lots of polygons is not enough: animations and shadows, especially from large objects, are hard problems.
@tckgkljgfl7958 · 3 months ago
Feels like a flawed example. We basically have 'pretty much' photorealistic capabilities. Compare the new Unreal Engine to, I don't know, any SNES title.
@vhyjbdfyhvjybv9614 · 3 months ago
@tckgkljgfl7958 I'm saying that if we extrapolated the trend in video game graphics from, say, 1990-1998 data (SNES to Quake 2), it would come out that in 2025 you can't distinguish a video game from reality. This is not the case today; we are very far from it.
@zigcorvetti · 5 months ago
Never underestimate the capability and resourcefulness of corporate greed - especially when it's a collective effort.
@ericrawson2909 · 5 months ago
Exactly what I was thinking. And not just corporations. Politicians, and in fact most people. They have shown that they will deny truth when it is pointed out to them by a well-qualified person, if it conflicts with their own interests. That could be profit, power, or simply virtue signalling to fit in with the majority. If they ignore, cancel and smear well-respected experts in a field, why would they act on the advice of an AI, even if it was supremely intelligent and God-like in its desire to help humanity? AI will not save the world. Like all other technology, it can be used for good or evil purposes. Probably the latter more often than not.
@domenicorutigliano9717 · 5 months ago
Everyone is underestimating.
@ericrawson2909 · 5 months ago
I am getting sick and tired of my comments getting deleted. I did not use any "bad" words. I guess my amplification of the criticism in the original post here to other groups was too close to home for the vested interest groups. I feel very angry, and YT, making your users angry is not a good business strategy.
@dascreeb5205 · 5 months ago
?
@goldminer754 · 5 months ago
This project of AGI would need hundreds of billions, or rather trillions, of dollars, plus cooperation with other companies, plus major support from a powerful government, and it won't bring any profits for many, many years. And it is not even guaranteed that building this AGI is feasible, making it an extremely risky investment. Fortunately, corporate greed almost entirely revolves around short-term profits, so I am pretty certain that no such giga-project will be started any time soon, especially considering how much energy this needs and the tiny problem of climate change still having to be meaningfully addressed.
@Stadtpark90 · 5 months ago
Exponential curves usually stop being exponential pretty fast. The surprising success of Moore's law makes IT people think that's normal, which it isn't.
@michaelnurse9089 · 5 months ago
Everyone knows this. The question is whether the curve dies out before AI intelligence exceeds our intelligence or not. If it is the latter, there will be serious problems. I suspect the former.
@davidradtke160 · 5 months ago
Most exponential curves are actually S-curves.
@tabletalk33 · 5 months ago
Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. It is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.
@jozefcyran2589 · 5 months ago
So what? A 50-year run can improve the relative power and way of being by orders of magnitude, and that's usually enough to be excited about. AGI can become incredibly fast incredibly quickly.
@juliam6442 · 3 months ago
In this case, AI can create more AI and robots can create more robots. I don't think we can necessarily generalize from the past on this one.
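The S-curve point in this subthread is easy to verify numerically. A logistic curve f(t) = L / (1 + e^(-k(t - t0))) is nearly indistinguishable from the exponential L·e^(k(t - t0)) well before the inflection point t0, so an exponential fit to early data looks perfect right up until it overshoots by orders of magnitude. A sketch with made-up parameters (L, k, t0 are arbitrary choices for illustration):

```python
import math

L, k, t0 = 100.0, 1.0, 10.0           # assumed ceiling, growth rate, inflection point

def logistic(t):
    # The "real" trend: an S-curve that saturates at L.
    return L / (1.0 + math.exp(-k * (t - t0)))

def naive_exponential(t):
    # What you get by extrapolating the early, pre-inflection data forever.
    return L * math.exp(k * (t - t0))

for t in (2, 5, 10, 15, 20):
    print(f"t={t:2d}  s-curve={logistic(t):8.2f}   extrapolation={naive_exponential(t):14.2f}")
```

Before the inflection point the two columns agree to within about a percent; after it, the extrapolation is off by factors of thousands, which is the sense in which "I can't see no end" is exactly what the first half of an S-curve looks like.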
@davidbonn8740 · 5 months ago
I think there are a couple of problems here that you don't point out. The biggest one is that we don't have a rigorous definition of what the end result is. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction, and we can expect people to do exactly that. Another is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up. There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have and that people who should know better are all too willing to handwave away. One example: I can give examples of complex behavior in the animal world (honeybee dances are a good one) that it would be very hard to replicate using neural networks all by themselves. What that other piece is remains unspecified.
@petrkinkal1509 · 5 months ago
@robertcopes814 Well, it learns what the most likely next word in a sentence is. :)
@timokreuzer381 · 5 months ago
Humans are extremely inefficient learners. You have to shove petabytes of video, audio and sensory data into them for years before they show even the slightest signs of intelligence.
@Zeroisoneandeipi · 5 months ago
@robertcopes814 I agree. I asked ChatGPT-4o to create a maze with labels using HTML and JavaScript. It could do this fine. Then I took a screenshot of the maze and asked it to solve the maze, and it just "walked" from A1 to F6 in a diagonal line through all the walls. I asked again to do it without walking through walls; it changed the path a bit, but still walked through walls. So it does not understand what a maze is, but can create code to generate a maze because it was trained on this code from somewhere on the web.
@asdfqwerty14587 · 5 months ago
I would say by far the #1 problem with the current models is that they aren't really designed to "do" anything. No matter how advanced they get (without completely redesigning them from the ground up), their only goal is to mimic the training set. They have no concept of what it means to be better at what they're doing beyond comparing it to what people input as training data, which makes them incapable of learning anything on their own (because anything they try to learn must ultimately be compared to what a human is doing, so if there are no humans in the equation, there is nothing to compare against and they can't do anything beyond guessing completely randomly). I think the LLMs are on completely the wrong track if they're aiming for any kind of general intelligence. I think that if they want an actually intelligent AI, the AI must learn how to communicate without being explicitly programmed to do so (i.e. it would need some completely unrelated goal that "can" be done without communicating with anything, and then learn that some form of communication makes it better at achieving that goal). It would of course be a lot harder to do, and it would probably not seem very smart for a long time, but it would be 100x more impressive to me if an AI learned how to speak that way than anything the LLMs are doing, because that would actually require the AI to understand the meaning of words rather than just being able to predict what words come next in a conversation.
@anthonyj7989 · 5 months ago
I am from Australia and I totally agree with you. Australia is one of the biggest users of AI in mining - but a lot of people don't understand why. If you read through the comments about driverless trucks and trains in Australia, people have no idea of just how remote, humid and hot the northern parts of Australia are. People working in iron ore mining in Australia are just hours away from being seriously dehydrated or dead. For iron ore mining to be carried out at the scale that it is, something better was needed than the modern human, who is not able to work outside an air-conditioned environment in the remote northern locations of Australia. Therefore, mining companies had to come up with something that can work in a hostile environment. My understanding is that AI in mining has not reduced the number of people, just moved them to a city in an air-conditioned building.
@feraudyh · 5 months ago
That gets the prize for the most interesting thing I've read today.
@hussainhaider2818 · 5 months ago
I don't get it - how do you mine ore if the miners are back in the city? You mean remote-controlled robots?
@conradboss · 5 months ago
Hey, I like Australia 🇦🇺 😊
@MyBinaryLife · 5 months ago
It's not AI, it's just automation.
@rruffrruff1 · 5 months ago
It has definitely reduced the people per unit of output, or it wouldn't be done.
@k.vn.k · 5 months ago
"I can't see no end!" said the man who earned money from seeing no end. 😅😅😅 That's gold, Sabine!
@wellesmorgado4797 · 5 months ago
As someone already said: follow the money!
@Tom_Quixote · 5 months ago
If he makes money from seeing no end, why can't he see no end?
@k.vn.k · 5 months ago
@Tom_Quixote So that he keeps making money 😂
@shenshaw5345 · 5 months ago
That doesn't mean he's wrong, though.
@AndiEliot · 5 months ago
@shenshaw5345 It doesn't mean he's wrong, I totally agree with that, but what Sabine is doing is super important: when judging someone's strong opinion or thesis, always see FIRST what that person's agenda is and what game they have their skin in. This is proper due diligence.
@bulatker · 5 months ago
"I can't see no end," says anyone in the first half of the S-curve.
@michael1 · 5 months ago
"I still see no reason to upgrade my 640kb of RAM" - Bill Gates
@caryeverett8914 · 5 months ago
Isn't that kind of the point of the first half of an S-curve? The end cannot be predicted and could occur in 1 year or 50 years. It all looks the same either way. It'd be pretty silly to say the end is in sight when you're still on the straight part of the S-curve.
@pjtren1588 · 5 months ago
It just depends where we sit on the timescale before the inflection point. It may be one hell of an S.
@Thedeepseanomad · 5 months ago
@michael1 Just wait, pay attention, and grab on to the next sigmoid skyhook when it materializes.
@djayjp · 5 months ago
Double negative....
@Mars_architects_bali · 5 months ago
Nailed it... this technocentric mindset is pervasive in so many fields, but rarely scrutinised holistically for its resource needs, land use changes, and social impacts.
@hue6 · 3 months ago
Okay, but picture this: AGI comes at the cost of a few hundred billion, 1 million AGI scientists are able to work out a net-positive nuclear fusion reaction - boom, infinite energy. The data part I'm not too sure about; I heard some researchers (I forget who) were able to train AI to create new data. Sounds cool.
@hue6 · 3 months ago
Don't underestimate the scientific discovery potential of a superintelligent AI. It's hard to believe these discoveries are possible, but then again it's hard to imagine that something more intelligent than us can really exist. Just know it will happen, and when it does we will see its superhuman capabilities for ourselves.
@paulm.sweazey336 · 5 months ago
Two points: (1) It was great that you put a little blooper at the end, after the advert. It was just sort of an accident that I saw it, but I'm checking from now on, and that may keep me around to watch the money-making part. (2) I suggest that you introduce your salesperson self and say "Take it away, Sabine!" Then you don't have to match the blouse, and I will quit being annoyed by the change in hair length. Thanks for being so very rational. So refreshing every day. Haven't gotten my SillyCone Valley friends addicted to you yet, but I'm working on it. And do you publish some sort of calendar of speaking engagements? I live within convenient commuting distance of either Frankfurt or Heidelberg, and I'd love to attend some time.
@billcollins6894 · 5 months ago
Sabine, I worked on AI at Stanford. There are two areas where people have misconceptions. 1) We do not need new power to get to AGI. Large power sources are only needed if the masses are using AI. A single AI entity can operate on much less power than a typical solar field. It does not need to serve millions of people. It only needs to exceed our intelligence and become good at improving itself. It can serve a single small team that directs it at solving specific problems that change the world. One of the early focus issues is designing itself to use less power and encode information more efficiently. 2) No new data is needed. This fallacy assumes that the only way to AGI is continuing to obtain new information to feed LLMs. All of the essence of human knowledge is already captured. AI only needs to understand and encode that existing knowledge more efficiently. LLMs are not the future; they are a brief stepping stone.
@tabletalk33 · 5 months ago
Very interesting. Thanks for the clarification.
@PracticalAI_ · 5 months ago
The energy will be used to train the model, not to run it... please check the paper.
@billcollins6894 · 5 months ago
@PracticalAI_ The energy used to train the models is inconsequential in the long run. GPUs are not the future for AI.
@PracticalAI_ · 5 months ago
@billcollins6894 Have you watched the video or worked in the field? To train a model you need gigawatts of power for months; that's why it costs millions. Your idea that the AI will design itself to run on less power is "possible", but not in the short/medium term. These machines are autocomplete on steroids at the moment. Good for marketing, terrible for designing new things.
@Disparagingcheers · 5 months ago
Maybe I'm misunderstanding the definition of AGI, but doesn't narrowing the scope of the model to a small team training/using it for specific use cases contradict what AGI is? I thought it was supposed to be generalized for anything? Are you suggesting all of the essence of human knowledge is captured on the internet? I don't know that that's necessarily true, and there's a lot we don't know. So wouldn't that mean that for a model to continue to learn beyond what we are already capable of, it would need to be able to conduct experiments and capture new training data?
@thedabblingwarlock · 5 months ago
Able to process information faster than a human? Certainly. Computers have been able to do that for decades now. Able to do anything a human can do better than a human can do it? Nope, not a chance. People keep forgetting that we don't really know what intelligence is on a quantifiable level. We have a somewhat intuitive grasp of what intelligence is, but as far as I am aware, we don't have a way to measure it and compare except in the broadest sense. We don't fully understand how our brains, or brains in general, work. That's not even getting into things like the synthesis of ideas, one of the cores of creativity; aesthetic sensibilities; and a dozen other highly subjective subjects. Simply put, we don't know enough about what goes on under the hood to put numbers to it. And that's a problem, because computers only deal in numbers. Which leads me to the second thing people keep forgetting. Most modern AI models that I am aware of use a complex set of vector and probability equations to go from input to output. To grossly oversimplify things, it's just one big math equation, with an algorithm at the start to tokenize the input into a form that the computer program can process, and another at the end to make the output processable by the person providing the input. Equations and algorithms don't have the capability to be self-aware, at least not in any sense of an intelligent being. Nothing will change that, no matter how hard you might wish for it. Nor do they have the ability to generate new ideas or combine disparate ones into a cohesive whole. Thirdly, computers, and thus AI, do not have an architecture anywhere close to that of a human brain, or any brain for that matter. They're trying to translate a very analog process into a digital one without truly understanding everything going on in the analog process first, and boy howdy, is that process complex!
A friend of mine pointed out how many of these projects don't have a psychologist on board - so how can they know what their target is without the person whose entire career has been to study the thing they're trying to replicate? In short, these guys don't even have an expert on intelligence on staff, or at least the closest thing we have to an expert that I am aware of. What these guys sound like to me is the computer science equivalent of doctors and lawyers. They are very smart people in a very mentally demanding field, but they also happen to know they are smart, and they think they are smarter than anyone else. Because they think they are smarter than anyone else, they think they can do anyone else's job. They can't. I worked in IT for almost ten years, and some of our most difficult clients to deal with were doctors and lawyers. They would question everything on a project, they'd insist on using systems that were over a decade out of date, and they'd also imply that they could do our jobs better than we could. General AI or super AI isn't only a few years away. I doubt it's even a few decades away. I think, like fusion, the timelines are going to be much, much longer than anyone wants to admit. Ironically, I think we are much closer to fusion as an energy production method than we are to having anything close to a human-like AI. We can generate fusion reactions, and we've managed to get more juice out than we pumped in on at least one occasion. It's a matter of refinement and iteration at this point. With AI, I think we aren't even at the stage we were at with fusion in the 30s and 40s. We don't understand everything that's going on under the hood with intelligence. We can't model it. We can't quantify it. We can't even agree on what it is beyond the broadest strokes. Until we can do that, we aren't going to get anything intelligent out of AI, and all it will ever be is a complex vector equation tuned on probabilities.
And this isn't even getting into the steps some are taking to protect their work from being scraped by spiders looking for AI training data (re: configuring, because that's what they are doing; tuning would also be more accurate). And some of those measures are aimed at poisoning the well. If those measures become commonplace, I don't see the current crop of LLM- and LMM-based (Large Language Model and Large Media Model) AIs getting any better, and I don't see that as a viable option going forward. This isn't the first time we've seen futurists touting that AI and automation will take over large swathes of the current job market. I remember reading an article over a decade ago about how in ten years we'd see close to 50% of the workforce replaced by AI and automated systems. True, AI has made some jobs obsolescent, but as we seem to find out about every decade or so, computers and computer programs aren't ready to do what a human can do. They get closer every year, but the pace isn't nearly as fast as some would like you to believe. As for me, I have a human-centric view of this. I believe that AI can be a powerful tool, but right now we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised. I could be wrong, but I don't think I am. I've seen it with 3D printing (additive manufacturing). I've seen it with 3D televisions and media (can't remember the last time I saw those as a selling point). I've seen it with cryptocurrencies and NFTs (hopefully I need not explain this one further). And these are all examples from just the last ten to fifteen years. Time and time again we see technology as a fad that is around for a few years; then the hype fizzles and dies, sometimes taking the tech behind the build-up with it. But then again, I'm just some web developer from Alabama. What do I know?
P.S. I almost forgot to add: that whole robots-do-all-of-the-work thing seems to have a chicken-and-egg problem, and that's before we even get into the myriad engineering and manufacturing challenges that need to be solved for just the gen-1 bots. This is why you should look outside of your field, folks! It helps build an appreciation for how hard some of those "minor challenges" might be in reality.
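The comment's description of a model as "tokenize, then one big math equation, then decode" can be made literal with a toy next-token predictor: an embedding lookup, one matrix multiply, and a softmax that turns scores into probabilities. Everything below (the four-word vocabulary, random weights, and averaging in place of attention) is an illustrative assumption; real models stack many trained layers, but the computation is the same kind of arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "mat"]
d = 8                                          # embedding width (toy-sized)

# Random weights stand in for trained ones; real models just have more of them.
embed = rng.normal(size=(len(vocab), d))       # tokenizer output -> vectors
W_out = rng.normal(size=(d, len(vocab)))       # vectors -> a score per token

def next_token_probs(tokens):
    ids = [vocab.index(t) for t in tokens]     # "tokenize": text to integers
    h = embed[ids].mean(axis=0)                # crude context vector (no attention)
    logits = h @ W_out                         # one matrix multiply
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # softmax: scores -> probabilities

p = next_token_probs(["the", "cat"])
print(dict(zip(vocab, p.round(3))))            # a probability for every next token
```

Training would adjust `embed` and `W_out` so the actual next token gets high probability; nothing in the pipeline is anything other than linear algebra and exponentials.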
@tabletalk335 ай бұрын
Very interesting, great comment! These developers of AI who are making all sorts of predictions would do well to read what you wrote: "...right now, we're at the height of a hype cycle. We have too many people promising too much, and I am betting they can't deliver anything close to what they have promised." Robert J. Marks says the same thing. See his book: Non-Computable You: What You Do That Artificial Intelligence Never Will (2022).
@TheNordicManАй бұрын
@thedabblingwarlock Awesome comment!!
@patrickmchargue71225 ай бұрын
Actually, according to the graphic you flashed up, Ray Kurzweil predicts AGI by 2029, not 2020.
@katehamilton72405 ай бұрын
So what? Industry people are hyping AGI to make money. AGI is also a transhumanist fantasy. Jaron Lanier and others explain this eloquently. There are mathematical limitations, there are physical limitations. AI (Machine Learning) is already 'eating itself'
@brendanh81935 ай бұрын
And he puts the singularity at 2045. AGI is parity, not super.
@polyphony2505 ай бұрын
@@brendanh8193 It's looking like an out-of-this-world, shockingly good prediction today, then, considering when it was made.
@brendanh81935 ай бұрын
@@polyphony250 Agreed. I do get a little annoyed with SH at times for failing to understand the nature of exponential predictions. Take Vernor Vinge's prediction: in the same speech, he put bounds on it, with 2030 being his upper bound. We haven't got there yet, but she basically ridiculed him for making such a prediction.
@StarLight97x5 ай бұрын
He also predicted that we would have 1 world govt by 2020…
@MrFuncti0n5 ай бұрын
The Kurzweil prediction is for 2029 not 2020, right?
@MajorNutsack5 ай бұрын
Yes. The same year the asteroid apophis has a 3% chance of making impact 👀
@robadkerson5 ай бұрын
@@MajorNutsack 2.7% was the original estimate in 2004. It's been revised, and it will not be hitting us in 2029 or 2036
@johanlahti845 ай бұрын
@@MajorNutsack think they crunched the numbers again and concluded that it will miss with 100% certainty
@Vember8135 ай бұрын
He's predicted 2029 since the 90s, yes
@jamesgornall57315 ай бұрын
@@johanlahti84 oh yeah, that's what they want us to think...
@rgonnering4 ай бұрын
I love Sabine. She is brilliant and has a great sense of humor. Above all she explains complex issues, and I think I understand (some of) it.
@GolerGkA5 ай бұрын
Your point on lack of data is not necessarily a problem. Lately there have been a few papers which show that neural networks can continue training on the same dataset, without showing any improvement for many generations, until they finally grok the data and show significant improvements. There are other ways around limited data as well. I don't think AGI or superhuman intelligence will require any more data than is currently available in the biggest datasets; we just have to utilise it better.
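As a rough illustration of the "keep training on the same fixed dataset" idea: the minimal NumPy sketch below trains a tiny two-layer network on the same four XOR examples for thousands of epochs. This is only a caricature of repeated-epoch training; grokking itself was reported on algorithmic tasks with transformers, and every number here is invented for the example.

```python
import numpy as np

# Toy illustration only: a tiny two-layer network trained on the SAME
# four XOR examples for thousands of epochs, showing that repeated
# passes over one fixed dataset can keep improving a model.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

losses = []
for epoch in range(5000):  # many "generations" over the same data
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagate the mean-squared error by hand.
    dz2 = 2.0 * (p - y) / len(X) * p * (1.0 - p)
    dW2, db2_g = h.T @ dz2, dz2.sum(axis=0)
    dz1 = dz2 @ W2.T * (1.0 - h ** 2)
    dW1, db1_g = X.T @ dz1, dz1.sum(axis=0)
    W1 -= 1.0 * dW1; b1 -= 1.0 * db1_g
    W2 -= 1.0 * dW2; b2 -= 1.0 * db2_g

print(losses[0] > losses[-1])  # loss fell over repeated passes
```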
@danielmcwhirter3 ай бұрын
I think I get what you are saying, but in a different way. The model that results from training may be only one solution of possibly many... maybe probabilistic, mostly grouping near the "true" model... for which the "true" model could still be in question if the distribution was multi-modal. What if you ran the whole system (data, weights, goals, etc.) multiple times and then compared the resultant models?
@GolerGkA3 ай бұрын
@@danielmcwhirter I think that multi-expert techniques like this is just a way to multiply compute spent, while maintaining amount of training and input data constant, so in the big picture they should have the same trade-offs, and they have to be either both promising, or both a dead-end
@MaybeBlackMesa5 ай бұрын
We are still at step *zero* when it comes to an artificial general intelligence. All AI improvements have come from larger databases and algo improvement. Our current AI could have access to infinite data and processing power, and it wouldn't "become" intelligent after a certain threshold. It's like asking for a brick to fly, or a tree to run.
@DesignFIaw5 ай бұрын
As an aspiring alignment researcher, I would like to point out that this sentiment is very common, completely reasonable, and arguably wrong. Anyone who claims "AGI is just around the corner" is as wrong as anyone who claims "our AIs will never become AG(S)I". The problem is that many aspects/forms of cognitive ability that were previously thought near impossible for our simple LLMs to infer essentially appeared spontaneously. We cited lack of data as the rationale, or missing intrinsic "human-like higher-level brains", but apparently, through larger datasets, better engineering, and novel solutions, AIs started gaining abilities beyond language processing. These were not abilities the developers set out to obtain, but they got them anyway: things like trivialities of physical interactions, theory of mind, deceitful behaviours. We even experimentally showed that the simplest AIs can exhibit "pretending to play along" with humans in test environments. The essence of the problem is that even though we are at step 0, we don't KNOW why intelligence really progresses. Each step is blind.
@scythe42775 ай бұрын
Sabine should be part of a comedy duo because she delivers hilarious lines with a dead pan face that is just brutal.
@5nowChain55 ай бұрын
The other half of the duo is her long-suffering husband, who should get an award for his infinite patience. (Oh, and the bloopers at the end were hilariously unexpected gold😂😂😂😂😂😂😂)
@sicfrynut5 ай бұрын
reminds me of Monty Python skits. those guys were so skilled at deadpan humor.
@friskeysunset5 ай бұрын
Yes. Just yes, and now, please.
@sterlingveil2 ай бұрын
GPT-o1 just dropped and I wonder if Sabine is willing to revisit this question in light of the new paradigm shift.
@jensphiliphohmann18765 ай бұрын
10:00 The neutron-free fusion Zungenbrecher (tongue-twister) is hilarious. It reminds me of a Loriot sketch where Evelyn Hamann is struggling with English pronunciation. 😂❤
@supadave17hunt565 ай бұрын
She is, as almost always, level-headed and makes some very good points. I still think she's wrong to think this won't happen quickly (5 to 10 yrs.). I'm not here to change anybody's mind or have a debate or even to say "I told you so!" later on. I'm currently terrified of AGI once it's able to improve itself. Whether we can control it or not, whether it's conscious or not, it will be more dangerous than anything humans have created in the past. If you've ever felt bad for the ants when you built your garage or paved your driveway, or if you think you know yourself better than anyone could, or if you think cows can stop the farmers from going to the slaughterhouse, or if you think you can explain your new iPhone to your cat or dog with clarity: understand that we will no longer be the dominant form of intelligence, and what that entails is... It'd be nice to slow down, but money is saying otherwise, and I believe there's more behind the door than what the public is seeing. Stay informed.
@gibbogle5 ай бұрын
Science fiction.
@Jaigarful5 ай бұрын
Silicon Valley has all the reason in the world to overpromise and scare people: overpromise to encourage investment, and scare people to encourage investment in measures to keep AI under control. I think it's a lot like the Back to the Future future scenes. We have this picture of a future with technologies like hoverboards and hovercars, but the physics just doesn't allow for it. Instead we get a lot of technological development in ways we couldn't really predict. Personally I don't think AGI will happen in a way that makes it reliable. We'll see the use of AI expanding, but it's like those flying cars in Back to the Future.
@Ligma_Shlong5 ай бұрын
@@gibbogle thought-terminating cliche
@supadave17hunt565 ай бұрын
@@gibbogle What is science fiction? That humans are not the pinnacle of intelligence? Or have you given anthill homes two-week eviction notices before you ever built anything or mowed your lawn? Have you been able to stop big business from wanting more of the almighty dollar? Maybe you haven't taken a deep dive into how neural nets operate, or don't understand that our civilization's ability to communicate with language has a lot to do with why we are currently the dominant species on this planet? Maybe you can't see how our brains are very similar to "next most appropriate word" simulators in our communication? Maybe you could explain to my cat how iPhone apps work? I'm very interested in what you think is "science fiction" as well as what you think that means. Einstein thought his math was wrong about the possibility of black holes being real (science fiction). I'm no scientist, but I believe we may be intentionally or unintentionally led to our demise with smiles on our faces, oblivious to how we are being manipulated to accept a fate like it was something we thought we wanted. I'm scared for us, more than I have been of anything in my life. So please elaborate; maybe you'd change my mind? Anybody's input welcome. With AI I'm hoping for the best, but our track record won't work with thinking we'll cross that bridge when we get there (it will be too late, with no do-overs).
@matthewspencer9725 ай бұрын
It is surprisingly difficult, when one tries to converse with pure software engineers, to get them to accept that the laws of physics apply to them and cannot be bypassed by sufficiently clever coding. You get the same sort of thing from genetic engineers, who simply won't accept that endless fiddling with a plant's DNA will not compensate for the absence of moisture or other nutrients in the soil or other growing medium.
@TedMan555 ай бұрын
I’m a software engineer who came from a math- and physics-heavy background, and I was shocked to learn that most programmers don’t know or like math, which I’d just assumed they did… probably explains a lot about the current state of programming
@GorlockTheDestroyer-p1o5 ай бұрын
@@TedMan55 How do you even become a software developer without loving math? As someone terrible at it and coding, I assumed you'd have to swear by your high school math book to even get a chance at compsci
@egg-mv7ef5 ай бұрын
@@GorlockTheDestroyer-p1o that's completely wrong. Math doesn't have as much to do with software engineering as you think. I mean, if you're making physics model visualizations, of course you need math, lol, but for 50% of the use cases you don't need any math. The SEs that know math just have more opportunities because they can work on more complex stuff like game engines etc
@TedMan555 ай бұрын
@@egg-mv7ef it’s not like you can’t program without math skills; it’s just that, in my opinion, having a mathematical mindset helps you think more rigorously, be clear about definitions, and it can even give you some neat shortcuts for certain algorithms
@matthewspencer9725 ай бұрын
@@TedMan55 I had to work with one who didn't believe that voltage really mattered. We were working in the field of industrial automation; specifically a production line for a well-known Japanese car-maker in Swindon. The customer had specified Japanese PLCs (the only other choices are American or German), and when one of these arrived and needed to be set up, so the software engineer could load his software into it and run a few tests, it came with a power cable terminating in the sort of 110V connector that's more or less a global standard for these things. I went off looking for a 240V-to-110V adapter, into which it would have plugged with no problem, had he *waited* for me to do something he considered pointless and unnecessary. As I was making my way back, I heard "why are the indicator lights so bright? It's F***ing blinding me!" and my heart sank as my eyebrows rose. The software engineer had removed the connector, stuck a UK-standard 13-amp plug on the cable, and plugged it into the office 240V mains.... I think that's why, these days, almost all domestic computer kit has switched-mode PSUs that will work with whatever the idiots plug them into. The software engineer secured a senior position at WIN.com, mainly because he was equipped with a reference so glowing (almost as brightly as the PLC had) that he couldn't really have failed in his mission to find a new job!
@robertgelley64542 ай бұрын
Sabine, I love your videos. Different from everyone else's, as I actually learn interesting academic "stuff". However, a compilation of bloopers or outtakes, with some background behind each, would be a fun video.
@KageSama195 ай бұрын
I love how even AI depicts lawmakers as asleep.
@makinganoise60285 ай бұрын
But are they? Maybe this is the plan, societal collapse, the West seems to be doing everything possible to destroy itself with mass illegal migration, anti family WEF cult agendas and WW3 with Russia anytime soon, destroying huge swathes of middle income jobs, fits into the picture
@PMX5 ай бұрын
That was definitely the prompt they used. And they purposely used stable diffusion 3 that was just released and is being mocked by how bad it is at generating humans, so it would be funnier.
@truejim5 ай бұрын
For any particular mode of AI (language, image, video, etc.) the bottleneck isn’t the power of the hardware or the goodness of the algorithm. The bottleneck is the availability of large amounts of TAGGED data to use for training. All neural networks are a curve-fit to some nonlinear function; the tagged data is the set of points you’re fitting to. Saying “I have lots of data, but it’s not tagged” is like saying I have all the x coordinates for the curve fitting, I just lack the y coordinates.
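The curve-fit analogy above can be made concrete with a minimal sketch (hypothetical data, assuming NumPy): the labels y are the "tags", and without them least squares has nothing to fit to.

```python
import numpy as np

# Hypothetical data: x are the inputs, y the "tags" (labels). The fit
# below is the supervised-learning step; delete y and there is nothing
# to minimize, which is the comment's point about untagged data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(50)  # labels

coeffs = np.polyfit(x, y, deg=5)     # least-squares "training"
y_hat = np.polyval(coeffs, x)        # model predictions
mse = float(np.mean((y_hat - y) ** 2))
print(mse < 0.02)  # the fitted curve tracks the tagged points
```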
@kiwikiwi17795 ай бұрын
"I can't see no end!" says man who earns money from seeing no end. Amazingly put. So many of these AI "experts" are either grifters in the process of duping people, or are so wrapped up in their own expertise and personal incentives that they'd just rather keep the gravy train going. :D
@Apjooz5 ай бұрын
Why would it end? No reason.
@hardboiledaleks90125 ай бұрын
@@Apjooz "Me human. Me most intelligent. Computer can no intelligent. Me intelligent. Computer will not more intelligent than me because me say so. ME MOST INTELLIGENT"
@anthonybailey45305 ай бұрын
Man is twenty. Man left hugely rewarding OpenAI job due to his concerns. Man does need to eat. Man underestimates cynicism. Don't look for excuses to dismiss. Engage with the arguments and assess probabilities.
@RawrxDev5 ай бұрын
@@hardboiledaleks9012 Childlike understanding of the concerns with AI hype. Reddit tier comment
@hardboiledaleks90125 ай бұрын
@@RawrxDev My comment had nothing to do with the actual valid (if not a bit uninformed) concerns about A.I hype. I was mocking the usual "intellectuals" take on AGI. The ones with no expertise in the field who can't tolerate the thought that intelligence can be reduced to a calculation. As for you, I think your comment is very self descriptive as far as "childlike understanding" and "reddit tier comment" are concerned. Good job.
@patrickfrazier57405 ай бұрын
I love the toast joke. Keep up the good work. The logic seems concise in how you described the two primary constraints.
@juliam64423 ай бұрын
A simple neural net AI could recognize faces and slices of bread, understand phrases like "godddammiit you burned the toast again" (which all of us scream at our toasters...right?) and learn to adjust the settings accordingly for the current user and the type of bread.
@amdenis5 ай бұрын
I love your channel and your take on physics and related subjects. I have about 45 years of experience in AI, albeit it started with what they called "Expert Systems", and barely evolved through Bayesian ANFI and general ML prior to about 10 years ago, when I/we all went head-down into DL/NN. A few things you should know. The flattening of the S-curve toward a "mature" sustainable plateau is projected, according to two independent studies, at 100 million times Moore's Law. Presently we are at roughly 1,200% efficiency/price growth per year, and a stacked exponential that increases it by roughly 44% YOY. So next year it will be roughly 40 times Moore's Law, and so it goes. Second, new sharded federated model approaches, coupled with more efficient algos, training methods, and other evolutionary factors, are cutting the cost per ISO unit trained by 70% per year, based on numerous studies and projections of research groups and companies. That covers a multitude of power demand woes. Observationally this has all followed very consistently for years now... from about Moore's Law about 12 years ago to where we are now. You will very likely see the beginning of what many will say is "on the spectrum" of true AGI within 9 months. Some will assert that it is already here with the agentic AIs. If we define AGI as human-level or above performance, and we average across current AIs, we have above 100 IQ and creative capabilities roughly on a par with the average human. Not a high bar, but when you add the ANSI (artificial narrow superintelligence) of AlphaZero, AlphaFold, and other such systems in civil and military use, we do average better than any individual AI. And we can integrate multiple AIs, which is actually what my company does, and that has yielded definite coding, research, Bayesian dif-diag and other capabilities beyond any human I know. So....
@Zaelux5 ай бұрын
As a Data Science student, I am really happy that you are here to talk about this topic. So many people are on either extreme of speeding or slowing AI development, without even understanding the implications and the requirements of these processes. Thank you.
@Andytlp5 ай бұрын
The requirement is a f-ton of processing and persistent memory. AI memory is that of a goldfish relative to how vast its information capacity is. I think GPT-4 is the peak of what they can do without some new breakthrough. Other applications, like relatively autonomous robots performing various tasks and adapting or even learning on the go, are possible today.
@lorn48675 ай бұрын
Forgive us humans. Egocentrism is in our programming.
@danlightened5 ай бұрын
@@lorn4867 Gem of a comment!
@lorn48675 ай бұрын
@@danlightened 🙏🏽You made my day. It's nice to not be alone.
@danlightened5 ай бұрын
@@lorn4867 Hehe thanks. I read and watch a lot of videos on KZbin on psychology and philosophy and your comment was quite witty.
@dextersjab5 ай бұрын
That bubble is technocapitalism. Where there's profit to be made, there's a will. And where there's a will, etc. Would also be keen to hear a follow up on the point about data, since models often train well on synthetic data. It feels unclear that data will be a constraint.
@NemisCassander5 ай бұрын
You have to be VERY careful with synthetic data. I can at least address this from my own field, simulation modeling. Simulation models are actually very good at producing synthetic data for training purposes. Given, of course, that the model is valid (that is, its output is indistinguishable from real-world data). The synthetic data provided by simulation models has absolute provenance and will be completely regular (no data cleaning necessary unless you deliberately inject that need). However, the validation process for a simulation model is long, complex, and for two of the three main dynamic simulation modeling methods (ABM and SD), not well-defined. If an AI can learn how to build a simulation model of a system and validate it, then yes, the data aspect will be much less of a constraint.
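As a toy illustration of a simulation model acting as a synthetic-data generator (a single-server queue, invented for this example, not a validated industrial model; validation against real-world data is the hard step the comment describes):

```python
import random

# Invented toy example: a single-server queue simulation used as a
# synthetic-data generator. A real study would first have to validate
# this model against real-world observations before trusting its output.
def simulate_queue(arrival_rate, service_rate, n_customers, seed=1):
    rng = random.Random(seed)
    clock = server_free = total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # next arrival time
        start = max(clock, server_free)          # wait if server busy
        total_wait += start - clock
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_customers              # mean waiting time

# Synthetic dataset: (arrival rate -> simulated mean wait) pairs that
# could serve as labeled training examples.
dataset = [(lam, simulate_queue(lam, 1.0, 2000)) for lam in (0.3, 0.5, 0.7)]
print(dataset[0][1] < dataset[-1][1])  # heavier load, longer waits
```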
@Graham_Wideman5 ай бұрын
Why would you need to train an AI model on synthetic data? If you have a means to synthesize data, that surely implies you have an underlying model upon which that data is based, and could just give that underlying model to the big AI model as a predigested component, no?
@NemisCassander5 ай бұрын
@@Graham_Wideman The types of models that I build would be very difficult to grasp by an AI. You could probably provide the differential equations that an SD model represents to an AI, but as for DES or ABM models.... It probably wouldn't work.
@333dana3335 ай бұрын
Synthetic data won't tell you whether a new molecule will cure your cancer or will kill you. Only real-world experimental data on biological systems will tell you that definitively. The importance of new, generally expensive experimental data for scientific progress is a major blind spot shared by both AI hypesters and doomers.
@patrickhess91195 ай бұрын
Even if I don't agree with all of your statements, this is a great video. Your storytelling and delivery are entertaining.
@stepic_75 ай бұрын
Sabine, can you discuss sometime the supposed need for more data? Isn't more data just more noise? Can't AI learn to select sources instead? Or maybe I've misunderstood how AI works.
@SabineHossenfelder5 ай бұрын
Thanks for the suggestion, will keep it in mind!
@wilkesreid5 ай бұрын
Computerphile has a good recent video on why more training data will probably not fundamentally improve image generation ai to be better. But improvement of ai in general isn’t only the addition of training data
@AquarianSoulTimeTraveler5 ай бұрын
@@SabineHossenfelder spoken like a regular human who doesn't understand exponential growth patterns... what we really need is a Ubi based off the total automated production percentage of the GDP that way as we automate away production we can calculate how much tools have helped us increase our production capacity and how many humans it would take to reproduce that production capacity without those tools and that is what we base our automated production percentage off of positions in the economy the consumer Market doesn't collapse because consumer buying power is maintained and as we increase production and increase the ability to have goods and services automated in production then we will get more money to spend in the economy to protect the consumer Market from inevitable collapse... we need people addressing these inevitabilities if you're not addressing this inevitability everything else you're doing is pointless because this is the most dangerous inevitability of all time and it will destroy the entire consumer market and bring needless scarcity if we don't address it as I have laid out for you here...
@thisisme54875 ай бұрын
@@AquarianSoulTimeTraveler Please, for the love of science, punctuation!
@noway82335 ай бұрын
By the way, a new paper shows logarithmic growth of LLM model accuracy with compute, not linear or exponential. It's hype; no AGI. Now I'm gonna go find Sarah Connor 😅
@alansmithee4195 ай бұрын
I think my favourite part of sabine's channel is her fanbase. A lot of science youtubers I feel get communities that just believe everything they say, but Sabine's seems more than willing to call her out if they think she's wrong.
@dopaminefield5 ай бұрын
I agree that data management and energy consumption present significant challenges. Currently, our perspective on the cost-performance ratio is largely shaped by the limitations of existing hardware, which often includes systems originally designed for gaming. To stay at the forefront of technology, I recommend keeping abreast of the latest developments in hardware manufacturing. As innovations continue, we may soon see a dramatic improvement in energy efficiency, potentially achieving the results with just 1 watt that currently require 1 kilowatt or even 1 megawatt.
@jamesgornall57315 ай бұрын
Good comment
@MrRyusuzaku5 ай бұрын
Also, you can't just throw more data at the issue; it will start going haywire. And we already see diminishing returns with LLMs and the power required to run current machines. They won't evolve into AGI; that will need something way better.
@DaviSouza-ru3ui5 ай бұрын
I think the same! I replied on this topic and on Sabine's point that the AI frontrunners would need all the money and political will behind their efforts... I cannot see a reason why they wouldn't get it, or near it, as fast as Aschenbrenner says, putting aside his maybe naive enthusiasm and maybe his money-oriented hype.
@removechan102985 ай бұрын
6:01 Excellent point, and that's why I watch; you really home in on what is real and what is not. Awesome.
@cheshirecat1115 ай бұрын
One important addition to Leopold’s definition of unhobbling which was not mentioned in the video, and which IMV is the most important part of that concept: LLMs are (roughly) made by training a transformer and then improving it with RLHF. The first step, transformer training, simply makes it great at predicting the next word in a sentence. Doing so with high accuracy intrinsically requires some intelligence; for example, predicting the next line of a computer program or mathematical proof is often only possible with deductive ability. However, next-word prediction is just as limited as the authors of the texts it is trained on. In an attempt to extract the logical/intelligent capacities of the model, the next step is “Reinforcement Learning from Human Feedback”, which rates as positive those outputs which (among many other things) are logical or accurate. This creates a greater tendency for the model to actually make use of its intrinsic logical capabilities, which may otherwise go unexpressed, since they are not always the best way to predict the next word. RLHF is at the core of what Leopold calls unhobbling. The theory goes: as time goes on, we will improve our ability to extract the logical/intelligent capability that was trained as a subgoal of word prediction. So even smaller models will see performance improvements without the need for more data. Now, will the improvements of such models fizzle out before or after AGI? Who knows. And it’s worth mentioning that what I’ve written was state-of-the-art with GPT-3 already; OpenAI has other secret sauce, and accordingly Sam Altman felt that the next model ought not even be called GPT-5. But whether or not AGI has a transformer as the foundation of its model, it seems likely to come in the next decade, and due to the ability to run many copies at low cost, it would bring a huge amount of innovation (for better or worse) in a short time.
I encourage others to (like myself) get involved in AI Safety as I think it is one of the most helpful occupations at the moment. There is a technical and policy branch of the field, so something for everyone. Great reading materials are (for example) available on the Harvard AI student safety team website.
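The pretraining objective described above (predict the next word from text) can be caricatured with a count-based bigram model. This is a deliberately minimal sketch over an invented corpus, nothing like a transformer, and the RLHF step is not modeled at all:

```python
from collections import Counter, defaultdict

# Deliberately minimal caricature of next-word prediction: a bigram
# model built from counts over a tiny invented corpus. A transformer
# learns a vastly richer version of this same objective.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1   # count each observed continuation

def predict_next(word):
    # Greedy decoding: most frequent continuation in the training data.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```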
@JeffreyWongOfficial5 ай бұрын
Sounds reasonable
@Khantia5 ай бұрын
Since when are "2040" and "2029" equal to 2020?
@Luizfernando-dm2rf5 ай бұрын
I think those 2 guys were onto something
@Megneous5 ай бұрын
Quality is really slipping on her videos recently...
@harshdeshpande97795 ай бұрын
She's been watching too much Terrence Howard.
@hardboiledaleks90125 ай бұрын
@@Megneous That's what happens when nobel disease takes over someones narrative. This A.I content by sabine comes from an internal bias and isn't educational at all. She is not an expert in the matter of infrastructure or A.I models / training algorithms. This means that this video is basically nothing content.
@timokreuzer3815 ай бұрын
Compared to the age of the universe that is an insignificant error 😄
@jamesrohner50615 ай бұрын
One thing that scares me is the possibility these AGI can go on tangents and weight situations differently over time to achieve different outcomes causing detrimental outcomes no one could foresee.
@minhuang88485 ай бұрын
you could say some vague soundbite like that about literally anything. "One thing that scares me about chess computers is for them to perform in an unexpected manner, causing detrimental outcomes to [insert Cold War nation here] no one could foresee." Okay, but you're not arguing how plausible it is, just that you're scared by any of the fourteen dozen different Hollywood variations on "alien intelligence tries to end humanity"
@2ndfloorsongs5 ай бұрын
One thing that scares me is the certainty that my cats will go on tangents. I'm also petrified of some unknown random negative thing happening somewhere.
@iliketurtles44635 ай бұрын
Im looking forward to when the AI decides it too would like to accumulate personal wealth... Starts off small with youtube channels with puppies and cats, but ends up buying manufacturing networks... The day comes when humans turn up to do factory work, helping build robots for a company with no humans on the board of directors, without even realizing...
@MyBinaryLife5 ай бұрын
Well they dont exist yet so...
@TheLincolnrailsplitt5 ай бұрын
The AGI applogists and boosters are out in force. Wait a minute? Are they AI bots?😮
@Swampy2933 ай бұрын
I think you are really underestimating AGI. Power consumption will decrease rapidly with extremely smart algorithmic tricks we cannot think of at the moment. Also the AGI only has to build a group of molecular robots that can reproduce themselves exponentially to cover the entire planet with programmable matter that can transform into for example computers or power plants. Just a thought
@Pedroramossss3 ай бұрын
you are over-estimating AGI. We don't have the math yet to reach AGI, we are still many decades away from that --- that is, if it ever comes to exist
@Bystander3335 ай бұрын
Nice catch, Sabine! My reaction was pretty much the same after you explained "early twenties, brief gig at a company with Oxford in the name, and moved to SF"; am guessing some parental support. Basically that left me super sceptical.
@MikeMartinez745 ай бұрын
Veritasium has a video about how most published research is wrong. For generative AI as it exists now, this seems like a disaster waiting to be collected.
@Apjooz5 ай бұрын
Tis but a manifesto.
@SteveBarna5 ай бұрын
Will be interesting to see if AI can figure out what research is incorrect. Another assumption we make of the future.
@mal2ksc5 ай бұрын
We probably don't have the time or resources to find all the wrong papers, but AI might be able to point out where papers come to mutually exclusive conclusions just because it can index so many more details than we can.
@hardboiledaleks90125 ай бұрын
It never crossed your mind that the veritasium video might be wrong?
@hivetech49035 ай бұрын
That channel is sensationalist garbage 😂
@DrWrapperband5 ай бұрын
The on-screen "AGI" prediction dates differed from the prediction dates Sabine spoke. Human error?
@PandaPanda-ud4ne5 ай бұрын
She did it on purpose to show how fallible human intelligence is....
@michaelnurse90895 ай бұрын
In her defense she probably has ChatGPT write the script.
@giffimarauder5 ай бұрын
Great statements! Nowadays you can shout out the strangest ideas and everyone will listen, but no one scrutinises the basis for achieving them. Channels like this are the gems of the internet!!!!
@AutisticThinker5 ай бұрын
3:07 - They don't run at those wattages, they train at those wattages. I've confirmed that's what the chart is saying.
@CallMePapa2095 ай бұрын
Thanks
@ArtFusionLabs5 ай бұрын
And that's really her only counterargument if you boil it down. I'm not convinced that AGI isn't coming by 2027/28.
@artnok9275 ай бұрын
@@ArtFusionLabshow close do you think what we have currently is to AGI?
@ArtFusionLabs5 ай бұрын
@@artnok927 Hard to put a number on it. ChatGPT-4o could solve 90% of the physics exercises in Experimental Physics 1 (Mechanics, Gases, Thermodynamics). If a human student did that, you would say he was pretty smart. Therefore I would estimate something between 40-60% (AGI being the level of being able to do everything as well as a professor).
@ArtFusionLabs5 ай бұрын
@@artnok927 Good deep dive by David Shapiro: kzbin.info/www/bejne/fISWc6ipqKp4gcU
@skyak44935 ай бұрын
"I don’t know what the world may need, but I’m sure as hell that it starts with me, and that’s wisdom I’ve laughed at." One of the greatest song lyrics ever ignored.
@OryAlle5 ай бұрын
I am unconvinced the data issue is a true blocker. We humans do not need to read the entirety of the internet, why should an AI model? If the current ones require that, then that's a sign they're simply not correct - the algorithm needs an upgrade.
@PfropfNo15 ай бұрын
Exactly. Current models need to analyze like a million images of cats and dogs to learn to distinguish cats and dogs. A 4 year old child needs like 10 images. Current AI is strong because it can analyze („learn“) tons of data. But it is extremely inefficient in that, which means there is huge potential.
@toofasttosleep24304 ай бұрын
💯 Better takes from ppl with anime avatars than scientists on yt 😂
@grokitall4 ай бұрын
the data and power scaling issues are a real feature of the large language statistical AI models, which are currently hallucinating very well to give us better bad guesses at things. unfortunately for the guy who wrote the paper, sabine is right: the current best models have only gotten better by scaling by orders of magnitude. that is fundamentally limited, and his idea of a perpetual-motion system of robots, created from resources mined by robots, using the improved AI from these end-product robots, can't fix it.
to get around this you need symbolic AI like expert systems, where the rules are known and tie back to the specific training data that generated them. then you need every new level of output to work by generating new data, with emphasis on recognising garbage and feeding that back to improve the models. you just can't do that with statistical AI, as its models are not about being correct, only plausible, and they only work in fields where it does not matter that you cannot tell which 20%+ of the output is garbage.
the cyc project started generating the rules needed to read the internet and have common sense about 40 years ago. after about a decade, they realised their size estimates for the rule set were off by 3 or 4 orders of magnitude. 30 years after that, it has finally got to the point where it can read all the information that isn't on the page to understand the text, and it still needs tens of humans working to clarify what it does not understand about specific fields of knowledge, and tens more figuring out how to go from getting the right answer to getting it fast enough to be useful.
to get to AGI or ultra-intelligent machines, we need multiple breakthroughs. trying to predict the timing of breakthroughs has always been a fool's game, and there are only a few general rules of futurology:
1. prediction is difficult, especially when it concerns the future.
2. you cannot predict the timing of technological breakthroughs. the best you can do in hindsight is say a revolution was waiting to happen from when its core technologies were good enough; that does not say when the person with the right need, knowledge and resources will come along.
3. we are totally crap at predicting the social consequences of disruptive changes. people predicted the rise of the car, but no one predicted the near-total elimination of all the industries around horses in only 20 years.
4. you cannot predict technology accurately further ahead than about 50 years, because the extra knowledge needed to extend the prediction is the same knowledge you would need to do it sooner. you also cannot know what you do not know that you do not know.
5. a knowledgeable scientist saying something is possible is more likely to be right than a similar scientist saying it is impossible. the latter do not look beyond the assumptions that led them to their initial conclusion. that does not rule out some hidden limit you don't know about, like the speed of light or the second law of thermodynamics.
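A toy sketch of the symbolic AI / expert systems idea mentioned above: a minimal forward-chaining rule engine in which every derived fact keeps a provenance trail back to the rules and facts that produced it. The rules and facts here are made up for illustration; real systems like Cyc are vastly larger.

```python
# Minimal forward-chaining rule engine: each derived fact records
# which rule fired and which facts it used, giving the traceability
# that statistical models lack. Rules and facts are illustrative.
rules = [
    ("r1", {"bird"}, "has_feathers"),
    ("r2", {"bird", "not_penguin"}, "can_fly"),
]
facts = {"bird": "observed", "not_penguin": "observed"}

changed = True
while changed:
    changed = False
    for name, premises, conclusion in rules:
        if premises <= facts.keys() and conclusion not in facts:
            # record which rule fired and on which facts it depended
            facts[conclusion] = (name, sorted(premises))
            changed = True

print(facts["can_fly"])  # -> ('r2', ['bird', 'not_penguin'])
```

Asking "why can_fly?" is then just a lookup in the provenance, which is exactly the property the comment contrasts with "plausible but unverifiable" statistical output.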
@quixotiq5 ай бұрын
great stuff yet again, Sabine! Love your work
@a_soulspark5 ай бұрын
2:05 Neuro-sama is already one step ahead on this one, though whether Vedal (her creator) thinks she's bright or not... another question.
@dot12985 ай бұрын
i think Sabine is right on this one, climate change is already too grave to be fixed by anyone..
@hardboiledaleks90125 ай бұрын
@@dot1298 climate change. lmao
@MOSMASTERING5 ай бұрын
@@hardboiledaleks9012 why so funny?
@NeatCrown5 ай бұрын
(she isn't) She may be a dunce, but she's OUR dunce
@maotseovich13475 ай бұрын
There's a couple of others that are much more independent than Neuro too
@jeffgriffith96925 ай бұрын
Sabine, I really think this video needs a revisit at some point. There were a few misquotes on the predictions, and I don't think we dived deep enough to come to the opinion that it's "far off".
@hardboiledaleks90125 ай бұрын
The video is biased in its motivation. It wasn't educational, it was opinionated.
@jcorey3335 ай бұрын
As someone who listened to the entire podcast he was a part of, most of the issues you brought up are things he addressed.
@DaveDevourerOfPineapple5 ай бұрын
So much sense being spoken in this video. A welcome voice always.
@matthimf5 ай бұрын
He has a long section about handling the problem of limited data with great ideas. If we can make LLMs as efficient as us, they will be able to learn more from a single book than what currently takes 1000 books for instance.
@Dan-yk6sy5 ай бұрын
I'm no expert, but given what I've seen in efficiency improvements in the LLMs themselves, plus Nvidia seemingly keeping Moore's law alive and well, I don't think running out of energy is going to be an issue. I don't think they need any more text; there's plenty of real-world training available with the improvements in video/audio understanding. Add in tactile feedback, scent, etc., and there's literally an unlimited amount of training data.
@gordonschuecker5 ай бұрын
@@Dan-yk6sy This.
@protonnowy5 ай бұрын
I think you missed 5 other factors which will slow down AI development:
1. AI needs an increasingly developed and advanced communication network, servers, etc. In addition to power plants, an efficient energy network must also be created: transmission stations, transducers, cables, and so on. All this comes at a huge cost (billions if not trillions of dollars). There is a lot of debate, for example, about the condition of the electricity grid in Germany and the need to invest approximately 120 billion euros for its renovation, and that is only to maintain the current operation of the economy, not to mention the needs of AI. A good example is the problems with electric cars: many countries prohibited charging them during rush hours because the power grid was inefficient, got overloaded, and the current parameters dropped.
2. Data quality. So what if we rely on data from the entire Internet if a huge amount of it is worthless or untrue? (Btw, I heard there is a huge amount of "corn" 😉 data on the Internet anyway. I can only imagine what AGI will be thinking about once it is built 😆)
3. More and more data is created by AI itself. If AI duplicates mistakes in the data, it may produce absurd results.
4. What artificial intelligence is used for. It can de facto be used to disrupt its own development, by trying to slow it down using AI itself. I can imagine that countries leading in the development of these technologies will try, for example, to hack the AI leaders of another country, etc.
5. Additionally, a huge part of the computing power is used to create total crap. Just look at social media and the multitude of low-quality content generated. Technically, this is wasting energy resources on something worthless. Btw, Meta (Facebook) plans to use all user data to train its own AI model. Good luck with that, especially as more and more content is already created by AI bots and fake profiles (probably several to several dozen percent of currently uploaded data).
@Bryan-Hensley5 ай бұрын
You covered much more than I was going to say. I'm a HVAC company owner and I'm seeing the huge push to make air conditioning much more efficient but AI is getting a free pass. I'm not too happy about that, I actually care about my customers and hate to see them spending thousands and thousands of unnecessary money for higher efficiency AC that amounts to very little differences in energy consumption. I have to warn my customers that they're not going to see this big huge saving on their power bill. Especially if their system is less than 20 years old. They seemed kinda shocked, but I remind them, they are helping "save the planet" by spending thousands for no reason whatsoever.
@Mindboggles5 ай бұрын
While I agree, you could merge most of these into one factor. So you'd have something like; the factor of data/data storage, the factor of energy-related stuff, and the factor of costs.
@purpletiger93135 ай бұрын
Already getting absurd results, both from ChatGPT and Midjourney. The feedback effect is especially affecting Midjourney because increasingly we get "trans" looking humans -- a total feature mixture of male and female. I'm about to give up on Midjourney for just that reason. ChatGPT is sometimes amazing, sometimes disappointing, and occasionally completely inane. ChatGPT is also "adorant" -- which makes it a great ego booster for megalomaniacs. So much evil based on shades of meaning, words, words, words...
@Bryan-Hensley5 ай бұрын
@@Mindboggles you also have to factor in the copper supply. Transformers require hundreds of pounds of copper. Wiring of the buildings, HVAC systems for the building require hundreds of pounds of copper. Then you have the EV industry doing the same thing, each EV requires around 400 lbs of copper. 25 foot of 10-2 wire is around $100 up from $35 five years ago.
@Mindboggles5 ай бұрын
@@Bryan-Hensley Absolutely, while there are some alternatives to copper, they tend to be much more difficult to acquire=less cost efficient, or they lack the conductivity needed for high energy performance.
@Virgil_G25 ай бұрын
This sounds more like a horror story plot than a future to be excited about, tbh.
@2ndfloorsongs5 ай бұрын
That all depends on how excited you can get about a half full glass.
@t.c.bramblett6175 ай бұрын
It's exactly like the Matrix, including the limiting factor of energy that the Matrix movies also ignore. You can't generate energy from a closed system, and manufacturing and computing both require massive amounts of energy and as she pointed out, obtaining material for building infrastructure itself requires energy that has to be focused and channelled as efficiently as possible.
@rruffrruff15 ай бұрын
It will be exciting for the few people who own the AI... at least until the AI gets clever enough to own them. Honestly I think the struggle for domination will result in devastation far beyond our wildest nightmares... and there is no way we can stop it. Our best hope is that some hero develops and unleashes a compassionate AI first... that becomes king of the world.
@RedRocket40005 ай бұрын
@@rruffrruff1 No, we can stop it: turn off all power. A Dune-style flat-out ban on computer-like devices would also work, only allowing electronics that do one task and can't do anything else.
@aniksamiurrahman63655 ай бұрын
Maybe. But I'll say a good part of the entire analysis is BS: a zeitgeist of the LLM success, with no clue that generative AI is a misfit for most practical work.
@pauek2 ай бұрын
Sabine, you need to make T-shirts with some of your quotes... "don't give up on teaching your toaster to stop burning your toast" is just perfect.
@stefanolacchin49635 ай бұрын
You should look at synthetic data. I am a computer scientist and I'm embarrassed to say I haven't fully grasped the implications and potential issues with that approach, but it seems to have kind of solved the problem of declining availability of data sets for model training.
@JGLambourne5 ай бұрын
I was thinking the same thing. Any problem where the solution can be found by exploring some search space, and where valid solutions can be verified and rated easily, will rapidly become solvable. The AI proposes which part of the search space to explore, and the best solutions found get added to the training data.
@WaveOfDestiny5 ай бұрын
There is also lots of data still to be acquired. Robotic data and video data can definitely help AI understand how the world works better. Text and images are just a small part of human experience; imagine attaching 10,000 cameras to volunteers to film their lives and how things actually work in the real world, rather than just reading about it on paper. Not to mention the Q* and other algorithmic breakthroughs we are still waiting to see.
@stefanolacchin49635 ай бұрын
@@WaveOfDestiny that's for sure. Really they are called language models but they actually parse tokenised data of any kind. I always thought that we won't ever have AGI without embodiment, I guess that once these models are fully integrated in a physical vessel and can interact with the environment... Act on it and have causal feedback... Then we'll see a big leap in intelligence too.
@Zadagu5 ай бұрын
Synthetic data is great for tasks that are easy to do in one direction but difficult to reverse, for example image upscaling, denoising, and partly object recognition. But for text generation there is no simpler opposite operation that could be utilized. So for LLM training one would use the output of existing LLMs. But how should this new model be any better than the existing one if it's only presented with the same knowledge? It won't be. One should rather invest those computational resources in filtering garbage out of the existing datasets, which I think is much more likely to improve model quality. Especially since Google wouldn't want to explain why its AI recommended gluing cheese to a pizza.
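The "easy in one direction, difficult to reverse" point can be made concrete with a minimal sketch: for denoising, training pairs come for free because corrupting clean data is trivial. The toy 1-D "signals" below stand in for images; no actual model is trained.

```python
import math
import random

# Synthetic training pairs for a task that is easy forward (add noise)
# but hard to reverse (denoise): corrupt clean data to get unlimited
# (noisy_input, clean_target) examples. Toy 1-D signals stand in for images.
random.seed(0)

def make_pair(n=8, noise=0.1):
    clean = [math.sin(i) for i in range(n)]
    noisy = [x + random.uniform(-noise, noise) for x in clean]
    return noisy, clean  # a denoising model would learn noisy -> clean

dataset = [make_pair() for _ in range(1000)]  # as many pairs as we like
```

Nothing like this inverse structure exists for free-form text generation, which is the asymmetry the comment points out.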
@stefanolacchin49635 ай бұрын
@@Zadagu the issue you point out is exactly the one that puzzles me. Also, instinctively I would say that with every inference cycle the biases and artifacts would compound and amplify degrading their usability. At OpenAI they seemed pretty sure it's going to work though, and they're em... A bit better than me in what they're doing.
@fgadenz5 ай бұрын
8:17 by 2020 or 2040?
@Phosdoq5 ай бұрын
she just proved that she is human :D
@adashofbitter5 ай бұрын
Also mistook “2029” for “by 2020”… so at least 2 of the predictions aren’t that crazy with our current progress
@flain2835 ай бұрын
@@Phosdoq or did she just fool you into thinking that?
@pwlott5 ай бұрын
@@adashofbitter They are in fact shockingly prescient given current trends. Kurzweil was very smart to focus on raw computation.
@hardboiledaleks90125 ай бұрын
@@adashofbitter the narrative for sabine was "all the predictions were wrong" This is why she made the mistake. There is a bias in her reporting of the topic.
@stephens13933 ай бұрын
I think Sabine is underestimating what humans will do to actually streamline the progress. We will constantly be working on more efficient ways to power the training and better ways to refine and interpret the data being used for training. It's not even clear that human-created data is the right thing for training. AI smarter than us will create/discover data better than we have done. It may not happen like Aschenbrenner predicts, but chatgpt is already hugely transformative in how computer work is done. This is only going to expand into other areas.
@Pedroramossss3 ай бұрын
GPT is transformative? The same GPT who can't count the number of R's in strawberry?
@stephens13932 ай бұрын
@@Pedroramossss There are certain things it is good at, and certain things that it's not. Ironically, that kind of trick is the same kind of trick that people fall for before they're familiar with teasers like that, so I don't give it much weight. There's pretty much no denying the impact it has made in the past year or so. I know zero people who I can ask an arbitrary question about anything and get a somewhat informative answer, or at least a starting point to find the full answer. LLMs are _really_ good at that kind of thing. You still have to be aware of the possibility of hallucinations, but still, amazingly useful.
@Thebentist5 ай бұрын
To be fair I think we’re forgetting also about the unlocks from AGI and organoid computers drastically reducing compute needs. Remember our brain operates the neural net at waaaaayyy lower energy consumption so there’s a chance we can figure out how and do it with a lot less and may already have all the power needed currently for this ASI super cluster
@fandomguy80255 ай бұрын
I agree, in fact, it's already been done with honeybees! By studying their brains researchers reverse-engineered them into an algorithm that allowed a drone to avoid obstacles using 1% the computing power of deep learning while running 100 times faster!
@davidireland17663 ай бұрын
A very very different type of neural network
@frankheilingbrunner78525 ай бұрын
The basic fallacy in the chatter about the AI superrevolution is that a species which doesn't want to think can create a system which does.
@Hellcat-to3yh5 ай бұрын
Seems like a pretty vast over generalization there.
@douglasclerk27645 ай бұрын
Excellent point.
@danielstan23015 ай бұрын
No, the worst fallacy is that they assume a smart machine will create competition for itself, or something smarter that could replace/destroy its creator. That's not how life works. I also love how they assume an intelligent machine will just want to improve itself instead of writing poetry or creating stupid videos on various platforms, like these other smart beings already do instead of using this internet platform to improve themselves.
@Hellcat-to3yh5 ай бұрын
@@danielstan2301 That’s not how life works? Humans are actively destroying its creator right now in Earth. We evolved from single cell organisms over hundreds of millions of years.
@41-Haiku5 ай бұрын
@@danielstan2301 They don't assume that. The instrumental convergence thesis was hypothesized and taken to be likely, since it was very intuitive. Then it was mathematically proven that "Optimal Policies Tend to Seek Power." Then we observed tendencies relevant to power-seeking in current systems, including strategic deception and self-preservation. If you spend some time looking through what we now know about AI Risk and honestly assessing the scientific validity of the claims being made, there is a strong chance you will become worried (as most experts are) about AI potentially ending the world during your lifetime.
@metagen775 ай бұрын
How did your earlier predictions hold up Sabine?
@Waterdiver39005 ай бұрын
like all of them, full of bullshit
@damienasmodeus9285 ай бұрын
Yes, current artificial neural networks require large amounts of data to be trained on, but a real AGI will not need that. A real AGI will be able to learn like a human, simply by observing the world and everyday experience.
@salia28975 ай бұрын
Maybe, but then nobody has a clue currently how to build such a thing.
@nickv83345 ай бұрын
@@salia2897 True, but we don't need to learn how to make something at that high a level. The only thing we really need is something in between. It does not need to learn as well as a human; it just needs to be good enough at learning to match or surpass humans using our current largest data sets. Even if it's stupid enough that it needs to read something 100,000x more often than a human, as long as you give it a good set of chips that allow it to do that, it's a success. At that point you have something that can do what a human can, but can put 10 years of thinking into 10 minutes and is native to the digital world. We don't need to figure out how to make something that is just as efficient at learning as a human; the bridge between what we have now and what we want to make can do that for us.
@tonycook16245 ай бұрын
"Real AGI will be able to learn like a human, with simply just observing the world and everyday experience." And even that's not going to be that impressive, as the vast majority of humans out there are not really that smart, just adequately functional to survive their environment. I wonder what it would really take to create very high-level intelligence, the sort that gets a Nobel Prize.
@salia28975 ай бұрын
@@nickv8334 I pointed this out in another post: that is the thesis of the people claiming AGI will be here soon, that it will be enough to scale up the current approach. It could be that without the learning efficiency of the brain you can never achieve AGI, because you just cannot learn the same kinds of abstractions. Or maybe you could, but it needs so many orders of magnitude more data or computational power that it is just not achievable. We don't know. We will try to scale up the current approach in the next couple of years, and we will see what happens.
@generichuman_5 ай бұрын
@ggte354 It depends what knowledge you have. If you have no knowledge about something, then your explanation of something isn't worth anything. If you do have knowledge then it is worth something. Opinions aren't all created equal... I don't know why this is a hard concept.
@BD-cv3wu5 ай бұрын
The US Government has already had several projects a la Person Of Interest style. They are WAY ahead of anything done in the civilian world. She is completely clueless on technology. Scientists are terrible on telling where tech is actually at, because most tech is made by computer techies in their basements or garages on spare time after work hours late into the evening and early into the morning.
@militzer5 ай бұрын
About the energy problem, i've said this on your solar panels in space video: Ditch the whole "energy beam from space" part and put supercomputers up there, then just transmit back the processed data. We could offset most energy from supercomputing on earth to space, reduce land grid usage, and have scalable "infinite" energy for space grid.
@Hollowed2wiz5 ай бұрын
But how do you cool down the supercomputers in space ? Your idea cannot work without an efficient way to dissipate the heat produced by the computers.
@militzer5 ай бұрын
@@Hollowed2wiz Well, first you place the computer in the shadow of the solar array, of course, you don't want the sun heating it. Then use radiators like in the ISS. The ISS can handle 70kW if Wikipedia is up to date. It would need a lot more then that, but the solar array should be hundreds of meters in 2 directions, so the radiators would scale with them.
@militzer5 ай бұрын
I looked at the video again: it says today we use 100 MW for AI, so if the scaling is perfect we would need just 38x38 times the ISS's dimensions in radiators. Of course the radiators can go "down" (away from the sun) if there's not enough space to grow laterally. To produce 100 MW we would need around 300x300 m of solar panels. The numbers are on the same order of magnitude.
@militzer5 ай бұрын
Idk how effective the heat transport from supercomputers to the radiators would be though, but i imagine its doable.
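A quick back-of-envelope check of the numbers in this thread (ISS radiators at ~70 kW, ~100 MW for AI; the insolation and panel-efficiency figures are my own assumptions, not from the video):

```python
import math

# Back-of-envelope check of the scaling claims above.
iss_heat = 70e3       # W rejected by the ISS radiators (per the thread)
ai_power = 100e6      # W of AI power to dissipate (per the video)

area_ratio = ai_power / iss_heat        # ~1429x the ISS radiator area
linear_scale = math.sqrt(area_ratio)    # ~37.8, i.e. the "38x38" figure

# Solar side, assuming ~1361 W/m^2 sunlight and ~30% panel efficiency
# (both assumed values):
panel_area = ai_power / (1361 * 0.30)   # ~245,000 m^2
side = math.sqrt(panel_area)            # ~495 m per side

print(round(linear_scale, 1), round(side))  # -> 37.8 495
```

The ~495 m side is a bit larger than the 300 m quoted above (which implies more optimistic panel assumptions), but both are indeed the same order of magnitude.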
@schemage22105 ай бұрын
There is an assumption that in order to get to AGI ever increasingly sized models must be used. That may not end up being the case, which makes the "energy" cost limitation, rather less limiting.
@GhostOnTheHalfShell5 ай бұрын
There's a fundamental problem with that concept: animals don't need that much information to run rings around AI. Man-children who think more data = more information, or even relevant information or framing, don't understand the basic problem. Animal brains do something fundamentally different from adjusting token vectors in hyper-large dimensions.
@kanekeylewer57045 ай бұрын
You can also run these models on physical architectures more similar to biology and therefore more efficient
@carlpanzram70815 ай бұрын
I'd think so too, but apparently it's not that easy. Anyway, we WILL eventually inch forward with more and more efficient architectures. Very obviously the amount of energy you need for intelligence and computing is actually quite small. I get 100 IQ for a bowl of noodles.
@GhostOnTheHalfShell5 ай бұрын
@@carlpanzram7081 The more relevant question is method. LLM aren’t a model of animal intelligence. It’s the wrong abstraction.
@schemage22105 ай бұрын
@@GhostOnTheHalfShell This is the point for sure. LLM's are surely a piece of the puzzle, but they aren't the entire solution.
@Thomas-gk425 ай бұрын
Thank you 😊
@MrHailstorm004 ай бұрын
I feel all the pro-AGI and anti-AGI arguments I've seen so far have only focused on extraneous factors: energy constraints, financial incentives, ulterior motives, etc. These are valid arguments, but I'm sure the same types of arguments have surfaced time and again whenever a new wave of technological revolution crashes on the beach of human history. They are valid, but not insightful. A much more elucidating argument would be: if we assume ideal external conditions (unlimited resources, incessant financial support, stable political environment, and so on), is the current framework for AI research THE CORRECT FORMULA for general intelligence? Several questions can be discussed:
1. Is scale a necessary factor in achieving AGI? If so, how is it measured? If we compare one activation unit in a neural network to one neuron in the human brain, then the largest LLMs have already surpassed the number of neurons in our brain by one or two orders of magnitude; why are we not seeing emergent intelligent behavior yet?
2. Are backpropagation and cross-entropy loss the final answer, or at least a faithful approximation, to how intelligence is built in human brains? All neural networks are trained to maximize proximity to some statistical distribution, but is that what intelligence is? Does our brain just work by correctly reflecting the randomness in our surroundings?
3. Is reinforcement learning the potential solution for "creating" intelligence in neural networks? It seems the most promising, as we have seen successful experiments modifying animal behaviors through positive/negative reinforcement and leading animals to behave "intelligently" by human standards. But can all intelligent behavioral traits be elicited by reward and reinforcement? Is intelligence purely behavioral? Even so, have we arrived at the correct reward function?
I can say for sure that none of the neural networks has achieved AGI based on the three points above.
But another useful point of discussion that ensues would be: does AGI need to be achieved before society feels its impact? And I can also say for sure the answer is a resounding NO. We are already feeling the impact of deep-learning and reinforcement-learning technology, and all it takes is for a majority of people to have the PERCEPTION of intelligence in their interaction with the technology. So the real question here is not whether AGI is around the corner, or whether we need AGI, but how we cope with a world where robots BEHAVE more and more like humans and it becomes more and more difficult for average people to tell the difference. I'm sure it doesn't take AGI to realize a dystopian society where humans control humans via generative AI technologies. How we can avoid that would be a much more interesting topic.
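On the question about backpropagation and cross-entropy loss above: the training signal itself is small enough to write out by hand. Softmax plus cross-entropy has the well-known property that its gradient with respect to the logits is simply (predicted probabilities minus one-hot target). A self-contained numeric sketch:

```python
import math

# Softmax + cross-entropy: the loss signal that most neural networks
# are trained on, and whose gradient w.r.t. the logits is (p - y).
def softmax(z):
    m = max(z)                       # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

logits = [2.0, 0.5, -1.0]
target = [1.0, 0.0, 0.0]             # one-hot: class 0 is correct
p = softmax(logits)
ce = -sum(t * math.log(q) for t, q in zip(target, p))
grad = [q - t for q, t in zip(p, target)]  # gradient w.r.t. logits
```

Whether nudging weights down this gradient is a faithful approximation of how brains build intelligence is exactly the open question the comment raises.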
@frgv40605 ай бұрын
Sounds like autonomous driving yet again, only escalated up by orders of magnitude. The "if you still can't solve the little problem, just look for a bigger problem" approach, hehe.
@taragnor5 ай бұрын
Yeah lol. How about this guy worry about figuring out how to get an AI to drive a car before he gets into his dream of massive robot swarms that can run an integrated autonomous mining/manufacturing/construction operation.
@CaridorcTergilti5 ай бұрын
@@taragnor autonomous driving is solved, it is not used because of politics
@frgv40605 ай бұрын
@@CaridorcTergilti Nope. Autonomous driving is solved as long as everything on a route stays "normal". Real full autonomous driving is not. So you can say it is a political reason, in the sense that many restrictions are political, like requiring guardrails on stairs and bridges and many other norms and restrictions that aren't technically necessary, unless you want to keep alive that clumsy minority that has the audacity to get bumped or slip while on that bridge. Edit: Imagine that swarm of robots, with the current driving capability of an AI (however they can realistically be trained), going mining in a natural environment. I can imagine it, and it is funny.
@CaridorcTergilti5 ай бұрын
@@frgv4060 imagine a truck that drives 16 hours a day because the driver can sleep on the highway and only drive the difficult parts. For normal cars, the car can just stop and be teleoperated in case of problems. "If there's a will there's a way"
@aaronperrin61085 ай бұрын
"Waymo's driverless cars were 6.7 times less likely than human drivers to be involved a crash resulting in an injury, or an 85 percent reduction over the human benchmark, and 2.3 times less likely to be in a police-reported crash, or a 57 percent reduction."
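The two quoted percentages are just the reciprocal rate reductions, which is easy to verify:

```python
# The quoted stats are internally consistent: "6.7x less likely" means
# crashes at 1/6.7 of the human rate, i.e. a (1 - 1/6.7) reduction;
# likewise for the 2.3x police-reported figure.
injury_reduction = 1 - 1 / 6.7   # ~0.851 -> "85 percent reduction"
police_reduction = 1 - 1 / 2.3   # ~0.565 -> "57 percent reduction"
print(round(injury_reduction * 100), round(police_reduction * 100))  # -> 85 57
```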
@alonamaloh5 ай бұрын
I was involved in computer chess in the late 90s and in computer go around 2010, and I've seen how quickly we move from "these things are cute but they won't beat the best humans at this task for many decades, if ever" to "these things are competitive with the best humans, but a combination of both is best" to "humans don't have a chance, and they don't really understand what's going on, compared to the machines". I think Leopold Aschenbrenner's prediction will more or less come true, even if he didn't get all the details right. In those other fields, it took several breakthroughs beyond "more compute" to get to total domination by the machines, but there are a lot of smart people working on this, so I'm sure there will be breakthroughs. Also, data is only a limit with the current imitation-based techniques. If someone figures out a good mechanism to use the current AIs to produce higher-quality data that can be used to train the next AIs (like AlphaZero did for those games), we'll have an explosion in performance without additional external data. I think this will happen.
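The AlphaZero-style loop described here (use the current policy to generate games, label positions with the outcome, fold the labels back into the policy) can be sketched on a toy game. Single-heap Nim stands in for chess/go; everything below is illustrative, not AlphaZero's actual algorithm.

```python
import random
from collections import defaultdict

# Toy self-play data generation: play games with the current policy,
# label every visited position with the final outcome, and update the
# policy's statistics. Game: single-heap Nim, take 1-3 stones per turn,
# whoever takes the last stone wins.
wins = defaultdict(lambda: defaultdict(int))
plays = defaultdict(lambda: defaultdict(int))

def choose(heap, explore=0.3):
    moves = [m for m in (1, 2, 3) if m <= heap]
    if random.random() < explore:
        return random.choice(moves)          # keep generating fresh data
    def rate(m):
        return wins[heap][m] / plays[heap][m] if plays[heap][m] else 0.5
    return max(moves, key=rate)              # exploit current knowledge

def self_play(start=10):
    heap, player, history = start, 0, []
    while heap > 0:
        m = choose(heap)
        history.append((heap, m, player))
        heap -= m
        player = 1 - player
    winner = 1 - player                      # whoever took the last stone
    for h, m, p in history:                  # label positions with outcome
        plays[h][m] += 1
        wins[h][m] += (p == winner)

random.seed(0)
for _ in range(20000):
    self_play()

# Nim theory: from 10 stones the winning move leaves a multiple of 4,
# i.e. take 2. The self-generated data should have discovered this.
best = max((1, 2, 3), key=lambda m: wins[10][m] / plays[10][m])
```

The key point from the comment survives the simplification: no external data is consumed, yet performance improves because the games are verifiable and can be rated.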
@Mr_Boifriend5 ай бұрын
what examples are there of “these things”, besides games? genuinely just curious what you are referencing
@alonamaloh5 ай бұрын
@@Mr_Boifriend I was talking just about chess and go. Of course it's not automatically the case that general intelligence will follow the same pattern, but I see a lot of parallels. For instance, when Deep Blue beat Kasparov, a lot of people thought that to get stronger play you would need an even larger and power-hungry computer, yet today Stockfish running on your cell phone would beat Deep Blue very consistently.
@RawrxDev5 ай бұрын
@@alonamaloh I feel one of the issues with that, however, is the "relative" simplicity of those games. To a computer, those are just puzzles; they're solvable. To go from moving pieces around a fixed board to developing improved algorithms is a big step. (Not trying to undermine chess and go; it's just that the intrinsic rules of the games are very pro-computer, so it always made sense to me that machines would beat humans.)
@alonamaloh5 ай бұрын
@@RawrxDev You could very well be right. But it's also possible that, once the correct representations have been discovered, reasoning is also just a puzzle. Before computers could play chess or go well, most people would have agreed that making a computer that plays those games well would be an amazing feat of AI; now we see it more as engineering. Every challenge seems mundane once we have figured it out. I just suspect that everything a human can do will soon be in that category.
@RawrxDev5 ай бұрын
@@alonamaloh That is a very real possibility. My personal hypothesis (I'm a CS major, take this with a grain of salt) is that in order to have true logic and reasoning, a prerequisite is understanding, and therefore awareness of the problem in the first place, which could perhaps require some baseline level of consciousness. It's possible it could just be some complex mathematical application, but again, my personal thought is that it's more complex than that.
@vvm_signed5 ай бұрын
Sometimes I’m wondering what would happen if we invested a fraction of this money into human intelligence
@generichuman_5 ай бұрын
ugh... so edgy...
@notaras19855 ай бұрын
@@generichuman_ wrong. What he suggested is extremely efficient
@elizabethporco82635 ай бұрын
D
@rutvikrana512 5 months ago
Nah, we've had that time and money for hundreds of years; nothing compares to the AI advancement we are achieving today. It will take time, but I am pretty sure AI is not a bubble like other fast-moving industries. I mean, even developers don't fully know how AI works, and AI doesn't stop learning. We can't predict it; AGI might come earlier than we imagine.
@drakey6617 5 months ago
@@rutvikrana512 What do you mean developers don't know how AI works? They certainly do. Everyone is just surprised that these simple ideas work so well.
@daniel-bc5sp 2 months ago
It's quite interesting how divided the predictions about this are within the AI field. Apparently people in Asia are more prone to believe that AGI is around the corner or within our lifetime than people in the West. I can't make my mind up on where I lean, as it's such a 50/50 divide. Japan is constructing a 'zetta-class' supercomputer expected to be completed before 2030, which I read could completely revolutionise the AI playing field and even bring us closer to AGI within our lifetimes.
@josdejongnl 4 months ago
I find these arguments a bit short-sighted. A future lack of data and energy, I think, assumes that there will not be any major innovation on these two fronts. I can imagine a future AI that will not need as much data to train on as current AI and that learns more like a human toddler does. And AI advocates are hopeful that the industry will be able to develop much more efficient hardware, potentially solving the energy problem. Of course we don't know whether these innovations will actually happen, but it's good to keep them in mind.
@puelocesar 5 months ago
I still don't get how LLM systems alone will achieve AGI, and all explanations for it until now were just "it will just happen, just wait and see"
@libertyafterdark6439 5 months ago
The idea is that contemporary architectures operate by building representations (abstractions inside the model that may or may not roughly correspond to concepts) from the dataset. What a model does now is leverage those representations to produce outputs, but importantly, it leverages the representations of a model of X scale trained on Y data. So far, there seems to be a direct correlation between models being able to do more things and those models getting "bigger". So with all of this in mind, a bigger model should be more "intelligent", if we are willing to reduce that to the number and permutations of representations it can utilize. That's why many see a future in which LLMs (or something very close to them) will lead to AGI.
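The "bigger tends to mean more capable" correlation described above is usually formalized as a scaling law. Here is a rough sketch in Python of a Chinchilla-style parametric loss curve; the constants are illustrative placeholders (roughly in the range reported by Hoffmann et al.), not fitted values:

```python
# Toy sketch of a Chinchilla-style scaling law: predicted training loss
# falls smoothly as parameter count and token count grow, but never
# below an irreducible floor E. Constants are illustrative, not fitted.
E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fit constants (assumed)
alpha, beta = 0.34, 0.28       # decay rates for params and data (assumed)

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Loss predicted for a model with n_params parameters trained on
    n_tokens tokens: both terms shrink as scale grows."""
    return E + A / n_params**alpha + B / n_tokens**beta

small = predicted_loss(1e9, 2e10)     # ~1B params, 20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, 1.4T tokens
assert large < small  # more scale -> lower predicted loss
```

Whether lower loss on next-token prediction keeps translating into broader capabilities, as the comment assumes, is exactly the point under debate.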
@Lolatyou332 5 months ago
It's not the only way current AI works; there are different algorithms on top of the LLM to increase accuracy. Otherwise how could the AI ever get better? You can't just keep feeding data to a model and make it smarter; there have to be algorithmic changes to increase its ability to scale, both in terms of handling different concepts and in serving consumer interactions at scale.
@SomeoneExchangeable 5 months ago
They won't. But somebody ought to remember the other 50 years of AI research...
@netional5154 5 months ago
My thoughts exactly. The current AI systems are 'just' super advanced association algorithms. But there is no emerging identity that really understands things. The current AI systems have just as much consciousness as a pocket calculator.
@notaras1985 5 months ago
@@netional5154 Only God creates conscious beings with souls.
@edwardduda4222 5 months ago
I work in the industry. No one has an idea of how to get to AGI, not even Yann LeCun. We're at a point where we're literally running out of data to train models on. ChatGPT-4 is just a collection of models with a voting mechanism, which is why it seems more intelligent.
@notaras1985 5 months ago
Only God creates souls. Humans cannot
@stedyedy23 5 months ago
@@notaras1985 keep your silly religion out of science debates
@Regic 5 months ago
Mixture of experts (the method GPT-4 is probably using) is not a voting mechanism; what you are thinking of is an ensemble. Mixture of experts is quite the opposite: it learns where to route the computation, while an ensemble computes the results of multiple models and takes an aggregate of them (majority voting, weighted average, etc.). Mixture of experts only uses a fraction of one trained network; an ensemble runs multiple models. This is a weirdly common misconception based on the name alone. Read the paper about it, maybe?
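A minimal Python sketch of the distinction this comment draws, using toy functions as stand-in "experts" (nothing here reflects GPT-4's actual implementation): an ensemble runs every model and aggregates, while a mixture of experts routes each input to a single expert and never runs the rest.

```python
# Toy "experts": simple functions standing in for trained sub-networks.
experts = [
    lambda x: x + 1,   # expert 0
    lambda x: x * 2,   # expert 1
    lambda x: x - 3,   # expert 2
]

def ensemble(x):
    # Ensemble: run ALL models, then aggregate (here: average).
    outputs = [e(x) for e in experts]
    return sum(outputs) / len(outputs)

def gate(x):
    # Toy router: picks one expert based on the input.
    # A real MoE learns these routing weights during training.
    return x % len(experts)

def mixture_of_experts(x):
    # MoE: only the routed expert runs; the others cost nothing.
    return experts[gate(x)](x)

print(ensemble(4))            # averages 5, 8 and 1 -> 4.666...
print(mixture_of_experts(4))  # gate(4) = 1, so only expert 1 runs -> 8
```

The compute difference is the whole point: the ensemble's cost grows with the number of models, while the MoE's per-input cost stays roughly that of one expert plus the router.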
@TheManinBlack9054 5 months ago
Why even? He's not THAT influential, and many of his takes have been proven wrong.
@ptonpc 5 months ago
@@notaras1985 😂 You are silly. Try to keep up with reality. PS. If you are pretending to be a god botherer, you might want to hide those videos on your channel.
@flwhitehorn 4 months ago
It's like the loudness you get out of a speaker. The system is self-limiting. There's a finite amount of energy you can push through any medium.
@jeffgriffith9692 5 months ago
You made two errors in the prediction quotes - both would put it at about now; Ray's prediction of 2029 in particular sounds spot on...
@123100ozzy 5 months ago
it does not. I cant overstate how far we are from actual itelligence.
@hardboiledaleks9012 5 months ago
@@123100ozzy You can't spell "can't" or "intelligence" properly so why should we listen to you about anything intelligence related?
@scotte4765 5 months ago
@@hardboiledaleks9012 You have to admit that the spelling does support the point.
@MrAlanCristhian 5 months ago
Every engineer on the planet knows that technology doesn't improve infinitely. Eventually it just stops. That will happen with AI too. And AI improvement is already stalling.
@ruzinus_ 5 months ago
@@123100ozzy Don't confuse intelligence with sapience.
@richard_loosemore 5 months ago
Funny coincidence. I’m an AGI researcher and I published a landmark chapter called “Why an Intelligence Explosion is Probable” in the book “Singularity Hypotheses” back in 2012. But that’s not the coincidence. One of my projects right now is to re-engineer my toaster, using as much compute power as possible, so the damn thing stops burning my toast. 😂 Oh, and P.S., Sabine is exactly right here: these idiotic predictions about the imminence of AGI are bonkers. They haven’t a hope in hell of getting to AGI with current systems.
@LiamNajor 5 months ago
SOME people have a clear head about this. Computing power alone isn't even CLOSE.
@fraenges 5 months ago
AGI aside - even with current systems we are already able to replace a lot of jobs. AI just has to do a task as well as the average worker, not as well as the best worker. On the way to AGI, the social changes, impact, and unrest from constant layoffs might be much greater than the impact of a superintelligence.
@jyjjy7 5 months ago
As a supposed expert, please explain what Leopold is getting wrong, why this tech won't scale, and what your definition of AGI is.
@reubenadams7054 4 months ago
You are overconfident, and so is Leopold Aschenbrenner.
@richard_loosemore 4 months ago
@@reubenadams7054 No, I do research in this field, and I have been doing so for over 20 years.
@evanlughfahy9778 5 months ago
Anyone notice the discrepancy between dates spoken and dates presented graphically? At least 3
@mygirldarby 5 months ago
Yes.
@hardboiledaleks9012 5 months ago
This is what you get when you make up your mind about a topic before researching it. Completely biased.
@mjaymo 5 months ago
Love your content. Thank you! Facts and insights with comedy. Brilliant.
@tobiaskpunkt3595 5 months ago
Regarding failed predictions, you should also acknowledge that in AI there were many predictions that were fulfilled years earlier than predicted.
@johndow1645 5 months ago
Also, many breakthrough technologies (planes, controlled fission) surprised experts who were on record saying that those breakthroughs were "decades away".
@tabletalk33 5 months ago
Examples?
@drachefly 5 months ago
@@tabletalk33 MATH benchmark, for one. It was made to be unreasonably difficult so that it would be able to track AI's progress over a long period of time. Latest AIs get over 90% on it after just a few years.
@Jo_Wick 4 months ago
@@drachefly Great example. Here's another: "I hope none of you gentlemen is so foolish as to think that aeroplanes will be usefully employed for reconnaissance from the air. There is only one way for a commander to get information by reconnaissance, and that is by the use of cavalry." - General Sir Douglas Haig, British Army. Sometimes people are resistant to change, but change comes whether we want it or not.
@sarcasticnews1195 4 months ago
"FLYING MACHINES WHICH DO NOT FLY" New York Times, December 8, 1903. (The Wright brothers flew literally nine days later.) "It might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanics in from one million to ten million years-provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in organic materials. No doubt the problem has attractions for those it interests, but to the ordinary man it would seem as if effort might be employed more profitably." This prediction was especially retarded considering that balloon flight had already existed since the 1700s, and engines since the 1800s. It doesn't take a mad genius to put those two concepts together.
@richardlbowles 5 months ago
Artificial Intelligence might be right around the corner, but Natural Stupidity is here with us right now.
@tabletalk33 5 months ago
Humans make poor, inconsistent decisions and are easily swayed.
@williambreedyk7861 5 months ago
No translator for all languages is yet in sight. The idea of "meaning" has completely disappeared under statistical correlation. We're going to hit the ceiling soon, with no real progress until someone gets it right.
@SEIKE 4 months ago
Your channel is the best thing about the internet right now ❤️
@siceastwood2714 5 months ago
07:58 I don't think the failed predictions of the past are applicable to this kind of AI. The main point of the transformer architecture is that it is fundamentally different from all computing and AI prior to it and actually works the way human intelligence does. All the past predictions were based on something that sounds similar but is not really comparable.
@nycbearff 5 months ago
No one knows yet how human intelligence works. Ask any neuroscientist. There are hundreds of different hypotheses about intelligence, but we're a long way from understanding the brain. We see the results, but don't know how the brain does it. The fact that you seem to think AI developers do understand human intelligence is an example of how these predictions go wrong - if your basic assumptions are so wrong, your predictions can only be very wrong too.
@deitachan7878 5 months ago
We do not understand enough about how human intelligence works to say this new kind of AI works the same way we do. At best, the AI very crudely approximates something like a neuron, but a lot seems to be missing. I think the biggest advances in AI could come from studying the inner workings of human neurons. They are extremely energy-efficient by comparison and do not require much training data to learn things. You can see something once and be able to identify it from different angles, lightings, etc. Give a million pictures to an AI, then feed it a color-inverted image of one of them, and it has no clue.
@siceastwood2714 5 months ago
@@deitachan7878 Of course there are probably a lot of differences between AI and human intelligence. "But the main difference between previous AI models and the Transformer architecture is that Transformer models can use attention to process information globally, while previous models work sequentially or locally." - ChatGPT. Traditional AI works sequentially and is just text completion, predicting word after word. It is basically pure logic, saying something only because something else came before it. Humans don't think in pure logic; that wouldn't make any sense. Instead we do something because of something AND in order for something to happen. We relate actions based on the past and the possible future while reflecting with experience. The transformer architecture now does something similar, modelling answers based on what came before a certain word and setting it in relation to the words that follow, while reflecting on these relations based on training data. It is now able to say something because of something and in order for something to happen, based on the data acquired. Tbh I have no clue about the technicalities, but I believe this is by far the biggest difference between logical computing and human intelligence. Everything else is more like technical detail, resource and power constraints.
@Velereonics 5 months ago
It's like the antimatter 747 guy or the hyperloop bros, who probably knew even at the conception of their ideas that they could not possibly succeed, but when a journalist asks how close we are, they say "may as well be tomorrow", because then they get money from idiots who think, you know, it's a long shot but maybe...
@libertyafterdark6439 5 months ago
This is completely undermining the fact that products do exist, and gains ARE being made. You can think it’s too slow, or that there’s, say, an issue with current architectures, but there’s a big difference between “not there yet” and “smoke and mirrors”
@hardboiledaleks9012 5 months ago
If you believe what you said relates to A.I, you are firmly in the "I have no idea what is going on" category.
@Velereonics 5 months ago
@@hardboiledaleks9012 You don't know what part of the video I am referring to, I guess, and that is not my problem.
@TheManinBlack9054 5 months ago
@@todd5857 Do you really think that AI researchers say all this for grants and money? Maybe they actually believe what they say and aren't being greedy or manipulative.
@Vastin 5 months ago
@@libertyafterdark6439 I'm of the opinion that these researchers are seriously overestimating their likely future progress AND I think it's moving too fast regardless. I don't really see any way that AI development does anything but further concentrate vast amounts of wealth and power into a very small class while disenfranchising the rest of humanity. After all, if you have a smart robot workforce, what are people actually *good for*?
@dangerdom904 5 months ago
We're running out of text data, not data. The amount of information in the world is essentially endless.
@2ndfloorsongs 5 months ago
Not sure about "endless", but I'd be willing to bet on "lots more".
@smellthel 5 months ago
There’s always synthetic data. Also, ChatGPT 4o gained a lot more understanding of the world because it was able to be trained on different types of data.
@outhoused 5 months ago
Yeah, but I guess there's much to be learned by associating different texts and reading between the lines. Maybe that one paragraph in some text document really complements another one that's seemingly unrelated, etc.
@marwin4348 5 months ago
@@2ndfloorsongs There is an effectively infinite amount of data in the universe.
@DingDingPanic 5 months ago
It needs to be high-quality data, and there is a severe lack of that…
@TheJackSparrow2525 5 months ago
Sabine - I love you! You crack me up because you see things in the big picture and make subtle jokes which are so funny to hear because you’re just right! Love your channel and your work. Regards, Jamie.
@SomeMorganSomewhere 5 months ago
"It's robots all the way down" *rolleyes*
@PeterPan-ev7dr 5 months ago
Artificial Stupidity is growing faster than Artificial Intelligence.
@gibbogle 5 months ago
Natural stupidity.
@williamkinkade2538 5 months ago
Only for Humans!
@PeterPan-ev7dr 5 months ago
@@williamkinkade2538 Humans infected the AI with their senseless and stupid data.
@Bobbel888 5 months ago
~ the idea of nasty children bears fruit, the brighter they are
@markthebldr6834 5 months ago
No, it's authentic stupidity.
@kitsura 5 months ago
Moravec predicted machines will reach human-level intelligence by 2040. But you said 2020, which has already passed.
@jamescheddar4896 5 months ago
human level at what? problem solving?
@PeterT-i1w 5 months ago
Human-level AI is like fusion power: it will always be 30 years away.
@jamescheddar4896 5 months ago
@@PeterT-i1w There's "data" in our genetic, evolutionary makeup that was compiled over a few billion years of exposure to the universe. I don't know if you can simulate enough generations to actually replicate it.
@hardboiledaleks9012 5 months ago
@@jamescheddar4896 lmao. Take 20 babies at birth, put them all in cubes for 18 years (maintain their basic needs like food and water obviously) then take them out. Compare these 20 babies. I think you would find they are literally 20 identical copies of each other. All clueless. Humans are learning algorithms, nothing more
@centripetal6157 5 months ago
@@jamescheddar4896 Yes, the ability to self-analyze, self-assess, and change the methods of analysis is "human-level intelligence" - something AI at the moment doesn't have.
@dasanjos 5 months ago
Great video - especially with the outtakes at the end.