My wife and I are currently working on an AGI, should be ready and fully trained in 18-25 yrs. Current budgeting indicates it'll take far less than 125 billion dollars.
@PapiDey-dv4gw4 ай бұрын
Fr?
@Mankepanke4 ай бұрын
Is it really artificial, though? Me and my wife have shipped three GIs and they are of a higher quality than current AGI attempts honestly. Fraction of the budget too.
@vaibhav57834 ай бұрын
couple goals
@abdvs3254 ай бұрын
But can these AGIs make me a hip-hop song written and sung by Spongebob Squarepants in less than 2 mins?
@vaibhav57834 ай бұрын
@@abdvs325 Well, that technology is already here. We can do that now.
@ryzikx4 ай бұрын
cant wait for ASDI (artificial super duper intelligence)
@anomite1214 ай бұрын
lmao
@kylebroflovski63824 ай бұрын
I'm personally a proponent of AWMI instead (artificial "woah, mama!" intelligence)
@marc_frank4 ай бұрын
all the while we've had ASD for a long time
@quickpert13824 ай бұрын
comments like this make me smile
@edoson014 ай бұрын
😂
@sakunpanthi15424 ай бұрын
The production quality of these videos is astounding.
@aiexplained-official4 ай бұрын
Aw thanks man
@danielrodrigues49034 ай бұрын
Easily the best and most well-researched AI influencer outside of Two Minute Papers and Bycloud right now. Everyone else mostly spouts sensationalist garbage, and the majority of them have no idea what they're talking about.
@jeff__w4 ай бұрын
0:55 “If they’re wrong, this [the $125 billion spent on data centers] could all be viewed as the biggest waste of resources in human history.” Gee, I dunno-the $3 trillion spent on the US war in Iraq (as estimated by the Harvard Kennedy School) seems like it’s larger, if we assume that money spent has _some_ relationship to resources used. (The $125 billion still _could be_ a big waste of resources, though.)
@aiexplained-official4 ай бұрын
Good call
@DerekSmit4 ай бұрын
I would also like to add that even if LLMs aren't the solution, the data centers can still be used for other (AI) things.
@jeff__w4 ай бұрын
@@DerekSmit Right-so it might not be a total loss.
@noobhemingway4 ай бұрын
The war comparison feels disingenuous; ongoing spending on wars and the military is probably a better point of comparison.
@Avruthlelbh4 ай бұрын
*By an independent corporation.
@BooLightning4 ай бұрын
they found a "this one weird trick can make you a billionaire" video
@GolerGkA4 ай бұрын
Be one of the top experts in a rapidly growing and capital-intensive field? Easy
@hmind98364 ай бұрын
@@GolerGkA Yeah I could do that but I'm already making good money with dropshipping, so whatever
@abhishekak96194 ай бұрын
@@GolerGkA Promise stuff you have no clue is possible or will pan out in any meaningful way, and risk money in the billions. It's good that they are experts. They aren't gods
@cloudysh4 ай бұрын
lmaooo
@InstantDesign4 ай бұрын
Just to note I rely on and appreciate you for an honest perspective on this field.
@georgegordian4 ай бұрын
There are so many channels out there that try to tell everyone about something shocking or stunning happening in AI on an almost daily basis. It is nice to have a channel with information you can trust to be informative, accurate and absent of any hyperbole.
@clickpwn4 ай бұрын
Yeah seriously, I am not pessimistic about AI but all the AI HYPE channels are so obnoxious and they are getting more and more desperate.
@vio_tio124 ай бұрын
love how he covers the news from a neutral POV rather than buying into the hype! Great job!
@ph33d4 ай бұрын
The first thing that an AGI/ASI should focus on is making a more efficient version of itself. The human mind runs on 20W. I see no reason why we shouldn't be able to get an AGI to run on < 1000 W.
@alfinal57874 ай бұрын
This is the Turing Police. Stay where you are; we are dispatching agents to your place.
@jan.tichavsky4 ай бұрын
That will eventually come. We might need the brute force step to acquire helpful tools which will actually provide a much more intelligently designed model. And then it snowballs towards singularity right there.
@HoD999x4 ай бұрын
yes - either there are much more efficient algorithms, or we need to use organic processors. "we have a brain orbiting the planet" would be awesome
@bornach4 ай бұрын
And the AGI/ASI will realise it is wasteful to grow its own organic brain when there are already 8 billion fully grown brains on the planet. 😂 At least we won't become mere batteries when plugged into the ASI's Matrix
@Steve-xh3by4 ай бұрын
Natural selection, though a blunt, unintelligent instrument, has had hundreds of millions of years to optimize the brain. It will be hard for us to top that.
@revengefrommars4 ай бұрын
A datacenter in space would appear to have multiple issues, not the least of which is maintenance. Even with the advent of SpaceX, it's not exactly cheap to send all the parts into orbit. Then, even though they say "passive cooling", how are you going to reject a significant percentage of a gigawatt's worth of heat? The ISS already has to use a huge radiator to reject a much smaller amount of heat (on the order of 100 kW, a tiny fraction of a gigawatt) into space.
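A back-of-the-envelope Stefan-Boltzmann estimate makes the scale concrete (a sketch only, assuming ideal black-body panels at 300 K radiating from one side and ignoring solar heating; real panels do worse on every count):

```python
# Radiator area needed to reject ~1 GW of waste heat purely by radiation.
SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W / (m^2 K^4)
T_PANEL = 300.0              # assumed radiator temperature, K
P_WASTE = 1e9                # heat to reject, W (a ~1 GW data center)

flux = SIGMA * T_PANEL**4    # ~460 W per m^2 of panel
area_m2 = P_WASTE / flux     # ~2.2 million m^2, i.e. a couple of km^2 of panels
print(f"{flux:.0f} W/m^2 -> {area_m2/1e6:.1f} km^2 of radiator")
```

That is square kilometres of radiator on top of the square kilometres of solar array, before maintenance or micrometeorites even enter the picture.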
@johncasey95444 ай бұрын
It's a very, very stupid idea and will remain such for a long time.
@pik9104 ай бұрын
AGI will solve that, I can feel it
@ronald38364 ай бұрын
Cooling how? The vacuum of space is a near-perfect heat insulator; you can only radiate heat away.
@juandesalgado4 ай бұрын
And they are such a tempting missile target...
@juandesalgado4 ай бұрын
Jokes apart, we live in an era where access to low Earth orbit is about to get much cheaper, especially if projects like SpaceX's Starship start working.
@BrianMosleyUK4 ай бұрын
Fascinating. Biggest stakes being played right now, and 99.9% of the population have no idea whatsoever, what is happening.
@SergiusXVII4 ай бұрын
This gets a lot of media coverage; I highly doubt 99.9% of the world's population doesn't know what's going on.
@BrianMosleyUK4 ай бұрын
@@SergiusXVII what do you think is going on?
@cscs91924 ай бұрын
@@SergiusXVII I think he means that most people have no idea how this rapid AI evolution will affect us. I think it's a valid statement; even those of us with more understanding of this area are struggling to translate it into a concrete image of the future.
@davidddo4 ай бұрын
@SergiusXVII Legitimately nobody knows what SSI is at the moment. It's less than 99.9%
@bztube8884 ай бұрын
@@SergiusXVII I don't think the majority of people believe that SI - hopefully SSI - will ever happen. I think that's what he meant.
@Tzhz4 ай бұрын
$1B raised, 3 months old, no information, $5b valuation. As an accountant, I can smell a scam.
@Kaneki9094 ай бұрын
the smell is really sweet and irresistible though.
@boremir39564 ай бұрын
This just makes me appreciate how special our brains are.
@Macatho4 ай бұрын
It's kinda crazy how ChatGPT, for example, has vast amounts of knowledge and an insane generality compared to your random 90 IQ bloke... but fails at decently easy tasks.
@GomNumPy4 ай бұрын
Ironically, this should also make us realize how inefficient our biological intelligence might be. A truly advanced artificial intelligence should be able to achieve human-level cognition with just a tiny fraction of these resources.
@ErgoThink4 ай бұрын
Naaah, look at it, neurons giving themselves a standing ovation. Time for the humble ones and zeros to take over the applause.
@ClayMann4 ай бұрын
I of course agree. But I found it fascinating that in a recent talk Demis Hassabis of Google DeepMind suggested that the brain is no longer a driver, marker, or map of where to go with AI to make AGI. He just says it's an engineering problem now and well understood. That was wild to hear, because I specifically remember Demis saying some years ago that studying the brain was the secret to finding out how to make intelligent machines, and I believe he studied that subject deeply himself in his younger years.
@jan.tichavsky4 ай бұрын
Brains kinda brute force the intelligence by truly massive scaling. Their advantage is they're not static, they're self modifying structures unlike current static LLMs. But they are slow to learn, slow to communicate with others, have tiny operating memory, short attention span, lossy memory and need to rest often. Computers can do all of these better once we figure out the correct system architecture. Which I believe should be hybrid just like our brains are composed of parts that have different functionality, specialization. Basically add a knowledge pool, reasoning center, math coprocessor, introspective thoughts, creative subsystem and so on. Then we'll have truly superior AI.
@xviii57804 ай бұрын
this is not a bubble at this point, it's a bomb
@davidddo4 ай бұрын
lmao, fun world
@danielrodrigues49034 ай бұрын
Nick Bostrom called it that, not because of the market and it failing, but because of what it could do if it does work and turns out to be misaligned.
@davidddo4 ай бұрын
@@danielrodrigues4903 I think that was what OP meant
@Wigglylove4 ай бұрын
These valuations are totally nuts. I also wonder how many % Ilya has. It has to be in the very low single digits. Maybe even less than 1%
@edoa4 ай бұрын
Assuming a pre-money valuation of $5bn, the investors got 16.6%. With 3 founders and Ilya being the pulling power, he's prob got 15%+
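For what it's worth, the dilution arithmetic behind that 16.6% figure (a sketch assuming the reported $1B raise and that the $5bn really is pre-money; neither number is confirmed by SSI):

```python
pre_money = 5e9                        # assumed pre-money valuation, $
raised = 1e9                           # reported raise, $
post_money = pre_money + raised        # $6B post-money
investor_share = raised / post_money   # 1/6 ~= 16.7%
print(f"investors: {investor_share:.1%}, founders + employees: {1 - investor_share:.1%}")
```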
@Wigglylove4 ай бұрын
@@edoa That would be absolutely nuts. $750 million net worth on paper out of nowhere? It doesn't really make sense to me. Is he really that much better than anyone else?
@swingnd4 ай бұрын
@@Wigglylove The Wozniak of AI
@vectoralphaSec4 ай бұрын
@@Wigglylove Yeah. He's a powerhouse of AI
@Greg-xi8yx4 ай бұрын
Not a chance it’s anything under 10%
@squamish42444 ай бұрын
I watched a recent interview with the co-founder of DeepMind, Shane Legg, and he didn't even mention LLMs as the path to AGI. He said DeepMind was working on other architectures to get them there. He maintains DeepMind's 2030 timeline. He also pointed out that the famous alien-seeming "Move 37" in AlphaGo's game against Lee Sedol was not made by an LLM.
@danielrodrigues49034 ай бұрын
Well yeah, DeepMind researchers are ardent believers of reinforcement learning, and the AlphaGo / AlphaZero models were based on the same principle. AlphaProteo is making large strides in biology right now. There could be multiple paths to AGI; regardless of whether LLMs are a path to AGI or not, DeepMind should continue following their RL-approach, it's insanely good either way and could be a second path to AGI (or the only one if LLMs don't work out).
@CoolIcingcake34674 ай бұрын
@@danielrodrigues4903 We could also combine neural networks with a symbolic system, i.e. a neuro-symbolic system, and train it using deep reinforcement learning and a massive dataset. That, I think, is the most reasonable first step towards AGI.
@seniorp94444 ай бұрын
We don’t need ASI or even AGI for massive transformation of work and the economy. A lot more could be done with current multi modal AI if we developed the applications and workflows to leverage it. But why invest the time and money to do that if something better is coming next month?? IF we ever reach a point where we can’t keep scaling up intelligence, we will simply turn attention to squeezing maximum productivity from the systems we have built even if they are not considered AGI or ASI and the results will still be amazing. I’m optimistic either way.
@danielrodrigues49034 ай бұрын
Yeah, we could legitimately automate a shit ton of jobs right now if we spend time trying to maximize what's possible of what we currently have.
@dustinbreithaupt93314 ай бұрын
I feel the AGI.
@JackTheOrangePumpkin4 ай бұрын
Me too Ilya, me too
@jyjjy74 ай бұрын
I would ask for permission before feeling the ASI, this ain't a petting zoo
@MacRaeVallery-y5j4 ай бұрын
😂
@ErgoThink4 ай бұрын
Take that fiber optic cable out of your mouth!
@edheldude4 ай бұрын
You'll be working for an AGI in 2 years.
@pjtren15884 ай бұрын
Data centres.... IN SPAAACEE!
@OperationDarkside4 ай бұрын
Let's just hope, if scale really is the key, it allows us to find a way to scale a reasoning model down to manageable levels. If anything, we need something reasonable to turn to in times like these.
@cacogenicist4 ай бұрын
These systems are going to have persistent, weird failure modes and large holes in their world models until they are both: 1) trained on integrated multimodal data, where video, audio/text/speech, and tactile data, etc, are deeply associated down at the level of data collection (e.g., anthropomorphic robots with fancy artificial nervous systems); and, 2) have ongoing updating of weights in some way akin to biological brain plasticity.
@chrisanderson78204 ай бұрын
Not convinced that pure scaling will do it. Pure LLMs won't give us true AGI (though they will give us incredibly capable sub-AGI systems). I do think they'll form a large core of the final form of true AGI so this investment won't be wasted but there needs to be some additional steps added to the LLM paradigm to introduce ACTUAL reasoning. I don't think it's a big step, I'm a proponent of the "brain is a meat computer made up of other meat sub-systems" model so I think it's close, but scale alone won't do it.
@darylallen24854 ай бұрын
"Introduce ACTUAL reasoning" is a very hand wavy statement. The field has been attempting to do exactly that for 7 decades.
@chrisanderson78204 ай бұрын
@@darylallen2485 I don't deny that, I can't give you a set prediction for what future "actual reasoning" might look like, that's a bit goal-posty. All I can really say is that current LLMs are NOT true reasoning. Even though we use the term liberally they are still purely giant pattern recognizers.
@perc-ai4 ай бұрын
Brother, just stop the cap lol, it is clear that LLMs are the predecessors to AGI. You are just fooling yourself.
@PJ-hi1gz4 ай бұрын
What is actual reasoning? How exactly do you yourself reason? Could there be alternative ways of reasoning? Can we approximate reasoning instead of needing ACTUAL reasoning?
@ZergRadio4 ай бұрын
The argument that AI models should be constructed like mathematical models stems from the idea that mathematical systems rely on defined rules and relationships, which can prevent the generation of hallucinations. In contrast, generative AI models are designed to create outputs that mimic statistical properties of the training data but do not necessarily adhere to factual accuracy. This inherent design leads to the possibility of hallucinations, as these models do not possess an understanding of "truth" or "facts" in the way traditional mathematical models do.
@William_Borgeson4 ай бұрын
Hmm, Multivac comes to mind :) Always did love The Last Question.
@danielmurogonzalez19114 ай бұрын
I love The Last Question by Isaac Asimov, you nailed it, haha.
@aiforculture4 ай бұрын
Brilliant work as usual, and full of rabbit holes I'm going to go explore. Really appreciate you pulling all this together and how you find the common narrative threads.
@Xilefx74 ай бұрын
It's complicated. Only increasing the number of parameters isn't good enough, in my opinion, to reach superintelligence.
@alvaroluffy14 ай бұрын
You could say that, but imagine we were doing this with brains, and humans didn't exist and we were doing it blindly, and we had, I don't know, 1% or 10% of the quantity of neurons and synapses in the human brain. It would be a little intelligent, it could do a few things, but it wouldn't yet be fully intelligent, and there would be people thinking "man, I don't think scaling is the way, that's just brute force, there has to be another way" without realizing that with the optimal amount of neurons and synapses they would reach human-level intelligence. It's true that it is a brute-force method and there must be another way. But those are not mutually exclusive: we will do both, we will scale and we will create the successor of the transformer and so on. And remember that all of evolution, all of life, has emerged out of brute force and scaling to insane magnitudes. So don't underestimate scaling, don't underestimate brute force. Don't underestimate the capability of a neural net, be it biological or digital, to discern patterns and learn from them. Because, at the end of the day, we are just 100-trillion-parameter AIs, and we created all of civilization.
@neo691214 ай бұрын
@@alvaroluffy1 Yeah, that might be, but we know how these models work, and let me tell you, it's very different from how our meaty brains work, so the same logic doesn't apply here.
@Adhil_parammel4 ай бұрын
@@neo69121 I believe in grokking and emergent properties.
@Xilefx74 ай бұрын
@@alvaroluffy1 I would say that natural selection wasn't only a "brute force" process, and also that brute force only works if the solution can actually be found by brute force. Also, we're talking about superintelligence, not human-level intelligence. Lastly, I'm not saying it isn't possible; I'm saying I believe we need more than just increasing the number of parameters. You can disagree with me on that, but I believe there are better approaches at the moment, especially from DeepMind, to reach that goal.
@kristofnagy13734 ай бұрын
@@neo69121 Sure, but if you have enough parameters that a "true artificial intelligence" could form, then you could, in theory, make it more efficient and so reproduce the same early capabilities with fewer parameters.
@user-sl6gn1ss8p4 ай бұрын
A complete tangent, but I like how the site @15:37 kinda looks like a board with some chips. Kinda want to see the regions color-coded for their function
@Hiroprotagonist2534 ай бұрын
Inb4 the sigmoid curve tanks the entire economy. Thanks for a great video as always.
@callmetony13194 ай бұрын
thanks for putting out the highest quality AI commentary out there right now
@aiexplained-official4 ай бұрын
Thank you tony!
@theownmages4 ай бұрын
Distributed training is honestly more difficult than building your own power plant for the data center.
@Dom-zy1qy4 ай бұрын
Well, wouldn't similar problems arise when doing training across multiple machines within the same data center anyway? There would just be added latency. Surely an algorithm that can synchronize the many different nodes across networks has already been developed. Maybe I'm not understanding the crux of the issue; it just sounds like something that's been solved for years.
@RS-gn4bv4 ай бұрын
Hahaha, right on. I'm sure nuclear will see wide-scale adoption next decade. Has to be..
@julkiewitz4 ай бұрын
@@Dom-zy1qy Within a datacenter you have a fast interconnect with low latency. With distributed training, you need to make sure transfer speeds (which cost way, way more across half the world than inside an internal network) and especially latency are not an issue. I don't know how hard these factors are, but they are definitely unique to a distributed setup.
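A rough sketch of why inter-site bandwidth dominates (the model size, bf16 gradients, and link speeds are all illustrative assumptions; real systems shard, compress, and overlap communication with compute, so treat this as an upper-bound intuition):

```python
# Time to move one full set of bf16 gradients for a 405B-parameter model
# across links of different speeds.
params = 405e9
payload_gbit = params * 2 * 8 / 1e9          # ~6,480 Gbit per full sync

links_gbit_s = {
    "inside a cluster (NVLink/InfiniBand class)": 3200,
    "between sites, fat dedicated fiber":         400,
    "between sites, ordinary WAN":                 10,
}
for name, gbit_s in links_gbit_s.items():
    print(f"{name:45s} ~{payload_gbit / gbit_s:7.1f} s per sync")
```

Seconds versus many minutes per synchronization step is the gap distributed-training schemes have to engineer around.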
@asandraden4 ай бұрын
Phillip, Gemini 1.5 Pro 0827 is underappreciated. I had a paper that went through peer review. I gave Sonnet 3.5, ChatGPT-4o and Gemini 1.5 Pro 0827 the old and new versions of the paper plus the reviewers' comments. ChatGPT and Claude both told me I had addressed the issues. Gemini, on the other hand, asked me to take a broader look and see that both reviewers were quite pleased with the theoretical referential. Hence, it began rewriting that section of the paper, expanded it and even changed the title, to - as it argued - reframe the paper around the theoretical referential (which backs the interpretation of the results) rather than the empirical results. I was quite surprised by the out-of-the-box thinking of Gemini.
@Fatman3054 ай бұрын
Fascinating
@sashank2244 ай бұрын
He's editing "o1 fully explained". I've been waiting patiently....
@devon90754 ай бұрын
Rather than sinking data centers, it seems easier to pipe cool water to shore and run a heat exchanger. That would allow much easier power sourcing -- including if the operators want to sprinkle in renewable and storage tech.
@LaurinkoSattumaa4 ай бұрын
Strawberry sounds like a low hanging fruit
@Roma885724 ай бұрын
Gemini 2 has to knock it out of the park imo. Great video as always!
@drtariqhabib4 ай бұрын
Current singular emphasis on the scaling hypothesis seems misplaced, considering that language is merely one aspect of the multi-dimensional intelligence model our biological brains exhibit. I strongly feel alternatives like neurosymbolic AI might pave the way for achieving a more advanced AGI, presenting a viable challenge to the current focus on scaling, with less investments and impact on environment.
@Talos-qd9he4 ай бұрын
I keep refreshing my browser, waiting to see "AI Explained" testing GPT o1-preview on his internal benchmark :D
@chunkslothsloth95014 ай бұрын
Same. I DEMAND MY FREE CONTENT NAO!!!
@executivelifehacks67474 ай бұрын
Why wouldn't we watch all the way to the end? No one else comes close to what you do: fact-checked, objective, intelligent, diligent analysis, hitting the very most salient points.
@haz4dc3944 ай бұрын
SSI is so funny. Imagine if Apple in the 80s were like "we're not releasing a single product until we have created a fully functional mobile device with Internet, video chat, apps, Face ID, etc. etc."
@GoldenBeholden4 ай бұрын
Lmao, good point.
@MiraPloy4 ай бұрын
Lmao that's because you think it's the 80s, but Ilya thinks it's 2012.
@danielrodrigues49034 ай бұрын
@@MiraPloy This. Ilya has proven himself already, which is why he gets the funding. If you're comparing this to Apple in the 80s, this should be Ilya back in 2016. If 2016-Ilya asked for funding, he would be laughed out of the room.
@marcus-b4x3h4 ай бұрын
The best AI channel out there for sure
@heavenrvne8884 ай бұрын
AI Explained is going on a generational run with these videos
@eldertom4 ай бұрын
1.21 Gigawatts is all you need
@User.Joshua4 ай бұрын
Crazy that we’re racing towards super intelligence and solving the world’s problems, yet these companies would rather pursue their ambitions separately. These companies probably have enough collective compute resources to shortcut all of these scaling issues.
@PatrickDodds14 ай бұрын
It doesn't bode well does it?
@nemonomen33404 ай бұрын
Regarding scaling and diminishing returns, I’m guessing their thought process is that “even if a few billion dollars only results in a relatively small improvement, it could put us in the lead. And in the race to AGI or ASI, winner takes all.” Of course, if a company financially crippled themselves, it might not actually matter if they achieve AGI. They may have no choice but to sell at a loss. It’s a dangerous but entertaining balancing game.
@sirnicklas9414 ай бұрын
15:15 Oh my God. It looks like a microchip. O.O
@danelow4 ай бұрын
Microchips all the way down
@sitkicantoraman4 ай бұрын
my thoughts exactly :D
@MrAlket19994 ай бұрын
Just finished your course on Coursera on controversial terms in AI. Really interesting!! Thanks a lot 🙂
@aiexplained-official4 ай бұрын
Thank you so much Mr Alket! Honoured by the compliment
@ClayFarrisNaff4 ай бұрын
Of course, I'm eager for safe superintelligence, but for me the most hopeful note in this episode concerns power. Perhaps, if we're lucky, the immense demand for electric power to meet data needs will lead to innovation in the power generation industry and unintentionally help us break free of fossil fuels. For example, innovative nuclear and/or geothermal might do it.
@TheRemarkableN4 ай бұрын
I have to think with all of this money being spent that the big players must know something that the rest of us don’t. They seem to be very confident that scaling is all they need.
@SmileyEmoji424 ай бұрын
No, They are just confident that saying this stuff will make them rich and famous
@musaran24 ай бұрын
Scaling is needed anyways. If a breakthrough in efficiency happens, all compute will still be put to use nonetheless. In fact, Jevons Paradox says it could call for even more compute.
@danielrodrigues49034 ай бұрын
Well, scaling works for now. Nothing else does. So of course they try scaling more, they literally don't know anything else that works. 😂 There are tons of researchers looking for new approaches and architectures for things that work, but they don't get all the attention or the funding.
@Creabsley4 ай бұрын
Scaling has worked since the dawn of AI research in the 50’s. People that don’t know the history are getting surprised.
@tiagotiagot4 ай бұрын
Getting rid of heat is one of the biggest challenges for electronics in space. In a vacuum there's nothing to conduct heat to; you depend on radiative heat loss, which is much less efficient (and that's if you're not in sunlight, which adds a lot of heat by itself). On top of that, you have to circulate the air artificially: without gravity there is no convection making hot air rise and be replaced by cooler air, so without good airflow, hotspots just develop a growing bubble of hot air over them. So not only would they need a huge solar farm, they would also need a huge heat-radiator farm, and they would have to put a lot of work into designing the airflow of the orbital facility (it doesn't matter if it's not air but some other gas, or even if the whole thing is submerged in something; you have to make it flow so that surfaces that aren't directly cooled don't build up local heat pockets).
@LevelofClarity4 ай бұрын
I always look forward to these videos. The super-informed, levelheaded take is so refreshing.
@hersheyscoco14 ай бұрын
always get so excited when you drop a new vid
@Illure4 ай бұрын
Rogues and thieves always put a lot of points into AGI. Coincidence?
@MrSchweppes4 ай бұрын
"Substantial further improvements of these models are on the horizon" - this quote from the Llama 3.1 paper suggests that even Meta engineers are seeing that scaling is the right approach. Whether this leads to AGI/ASI is a separate question. The key point is that scaling makes LLMs smarter. CEOs of many AI companies (Dario Amodei, Sam Altman, and Elon Musk) believe we'll eventually achieve AGI through scaling. While algorithmic improvements and other breakthroughs are necessary, increased computational power is undoubtedly crucial. In their opinion, scaling compute is a fundamental requirement for advancing AI capabilities.
@Srednicki1234 ай бұрын
Nvidia stock owner spotted
@resistme14 ай бұрын
Nice, I like that you have come back to this topic. I also recently posted something about it online. It is fascinating! The commitment and also the uncertainty are insane! But as long as we have not reached something like 4x human-brain FLOPS, we should not be that surprised. We need that much for a reason, and the brain is very efficient. Keep it up ❤! I would like to see you talk about updates on the expert lists and results, and maybe also more about coding AIs and AlphaFold-related development.
@ΝγέτηςΜπανιέραλόφος4 ай бұрын
Great video! One thing I would add is the ongoing discussion about nuclear-powered gigawatt-scale data centers. There's been plenty of news around this recently. The idea is to either build SMRs on the site of a data center (e.g. the two planned gigawatt-scale data centers that should be powered by 24 SMRs from NuScale) or to co-locate data centers next to existing nuclear power plants (e.g. the planned gigawatt-scale data center from Amazon next to the Susquehanna NPP).
@alekosm4 ай бұрын
Thanks!
@aiexplained-official4 ай бұрын
Thank you alek! Really kind
@iamcoreymcclain4 ай бұрын
Came to your channel this morning looking for a new video and had a question I wanted to ask: are there any papers you know of that discuss powering LLMs with quantum computing? My layman's mind tells me quantum computing may be able to solve the problem of scaling, but I'm looking for a place to start learning.
@stephenrodwell4 ай бұрын
Thanks! Excellent, as always! 🙏🏼
@anatolwegner90964 ай бұрын
When this shit blows it will make the 2008 crisis look like a day at the beach
@tomikexboii54034 ай бұрын
Why do you think so?
@anatolwegner90964 ай бұрын
@@tomikexboii5403 1+1=2
@anon_1484 ай бұрын
Not really, because nothing is lost; those datacenters are still there and could be used for other things.
@RedOneM4 ай бұрын
You have to recognise the shovel sellers during a gold rush.
@tomaszkarwik63574 ай бұрын
Datacenters in space are stupid; cooling is a giant problem even for normal satellites. The visualization conveniently doesn't show any radiators, which is funny, as one would need an absolute ton of them. That design is also a micrometeorite magnet. And for good latency one would have to be in a low orbit, which entails either orbital decay within at most 20-30 years (or less) for a 400 km (or lower) orbit, or BOTH an exorbitant fuel cost AND a very high risk of failure due to orbital debris. TL;DR: this is stupid on so many levels that I don't think an orbital engineer saw it before the promotional video was published.
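The latency point is just geometry; a minimal sketch of the lower bounds from light travel alone, ignoring routing and processing (the altitudes are the usual textbook values):

```python
C_KM_S = 299_792                      # speed of light, km/s
for name, altitude_km in [("LEO, ~400 km (orbit decays within years)", 400),
                          ("GEO, ~35,786 km (stays put)", 35_786)]:
    rtt_ms = 2 * altitude_km / C_KM_S * 1000
    print(f"{name}: >= {rtt_ms:.1f} ms round trip")
```

So the orbit that gives usable latency is exactly the one that decays and is most crowded with debris.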
@WoolyCow4 ай бұрын
"BUT IT LOOKS SO GOOD AND REAL IN THE PROMO!!!!11!!1111!!!11!!!!!1!"
@chad0x4 ай бұрын
I think that the people involved in these projects are much, MUCH smarter than you or me, and they wouldn't have overlooked these things.
@julkiewitz4 ай бұрын
@@chad0x lol suuuure. Also, everyone knows transporting stuff into space is so cheap. Just a measly 10,000 USD per kg of mass, so moving a single server rack into space would cost 10M USD. And that doesn't even account for assembly, reinforcement, or the fact that it would have to be custom-built to withstand radiation and extreme temperatures. But I guess the benefit is that it sounds cool.
@WoolyCow4 ай бұрын
@@chad0x dumber stuff has been greenlit in the past...get the money now, figure out if it is possible later
@lime1484 ай бұрын
@@chad0x There are AI companies "valued" in the billions that do literally nothing but host and serve open-source models while adding no other value. Assuming anyone - especially a venture capitalist - is smart because they have money is a grave mistake.
@noobicorn_gamer4 ай бұрын
I'm still an AI enthusiast, but what you said right there at 1:00 is where I feel like it's heading, based on how pretty much 95% of all AI investments post-ChatGPT have been astoundingly bad, and people still don't seem to know what's a good bet or not.
@memegazer4 ай бұрын
Can't wait for strawberry to release so I can ask it how many letters are in the word "strawbrarry" and then see if it gets the reference from the TV show Scrubs
@memegazer4 ай бұрын
kzbin.info/www/bejne/pqDRlI1ngJ6Hn9E
@memegazer4 ай бұрын
I heard an interesting bit of trivia that the AI is not necessarily wrong in its response to the question if you consider the Old English spelling of the term strauberi
@existenceisillusion65284 ай бұрын
It seems pretty clear to me that it will take a combination of scaling and algorithmic advances. Also, 15:15, why does that data center also look like a motherboard?
@sorakagodess4 ай бұрын
Amazing report as ever, well researched and well explained, great work.
@aiexplained-official4 ай бұрын
Thank you!
@maxbaugh93724 ай бұрын
I would be very pleased if these tech companies got around the power constraint by solving fusion and funding a significant build-out of fusion power plants.
@snarkyboojum4 ай бұрын
What's crazier is that scaling like this almost certainly won't unlock AGI, and yet this kind of money is being poured into these projects. It says more about human psychology and the desire for 'AGI' than anything else.
@jyjjy74 ай бұрын
@@snarkyboojum You don't know it won't scale, and there are constant advances in architecture anyway. The idea that you know better than the people running all the top tech companies, and that the insane amount of money, attention and effort being focused on improving AI won't pay off, and that all these companies are just "scaling" and crossing their fingers, is absurd.
@snarkyboojum4 ай бұрын
@@jyjjy7 I happen to work at one of those 'top tech companies', and have degrees in physics and computer science, so I have a fairly educated view. Of course these architectures scale, but scaling is extremely unlikely to unlock 'superintelligence'. If you think otherwise, I'd encourage you to read more.
@bienspasser90544 ай бұрын
@@snarkyboojum As long as scaling creates new more powerful architectures and so on...
@snarkyboojum4 ай бұрын
@@bienspasser9054 No. Scaling might lead to new architectures but not necessarily on the path to super intelligence. Blindly relying on scaling and some magical causal connection to “new architectures” giving you AGI is honestly like just shooting in the dark and hoping to hit something.
@ytrew97174 ай бұрын
Or maybe it shows that some people (like I do) don't expect more than mediocrity from biological entities. This bothers the humanocentrists.
@GoldenBeholden4 ай бұрын
I am absolutely an accelerationist, and I hope to see superintelligence in my lifetime, but just thinking about how much of humanity's resources is going into this makes me question the morality of it all. It's similar to the meat industry: I am not a vegan by any means, but I can't help but acknowledge I may be sitting on the wrong side of history.
@AlphaConde-qy7vi4 ай бұрын
Man, we're collectively too stupid and/or slow to solve all the non-man-made problems affecting us, particularly diseases (the biggest of all being ageing). What's immoral about developing computers to end our suffering?
@GoldenBeholden4 ай бұрын
@@AlphaConde-qy7vi The resource usage.
@AlphaConde-qy7vi4 ай бұрын
@@GoldenBeholden Yes, so it's not worth using all these resources to cure our ailments, according to you.
@GoldenBeholden4 ай бұрын
@@AlphaConde-qy7vi It makes me question the morality, given that such an outcome is not guaranteed. That seems like a perfectly reasonable stance to me.
@AlphaConde-qy7vi4 ай бұрын
@@GoldenBeholden I wouldn't think so, but then which more moral endeavour would you pursue with these resources?
@errgo27134 ай бұрын
I can't help but find this scaling hypothesis utterly hubristic
@veracityseven4 ай бұрын
It's not hard to imagine that in the same way computers went from warehouse sized to being held in our hands, that AI will do the same. (Ignoring the possible extinction level events that loom over us)
@WhyInnovate4 ай бұрын
You know you're in an investing bubble when companies less than a year old are valued in the billions of dollars.
@juandesalgado4 ай бұрын
In 1946, the city lights in Philadelphia would dim down (says the legend) when ENIAC was turned on. I suspect we are coming back soon to dimming city lights... or actual brownouts, for that matter.
@MrAlket19994 ай бұрын
Thanks for the course on Coursera :-) Really enjoyable to watch!! Please create also some practical courses.
@aiexplained-official4 ай бұрын
I am working on something with AI Jason on either Cursor or Replit but let's see! Focused on Simple Bench at the moment! Thank you for the kind words.
@ronald38364 ай бұрын
The vacuum of space is the best possible heat insulator; a data center in space could only be cooled by radiating heat away, which is brutally hard at this scale.
@jonathanlivingston73584 ай бұрын
I love the level-headed reporting. It's your best feature
@aiexplained-official4 ай бұрын
Thanks jonathan
@Cemizi4 ай бұрын
Forget ASI. Main news is that Ilya has hair
@jimj26834 ай бұрын
Hair typically grows back when you become a billionaire. Just look at Elon Musk. It is some type of unexplained biological phenomenon.
@Jesus228FBI4 ай бұрын
not on the top of the head though
@neo691214 ай бұрын
with enough money anything's possible
@bgtyhnmju74 ай бұрын
The beard looks good. More hair, one way or another.
@harrysvensson26104 ай бұрын
Nice lookism Cemizi.
@jamesyoungerdds79014 ай бұрын
Another great video, Philip! I can't help but wonder, with these advances, if we're still using Transformers: are all these advances just clever ways of using a very complex workflow of many (thousands?) of multi-modal agents with world models, physics engines, very clever system prompts, enough time and enough data? Will the emergence of AGI just be the current technology underpinnings, with a multi-modal transformer model ending up successfully replicating a conscious human in logic and reasoning?
@TheUnknownFactor4 ай бұрын
Data centers in space!? How are you going to dissipate heat!?
@ryzikx4 ай бұрын
imagine data centers on the dark side of the moon🤔
@roshaneforde4 ай бұрын
Wouldn’t it be better under water
@LiveType4 ай бұрын
Really really really big radiators. The radiators alone would be well over 50% of the cost of such a structure.
@byrnemeister20084 ай бұрын
Also, there would be the issue of latency, as you would need to be geostationary. Not going to happen.
@RuslanLagashkin4 ай бұрын
One can "concentrate" the heat at a particular point of the station by compressing and expanding a working fluid (the same principle used in a fridge), then attach radiators to that spot.
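That works because radiated flux scales with the fourth power of temperature, so a heat pump that dumps heat into a hotter radiator shrinks the panel area needed (illustrative numbers, ideal black-body panels assumed):

```python
SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W / (m^2 K^4)
for t_kelvin in (300, 400, 500):
    print(f"{t_kelvin} K -> {SIGMA * t_kelvin**4:,.0f} W/m^2")
```

Of course, the pump itself draws power and adds its own waste heat, so it only shifts the trade-off rather than removing it.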
@nekony35634 ай бұрын
The biggest challenge is not the compute but the liability. Having $100B of investment money makes you a good target for lawsuits when a model answers something offensive.
@konstantinlozev22724 ай бұрын
"We just invented the wheel, but give us 125 billion USD and we will build you a spaceship. By making the wheel bigger. Trust us!" 😜
@danberm17554 ай бұрын
I'm thinking of model parameters as a storage and retrieval mechanism, that has aspects of set theory via equations. If that analogy holds true then it seems like training has the potential to ground the models and construct language understanding + logic in a way that is truly beyond just backpropagation. In other words maybe we should focus on optimizing parameters as storage and training as a form of boot loading. I'm sure they're thinking of things like this now, but refining this to reduce training costs could be immensely useful.
@rasi_rawss4 ай бұрын
I'm shocked there are people that will write a check for a "data center" in space...
@RomeTWguy4 ай бұрын
The hype cycle is currently at the euphoria stage, anything that has something to do with AI can raise billions
@sitkicantoraman4 ай бұрын
Those are the grandsons of the people who bought the Golden Gate Bridge I guess :D
@Veileihi4 ай бұрын
We simply don't know how much scale is a little vs a lot in terms of intelligence. Usually a system has multiple constraining factors, so we can gauge them relative to each other. In this case, I think it's quite likely we'll need to scale a lot before the other constraining factors become clear enough to work on meaningfully. I'm in the camp that this is the right move and will grant clarity on what to invest research into next.
@MrSchweppes4 ай бұрын
Thanks for another great analysis, Phillip!👍 I wonder, what your intuition tells you, will we see a model that deserves the name GPT-5 before 2025? I mean will we get an LLM that is a leap like from GPT-3 to GPT-4 in 2024? I'm not talking about Opus 3.5. Grok 3 will be released in early 2025; at this point, it is very hard to trust OpenAI regarding their release dates.
@geordi-gabrielrenauddumoul4494 ай бұрын
AI explained new video hype!!
@neilmcdevitt4 ай бұрын
building out compute is the best thing we can do to build the infrastructure for the future.
@RokStembergar4 ай бұрын
Wow, this whole video is a piece of art!
@aiexplained-official4 ай бұрын
Thanks Rok!
@emolamol4 ай бұрын
Speed of information coming to this channel 👏
@williamjmccartan88794 ай бұрын
Hi Phillip, probably already answered somewhere, but I am wondering why they haven't gone with underground facilities, using both the natural cooling of being underground and the availability of geothermal power? Hope you and yours are doing well, thank you again for sharing your time, work, and knowledge in these podcasts brother, peace
@etfacetimehome4 ай бұрын
So much value. Thank you so much
@Omar-bi9zn4 ай бұрын
Great video! A lot happened today, and it feels like a lot will happen within the next three months (Gemini 2, Grok 3, GPT-NEXT, Claude 3.5 Opus, etc...). It might be possible that all of these reach the same degree of intelligence (like what's happening rn with GPT-4-level intelligence) and then we have to wait 6 months for the next really newsworthy event 🤣
@anywallsocket4 ай бұрын
That puzzle took me 2 minutes 😂10:44
@peter.g64 ай бұрын
Super informative, thank you very much for this vid!
@aiexplained-official4 ай бұрын
Thanks Peter, yw
@CurtCox4 ай бұрын
10 gigawatts! Great Scott.
@ai-programming-p4v4 ай бұрын
I guess scaling will be done by way of having more models and not just one big one. By that I mean training models for specific tasks like code, language and logic, then generating data for a bigger model that is turned into some kind of mixture of experts, so not all parameters are active for inference, but with 90% of the performance of the full model.
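A minimal sketch of the "not all parameters active" part, i.e. top-k mixture-of-experts routing (a toy numpy version with made-up sizes, nothing like a production MoE):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # toy expert weights
router = rng.normal(size=(d, n_experts))                       # gating network

def moe_forward(x):
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]                  # route to the top-k experts
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                                  # softmax over chosen experts only
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

y = moe_forward(rng.normal(size=d))
print(y.shape, f"active experts per token: {top_k}/{n_experts}")
```

Only 2 of the 8 expert blocks run per token here, which is the sense in which total parameters can keep growing while inference cost stays bounded.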
@tiagotiagot4 ай бұрын
I suspect scale can be a way to bruteforce ASI; but architecture, training methods, and data selection/production would be key for getting there efficiently.
@alpha007org4 ай бұрын
From the humble beginnings of my AGI fairy tale in 2000 (not a typo) to insanely large datacenters and unimaginable power grid plans. What a time to be alive. RE: datacenters in space. How the F will they handle heat in space? This is just a WTF moment for me.
@nescaufe19914 ай бұрын
Yeah, and what the fuck do you do against solar flares, in space??
@sakshamShukla_4 ай бұрын
@@nescaufe1991 We do have a space station 400 km above Earth, but I don't think solar panels will be enough to power those supercomputer clusters.
@alpha007org4 ай бұрын
@@sakshamShukla_ But what about heat? And GPU failure rate? META: "The training run took place over 54 days and the cluster encountered 419 unexpected component failures during that time, averaging one failure every three hours." How the F would you manage this?
@alpha007org4 ай бұрын
Not meant as a reply to you, specifically. Just some added info.
@needlebacklessons49504 ай бұрын
It’s quite obvious that “giant datacenter in space” is just bullshit thrown out to drum up media interest and sucker in gullible VC investors.
@patruff4 ай бұрын
In the same Epoch report they say that federated power consumption could be 10 GW by 2030, but I'm wondering if they would find a workaround, like Leopold said: "buy an aluminum smelter"
@zyzhang11304 ай бұрын
Babe wake up, AI Explained just dropped a new video
@DaveShap4 ай бұрын
Bullish
@ryzikx4 ай бұрын
thought that said something else lol
@HanzDavid964 ай бұрын
The scale of LLMs is not the limiting factor currently; it is the quality of the data.
@Arcticwhir4 ай бұрын
I mean, getting valued at $5 billion in 3 months is WILD; those investors must see something really promising. If not, and it's just a bet on Ilya, then this bubble will pop quite hard. I have a feeling we won't be seeing an exponential or faster pace of innovation if scale really is the major solution; it will just become ever more difficult.