I feel AGI will be invented in the year of the Linux desktop.
@andybrice2711 9 months ago
In order to power AGI, we need fusion reactors. And in order to design fusion reactors, we need AGI.
@attashemk8985 9 months ago
@@andybrice2711 and to control the fusion reactor we will use the Linux desktop
@Brahvim 9 months ago
@@attashemk8985 I hope that runs Debian or something ROCK SOLID. Nobody wants another `xz-utils` incident.
@martinzderadicka8280 9 months ago
@@andybrice2711 Nah, just use all resources for power plants and let the population plummet.
@drayg0n806 9 months ago
The video is good. Just one question: where is Ilya?
@Dina_tankar_mina_ord 9 months ago
They put a cap on him for sure. Since October last year, something happened with their approach to AI that caused panic.
@bornach 9 months ago
Free Ilya!
@ryzikx 9 months ago
on the back of a milk carton
@ultrasound1459 9 months ago
He is being held hostage by Sam until he creates AGI.
@poshsagar 9 months ago
Ilya is so dead
@kevbuh 9 months ago
More paper reviews, please.
@guillaumevermeillesanchezm2427 9 months ago
Is it Monday AGAIN????
@clray123 9 months ago
The "AI market crash" is going to look pretty funny.
@strafidamo9703 9 months ago
I love ML News
@hblomqvist 9 months ago
OpenAI's definition of AGI is different from academia's. In other words, OpenAI's "AGI" is a marketing term and nothing else.
@yurcchello 9 months ago
The "Open" in "OpenAI" is also just a marketing term.
@lievenvv 9 months ago
I don't think academia has a consensus on what AGI is, or how to measure it.
@ItIsJan 9 months ago
I don't think academia even has a consensus on what "intelligence" is in the first place.
@andyt1313 9 months ago
Love your no-nonsense updates.
@MarcAyouni 9 months ago
😂 That last sentence is a killer!
@andybrice2711 9 months ago
I don't get this obsession with AGI being the most important goal. Models which excel in specific tasks could be more revolutionary than models which replicate human-like intelligence.
@mitchdg5303 9 months ago
If only we could automate the humans who make narrow AIs.
@rumfordc 9 months ago
Humans are *_desperate_* to relieve themselves of responsibility. They want to repeat what they're told but don't want to be blamed for being wrong. They know they can't claim machines are responsible for their own decisions, because machines aren't alive. AGI is these people's mental ticket back into Fantasy Land. They think they'll finally be able to have their cake and eat it too.
@stuartspence9921 9 months ago
AGI creates the models you've described. All of them.
@rumfordc 9 months ago
Humans are obsessed with relieving themselves of responsibility. They want to repeat what they're told, but not be blamed when it's wrong. They know they can't blame machines for decisions in their current state. AGI is their ticket back into Fantasy Land.
@Ivan.Wright 9 months ago
@@stuartspence9921 Look up the "Wenger 16999". Sure, it has all the tools, but it's just not a practical tool to actually use.
@sofia.eris.bauhaus 9 months ago
Hell yeah, Monday comes early this week 😎.
@naromsky 9 months ago
Artificial yottabyte learning intelligence (AYLI)
@eoghanf 9 months ago
Very good. Very good. 😀
@Brahvim 9 months ago
Nice to see how it's an anagram of Ilya Sutskever's name.
@unimposings 9 months ago
Have you checked Quibic already?
@GeneralKenobi69420 9 months ago
Low end of the IQ curve: "Predicting the next word is all you need"
Middle of the curve: "noooo, it's just a fancy autocomplete, that's not how the brain works, AGI is impossible, a lot more research is needed"
High end: "Predicting the next word is all you need"
@clray123 9 months ago
The catch is that the next-word prediction needs to be correct based on information which is (1) not in the training data and (2) not in the prompt. GOOD LUCK, little AI!
@travian821 9 months ago
@@clray123 Pretty sure an AI that searches the internet for answers before composing a complete answer is already out there; that's just the thing we silly humans do for most of our tasks, and then some.
@clray123 9 months ago
@@travian821 Not really, because it does not scale, and the main problem of the AI is that it has to generate the next token in (more or less) constant time. There is no AI which has a "pondering loop" inside. What there is, is AI that generates "function calls", or software which calls the AI in a loop multiple times (possibly feeding it data from outside) to generate an improved result. But if we need such external software, with the logic hard-coded in it, what does that tell you about the true capability of the "AI"? Language models are not Turing-complete, meaning that they cannot even execute common "easy" algorithms with a sensible amount of resources. So a better way to think about it is that we currently have "efficient cloning of text/image/audio, driven by a prompt", something like a fuzzy database, into which you send queries using natural language and receive fast responses. But the fast responses are only as good as what's already inside the db; and absolutely no "reasoning" (as in iterative planning) is involved.
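A minimal sketch of the kind of external loop described above, assuming a purely hypothetical `generate(prompt)` call in place of whatever model API is actually used; the point is that the iteration and the stopping rule live in ordinary software outside the model:

```python
# Hypothetical stand-in for a single model call (roughly constant cost per token).
# A real setup would call an LLM API here; this stub only echoes a draft.
def generate(prompt: str) -> str:
    return f"draft answer for: {prompt[:40]}..."

def refine_with_outer_loop(question: str, max_rounds: int = 3) -> str:
    """Call the model repeatedly, feeding each draft back in.

    The looping, the critique prompt, and the number of rounds are all
    hard-coded in this outer program, not decided by the model itself --
    which is the point the comment above is making.
    """
    answer = generate(question)
    for _ in range(max_rounds):
        critique_prompt = (
            f"Question: {question}\n"
            f"Current answer: {answer}\n"
            "Improve this answer."
        )
        answer = generate(critique_prompt)
    return answer

if __name__ == "__main__":
    print(refine_with_outer_loop("Is next-word prediction all you need?"))
```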
@drdca8263 9 months ago
@@clray123 I think “correct based on information which [...]” and “information not in [...]” could both use a bit of clarification, though in somewhat different ways.
For the second thing, it seems a little unclear what it means to say some information is or isn't “in” some dataset. If there's a random variable which is sampled from some distribution, independently of how the dataset was sampled from the process that produced it, then it seems like the information “what is the value of that random variable” is clearly not “in the dataset”. To clarify, I mean this in the sense that, if the distribution the variable is sampled from is deterministic, then the amount of information that the value of the variable constitutes is zero. This is a rather restrictive condition though, I think, and I don't think it is exactly what you mean?
For the first thing: do you mean that the criteria for the answer being correct depend on this other information?
@nebiyuyouhannes6047 9 months ago
Hello from Ethiopia!
@clray123 9 months ago
how many bazillion quadrillion flops do you have down there
@JF-vt4ve 9 months ago
Fantastic. So unusual. So much ML news 😊
@SBalajii 9 months ago
Yannic spitting truth around 6:00
@sydneyfong 9 months ago
... wait, this is old news from last month! It's only been 3 weeks, but I swear it feels like last year...
@pensiveintrovert4318 9 months ago
It has been named Deep Thought.
@zyzzyva303 9 months ago
I N T E L L I G E N C E
@xviii5780 9 months ago
I pray that the AI that manages to make society collapse will be named "Deep Thought".
@clray123 9 months ago
I believe the bigger problem with Microsoft is not their dirty paws on AI (i.e. today's universal photocopier / data faker), but their rising dominance in (enterprise) identity management. Imagine being a company which de facto has access to any data of any* other company on the globe because you can impersonate any employee in there (*except for companies that don't/are forbidden to use Microsoft's IdM, e.g. in China). This is what Microsoft is increasingly capable of today, and other companies, including IT companies that service non-IT companies providing critical infrastructure, are falling for it left and right and outsourcing their identity management. So instead of employee X proving their identity to your company server Y, it is Microsoft's server Z claiming that it has verified employee X's identity. This should be a huge issue in security, but nobody seems to care.
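A toy sketch of the trust shift described above, with entirely made-up names and a shared-secret HMAC standing in for the asymmetric signatures that real federation protocols (SAML, OIDC) use; the point is only that server Y never checks the employee's credentials itself, it merely checks that the assertion came from the identity provider it has chosen to trust:

```python
import hashlib
import hmac
import json

# Hypothetical key trusted by both the identity provider ("server Z") and the
# relying company server ("server Y"). Real federation uses the IdP's public
# key instead of a shared secret; this only illustrates the trust model.
IDP_SIGNING_KEY = b"key-established-between-idp-and-relying-party"

def idp_issue_assertion(employee: str) -> dict:
    """Server Z claims it has verified `employee` and signs that claim."""
    claim = json.dumps({"sub": employee, "verified_by": "idp.example"})
    sig = hmac.new(IDP_SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def relying_party_accepts(assertion: dict) -> bool:
    """Server Y never sees the employee's password or other credentials.

    It only verifies that the assertion was signed by the IdP it trusts --
    so whoever controls the IdP can mint a valid assertion for anyone.
    """
    expected = hmac.new(IDP_SIGNING_KEY, assertion["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])

if __name__ == "__main__":
    token = idp_issue_assertion("employee_x")
    print(relying_party_accepts(token))  # True: trust is fully delegated to the IdP
```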
@hblomqvist 9 months ago
Stargate will be the largest (in number of parameters), based on combinations of ANIs, and it will be marketed as AGI. MS is betting the bank on winning the AI race. And as always when it comes to MS as a company, none of the IP will be created within the walls of MS. So they are trying to win the race with others' work (90% sweat and 10% perspiration) just by throwing money at it.
@clray123 9 months ago
As with Windows, they are going to fail and fall flat on their ass, but it's kinda unsettling given that they and their pals now make up a sizeable chunk of the S&P 500. Meaning that unsuspecting grandmas and the like, with their retirement savings accounts, will soon have to bleed for the unlimited corporate greed and power hunger of those few people.
@LeetOneGames 9 months ago
I guess a text-to-audiovideo model will win the (latent) space race.
@andreasmoyseos5980 9 months ago
Why do we assume that Microsoft is naive enough to invest so much money in OpenAI only for OpenAI to turn around and declare "AGI!"?
@СоюзниксОкинавы 9 months ago
Yes.
@tomoki-v6o 9 months ago
Sora is good for music clips.
@EdNarculus 9 months ago
Well, all my models take like forty zillion bajillion, so there.
@ChairmanHehe 9 months ago
30 BILLION QUADRILLION
@bornach 9 months ago
They must have a pool of sharks with "LASERS"
@Hexanitrobenzene 9 months ago
Hm, 30 billion quadrillion is 30 × 10^9 × 10^15, so that's 3*10^25.
@vassil41 9 months ago
a $100B data center IS AGI
@derjansan9564 9 months ago
Long Cray stocks!
@erickmarin6147 9 months ago
Odd to be verified like that.
@clray123 9 months ago
Maybe Elon had nothing to do and thought to himself, "yep, he looks like that guy from YouTube", and pressed a button.
@zerotwo7319 9 months ago
Can we expect one quadrillion likes?
@jameshughes3014 9 months ago
I think Emad is right: there's no way Stability can compete with the other for-profit companies unless they go fully free and open source. That would give companies that rely on art and music made by starving artists (most digital artists) a reason to continue propping them up with funding. Game companies, Hollywood... they all benefit from and very much need cheap labor making their digital assets. It's why so many of those big companies bankroll Blender. Free tools mean cheaper labor and more people who can learn to master those tools. But if a company has to pay to use a program, they'll pay for the biggest available company's product, the one with the best tech support and legal team. That's gonna be Microsoft. I think this could well be the end of Stability AI.
@Idiomatick 9 months ago
They said billion quadrillion because they didn't want people to giggle at "sextillion".
@zyzzyva303 9 months ago
It will be worth it if they use it to build an actual Stargate. Otherwise, I'm not convinced.
@KolTregaskes 9 months ago
Yannic, *this* is Monday. ;-p
@Wobbothe3rd 9 months ago
Tokens don't burr, engines do.
@valdisgerasymiak1403 9 months ago
What did Ilya see? What the f*k did Ilya see???
@BooleanDisorder 9 months ago
So that computer would be what, 500k times better than the one that trained GPT-3? lol
@daan3298 9 months ago
Who is Ilya Galt? Eh... John Sutskever? Eh... Ilya Sutskever? Yeah, where is Ilya Sutskever???
@alan2here 9 months ago
Septillions of (16-bit?) flops? :/ 🤔 Probably not, then.
@kaaditya1 9 months ago
30 billion quadrillion? WTF is even that?
@otrqffaimajg 9 months ago
DOES THIS GUY WANT ME TO WEAR SUNGLASSES TO WATCH HIS VIDEO OR WHAT!!!
@makhalid1999 9 months ago
Only 2 views in 12 seconds? Yannic's reign is over 😞
@erickmarin6147 9 months ago
Was expecting to model views(second)=second³ {0
@mriz 9 months ago
How much can you extrapolate to 12 hours with that prior?