Can synthetic data unlock AI recursive self-improvement? - Mark Zuckerberg

59,967 views

Dwarkesh Patel

1 day ago

Comments: 94
@FutureBusinessTech 6 months ago
It seems like this interviewer is asking better questions than some of the biggest tech podcasters out there. I'll have to watch more.
@bobbyc1120 6 months ago
Dwarkesh Patel is great. I can't watch Lex Fridman anymore.
@dumdum407 6 months ago
@@bobbyc1120 What is wrong with Lex Fridman?
@world-top0 6 months ago
@@dumdum407 Lex Fridman lies about the MIT scientist thing and advertises himself as an intellectual, which is far from his actual accomplishments.
@ArtOfTheProblem 6 months ago
@@dumdum407 When it comes to AI he actually has a very surface-level understanding; let me know if you want exact examples. But good on him for leveraging one guest into the next all the way up the chain - can't hate on that hustle.
@Draknos90 6 months ago
@@world-top0 Meanwhile we are watching some random guy interview Zuck. OK, mate.
@HAZMOLZ 6 months ago
Much more interesting listening to Zuck talk about tech and not marketing bs.
@Mt15621 6 months ago
Right, it's amazing to hear him talk about tech. He's very, very intelligent, and I'd much rather hear him talk about tech, since he understands what's going on in the market today and the future of AI. He's obviously good at marketing too, but that's not what got him here. At the root he is a programmer!
@achimmeyer9889 2 months ago
Sounds a bit like marketing to me. The empirical evidence doesn't map exactly to what they make it out to be, IMHO.
@dataman4503 6 months ago
I am super sure that 99% of big tech CEOs can't discuss this in this much detail. Only very, very few CEOs know the tech.
@timseguine2 6 months ago
Using synthetic data can be interpreted as a form of model smoothing. Based on a shallow analysis of the currently available information, it might be the case that it helps stabilize the gradients during training.
@harshnigam3385 3 months ago
Very nicely put.
@rockyraccoon 6 months ago
That's the worry, but from what I understand OpenAI did some tests last year and determined that simulated data can take us a lot further than we thought.
@ThreeChe 6 months ago
I'm glad they tuned up Zuck's algo. He is definitely climbing out of the Uncanny Valley.
@PeterResponsible 6 months ago
😂
@Sirbikingviking 6 months ago
Wouldn't training on too much synthetic data produce an effect similar to overfitting the dataset?
@alexanderbrown-dg3sy 6 months ago
If you don't mix with real data, you get model collapse. Some new research explores these real-to-synthetic ratios.
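As an illustration of the real-to-synthetic mixing idea above, here is a minimal Python sketch. The 25% synthetic fraction and the toy datasets are invented for illustration; they are not from the video or from any particular paper.

```python
import random

# Toy stand-ins for human-written and model-generated examples (illustrative only).
real_data = [f"real example {i}" for i in range(1000)]
synthetic_data = [f"synthetic example {i}" for i in range(1000)]

SYNTHETIC_FRACTION = 0.25  # arbitrary ratio; the right value is an open research question

def sample_batch(batch_size=32):
    """Draw a mixed batch so the model never trains on synthetic data alone."""
    return [
        random.choice(synthetic_data) if random.random() < SYNTHETIC_FRACTION
        else random.choice(real_data)
        for _ in range(batch_size)
    ]

print(sample_batch(8))
```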
@bobbyc1120 6 months ago
I don't think so. It would introduce a bias toward thinking like previous language models, but it wouldn't be overfitting. Overfitting is when you fit too closely to a "quirky" dataset, drawing conclusions from tiny amounts of data. In this context, overfitting would be if we went over the same dataset dozens of times (dozens of epochs), until the quality of the model's outputs started to decline.
@GlennGaasland 6 months ago
@@alexanderbrown-dg3sy But won't this depend on what kind of synthetic data it is? Isn't there an infinite number of possible kinds of synthetic data, depending on the process used to create it?
@alexanderbrown-dg3sy 6 months ago
@@GlennGaasland I can't remember the name of the paper off the top of my head, but the research showed that synthetic data doesn't retain the long tail of the data's entropy. You said the type of synthetic data matters, yes? The only way I see pre-training models on entirely synthetic datasets is once we have models that can produce outputs beyond human capability, because then they could produce higher-quality data than humans. I always thought, and still believe, that a hybrid is the optimal solution for now. Conversely, we know that with GPT-4-level models, and now Llama-3-70B models 😂, we can exploit extended test-time compute to emulate models with much larger capacity and produce expert-level output, so you could use that in a generation scheme. But you're potentially talking about hundreds of thousands or millions of outputs for every piece of potential data. If we had more efficient models (FFF networks, structured sparsity, etc.) and more efficient inference, this could possibly work at scale.
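A rough sketch of the "spend extra test-time compute to make better training data" scheme described above: sample many candidates, keep only the ones that pass a quality filter, and treat the survivors as synthetic training pairs. The generate and quality_score functions are hypothetical placeholders, not a real model API.

```python
import random

def generate(prompt: str) -> str:
    """Placeholder for a model call; returns a fake candidate answer."""
    return f"candidate answer {random.randint(0, 9999)} to: {prompt}"

def quality_score(text: str) -> float:
    """Placeholder verifier; a real system might use unit tests, a stronger
    judge model, or human review to score each candidate."""
    return random.random()

def best_of_n(prompt: str, n: int = 64, threshold: float = 0.9) -> list[str]:
    """Sample n candidates and keep only those above a quality threshold."""
    candidates = [generate(prompt) for _ in range(n)]
    return [c for c in candidates if quality_score(c) >= threshold]

# Surviving outputs become synthetic (prompt, answer) training pairs.
synthetic_pairs = [(p, a) for p in ["prompt A", "prompt B"] for a in best_of_n(p)]
print(len(synthetic_pairs), "synthetic examples survived filtering")
```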
@Leto2ndAtreides 6 months ago
The synthetic data will still be useful data. For example, some writers complain about their books being fed to these models. But you can instead feed in 50 intelligent analyses of that book, without ever feeding in the actual book. You could sharpen learning on topics before feeding it in. You can restructure the data so that questions better mirror more intelligent responses. The basic data on the internet is not organized for what LLMs need to do... So, synthetic all the way!
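One way to picture that restructuring idea in code: rather than training on the raw source text, prompt a model for derived artifacts (summaries, claim lists, Q&A pairs) and train on those instead. This is only a sketch; call_llm is a hypothetical stand-in for whatever model API is actually used.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"[model output for: {prompt[:40]}...]"

ANALYSIS_PROMPTS = [
    "Summarize the main argument of the following text.",
    "List the key claims and the evidence given for each.",
    "Write three question-answer pairs a student might ask about this text.",
]

def derive_training_examples(source_text: str) -> list[str]:
    """Produce restructured, derived data instead of the raw source."""
    return [call_llm(f"{p}\n\n{source_text}") for p in ANALYSIS_PROMPTS]

print(derive_training_examples("(source text would go here)"))
```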
@eugenes9751 6 months ago
1:02 "Inference generating synthetic data to then go feed into that model"... The simplest and most efficient way of doing this is through simulation. Therefore, we're already in one of these "inference generating" simulations.
@thimo3699 6 months ago
oh no
@mahedirakibsonny 6 months ago
“Artificial Intelligence” eating “Synthetic Data”…. Interesting times.
@rodrigobarraza 6 months ago
If you've trained models, you can tell this doesn't really work as well as you think. In fact, it ends up introducing a lot of hallucinations into models when they're fed data that was generated by themselves or by another AI. We're not anywhere close to this really working, or at least working in a way where we as a species won't be able to tell the difference.
@Nick-yy3oy 6 months ago
We don't have enough intelligence, but also our data is a privacy concern.
@GodbornNoven 6 months ago
@@rodrigobarraza That's only if the data fed to the AI is not filtered. High-quality data is high-quality data.
@rodrigobarraza 6 months ago
@@GodbornNoven Nope, I'm saying all generated data is bad. Anything under 99.9% accuracy will fall off so quickly, and current models aren't even above 90% accuracy.
@maggiejetson7904 6 months ago
@@rodrigobarraza Exactly. It causes chaos in control systems and blind spots in test scripts, so what makes you think AI won't have the same problem? I like the term "hallucinations", as it describes it well.
@Geniusderelict 6 months ago
Zuck is getting better and better at speaking the human language. Impressive!
@dr.mikeybee 6 months ago
Wow! Great stuff! Keep in mind that synthetic data from RLAI is what's most important. LLMs have no trouble creating training data from reasoning about chat history.
@jamesdalton3082 6 months ago
Zuckerborg's hair plugs are coming in nicely.
@DanLyndon 6 months ago
The answer is no, it can't, at least not in any meaningful way. You can't make the leap from inductive reasoning to abductive reasoning with better inductive reasoning.
@ShyCataclysm 6 months ago
Inference energy cost isn't the problem; it's training. Inference has been solved. Look up LPUs and Groq. And no, not X's Grok, but Groq.
@99dynasty 6 months ago
Synthetic data won't be seen as that impressive in 10 years' time. New architectures, and perhaps things like agency in the world, will be what makes the bigger difference.
@Sai_r2d2 6 months ago
But the main problem is generating *meaningful* synthetic data, because bad data may lead the model to collapse.
@christopherarendt3531 6 months ago
I would assume so, since the data they train on is curated, and that is the same idea as (though not synonymous with) reorganized data.
@silverbullet3939 6 months ago
What's possible is to fine-tune using synthetic data. At some point your accuracy won't improve, at which point you need more real data.
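In practice, "stop when accuracy won't improve" is just early stopping on a held-out metric. A small sketch, with an invented accuracy curve standing in for real fine-tuning runs:

```python
def finetune_one_epoch(epoch: int) -> float:
    """Placeholder: returns validation accuracy after one more epoch on
    synthetic data. The curve is invented to illustrate a plateau."""
    return min(0.85, 0.60 + 0.05 * epoch)

best, patience, bad_epochs = 0.0, 2, 0
for epoch in range(20):
    acc = finetune_one_epoch(epoch)
    if acc > best + 1e-3:
        best, bad_epochs = acc, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:
        print(f"Accuracy plateaued at {best:.2f} after epoch {epoch}; "
              "time to add real data rather than more synthetic data.")
        break
```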
@mahavakyas002 6 months ago
Sam Altman next?
@G0llwi 6 months ago
Is the whole interview available somewhere?
@BoldJonathan 6 months ago
I enjoyed this. This self-directed evolution must be baked into future models to perform a kind of artificial selection, which can lead to very open waters. Again, I'm amped up about seed-improver architectures, but maybe it's the Brooklynite in me: I do not want RSI to reach parity. I don't. Not trying to be cynical here. Love tech. Just have concerns here.
@BrianMosleyUK 6 months ago
Nice closing thought, Dwarkesh... We are definitely in uncharted territory in terms of geopolitics and AI.
@anywallsocket 6 months ago
If you’re a smart CEO you’ll tell people synthetic data is the way to go, simply because it definitely isn’t
@carbon-structure 6 months ago
Yeah that was a surprising creative leap
@szebike 6 months ago
Though model collapse can also happen with too much synthetic data in the training loop.
@myahiaoui 6 months ago
Just like humans need both imagination and real-world experience to thrive, the ideal scenario for LLM development likely involves a combination of both synthetic and real-world data
@yubtubtime 6 months ago
You know you're a tool when you make Zuck seem human and reasonable
@QuantPhilosopher89 6 months ago
One of the guys in the last episode claimed that models were under-parameterized...
@SP-ny1fk 6 months ago
At the expense of users, of course.
@leothelion634 6 months ago
Synthetic worlds next, 3D simulations of life like the Sims
@cacogenicist 6 months ago
You know where there's plenty of data? The actual universe. POV smart-glasses multimodal data would be useful, I would think. Domestic humanoid robots, eventually. Associate morphemes with bundles of sensory data at a deep level.
@Sprngm 6 months ago
"These models… they just wanna learn" - Ilya Sutskever
@Godines16 6 months ago
Feel the AGI…
@Leto2ndAtreides 6 months ago
Fearing China seeing the models assumes that they don't have more AI researchers than we do, and that they aren't training more. The thing with the Chinese government is that, at least for things that are obviously important, they are perfectly capable of dedicating oceans of resources on their own. The working assumption should be that they surpass us in AI no matter what we do, and that it then becomes a question of "Do we have access to their models?" Which, in an unfriendly environment, we might not.
@theacid1 6 months ago
China is always imitating, not innovating.
@mahavakyas002 6 months ago
The key factor here is compute, which China simply doesn't have access to (due to embargoes etc.). If they can crack that, then the US is in for a big shock. Until then, the US will have the lead.
@greenbeans7573 6 months ago
bruh, if you think China's models will ever surpass America's you're out of the loop.
@RakibHasan-ee2cd 6 months ago
@@mahavakyas002 That is absolute rubbish.
@RakibHasan-ee2cd 6 months ago
@@greenbeans7573 Really? 😂😂😂
@seth8141 6 months ago
You're just engineering the outcome you want at that point. I'm not sure how this wouldn't introduce bias.
@knifefest 6 months ago
You can take a million monkeys and try all kinds of methods to teach them, but they still won't have the brain of a human. Human brains aren't substantially larger. The gap in sophistication between human and primate brains can't really be described by the difference in stimuli alone. It's a difference in the structure of the brains that allows us to more effectively chunk our understanding of stimuli and self-correct. It's a change in architecture that's going to bridge the gap with AI. More efficient ways to model the brain as opposed to more efficient GPU architectures.
@Perfect-24 6 months ago
Science by the perfect wrong way
@joeporter4920 6 months ago
Training on synthetic data will have the same results that inbreeding does.
@brokelaowaiinchina 6 months ago
😂
@jonnylukejs 6 months ago
THAT'S MY WORK, I WILL GO NUTS ZUCK
@soumiksarkar4161 6 months ago
So basically he doesn’t know.
@OMGanger 6 months ago
Do you??
@Draknos90 6 months ago
Can there be any more ads in this? The guy is a complete shill.
@raynash4748 6 months ago
Whatever Mark says... do the opposite.
@user-user-user-user. 6 months ago
Love how Zuckerberg still finds ways to "meta" everything: words, ideas, nonsense.
@AdamBrusselback 6 months ago
I get the feeling, but I didn't find it to be out of context at all.
@StealthGT40 6 months ago
He runs a publicly traded company, so everything he says about Meta has to be carefully worded; otherwise he'd get in trouble with the SEC.
@alanyao 6 months ago
@yosup125 6 months ago
for the algo
@user-jk9zr3sc5h 6 months ago
The guy was describing "distillation" when he talked about a model outputting data and using it for training.
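For what it's worth, distillation in the usual sense means a student model trained to match a teacher's output distribution, not just training on the teacher's sampled text. A toy PyTorch sketch of the soft-target version, with made-up sizes (assumes torch is installed; none of this is from the video):

```python
import torch
import torch.nn.functional as F

vocab, temperature = 100, 2.0
teacher = torch.nn.Linear(16, vocab)  # toy "teacher" model
student = torch.nn.Linear(16, vocab)  # toy "student" model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, 16)  # a batch of fake inputs
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)

# The student learns to match the teacher's softened output distribution (KL loss).
student_logprobs = F.log_softmax(student(x) / temperature, dim=-1)
loss = F.kl_div(student_logprobs, teacher_probs, reduction="batchmean")
loss.backward()
opt.step()
print("distillation loss:", loss.item())
```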
@tankieslayer6927 2 months ago
Training a model on the synthetic data it generates does not change the distribution. This field is full of midwits using investor money.