It seems like this interviewer is asking better questions than some of the biggest tech podcasters out there. I'll have to watch more.
@bobbyc1120 6 months ago
Dwarkesh Patel is great. I can't watch Lex Fridman anymore.
@dumdum407 6 months ago
@@bobbyc1120 What is wrong with Lex Fridman?
@world-top0 6 months ago
@@dumdum407 Lex Fridman lies about the MIT scientist thing and advertises himself as an intellectual, which is far from his actual accomplishments.
@ArtOfTheProblem 6 months ago
@@dumdum407 When it comes to AI he actually has a very surface-level understanding; let me know if you want exact examples. But good on him for leveraging one guest into the next, all the way up the chain - can't hate on that hustle.
@Draknos90 6 months ago
@@world-top0 Meanwhile we are watching some random guy interview Zuck. OK mate.
@HAZMOLZ 6 months ago
Much more interesting listening to Zuck talk about tech and not marketing bs.
@Mt15621 6 months ago
Right, it's amazing to hear him talk about tech. He's very, very intelligent, and I would much rather hear him talk about tech, since he understands what's going on in the market today and the future of AI. He's obviously good at marketing too, but that's not what got him here. At the root, he is a programmer!
@achimmeyer9889 2 months ago
Sounds a bit like marketing to me. The empirical evidence doesn't map exactly to what they make it out to be, IMHO.
@dataman4503 6 months ago
I am quite sure that 99% of big tech CEOs can't discuss this at this level of detail. Only very, very few CEOs know the tech.
@timseguine2 6 months ago
Using synthetic data can be interpreted as a form of model smoothing. Based on a shallow analysis of the currently available information, it might be the case that it helps stabilize the gradients during training.
@harshnigam3385 3 months ago
very nicely put
@rockyraccoon 6 months ago
That's the worry, but from what I understand OpenAI did some tests last year and determined that simulated data can take us a lot further than we thought.
@ThreeChe 6 months ago
I'm glad they tuned up Zuck's algo. He is definitely climbing out of the Uncanny Valley.
@PeterResponsible 6 months ago
😂
@Sirbikingviking 6 months ago
Wouldn't training on too much synthetic data produce an effect similar to overfitting the dataset?
@alexanderbrown-dg3sy 6 months ago
If you don't mix in real data, you get model collapse. Some new research explores these real-to-synthetic ratios.
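A minimal sketch of the real-to-synthetic mixing idea discussed in this thread; the 30% default, the function name, and the ratio are illustrative assumptions, not figures from any specific paper.

```python
import random

def build_training_mix(real_docs, synthetic_docs, synthetic_fraction=0.3, seed=0):
    """Blend real and synthetic documents at a fixed ratio.

    Keeping a floor of real data is the usual hedge against model
    collapse; the synthetic_fraction default here is an arbitrary
    example, not a recommendation from the research mentioned above.
    """
    rng = random.Random(seed)
    # how many synthetic docs give the target fraction of the final mix
    n_synth = int(len(real_docs) * synthetic_fraction / (1 - synthetic_fraction))
    mix = list(real_docs) + rng.sample(synthetic_docs, min(n_synth, len(synthetic_docs)))
    rng.shuffle(mix)
    return mix
```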
@bobbyc1120 6 months ago
I don't think so. It would introduce bias towards thinking like previous language models, but it wouldn't be overfitting. Overfitting is when you fit too much to a "quirky" dataset, drawing conclusions from tiny amounts of data. In this context, overfitting would be if we went over the same dataset dozens of times (dozens of epochs), until the quality of the model's outputs started to decline.
@GlennGaasland 6 months ago
@@alexanderbrown-dg3sy But won't this depend on what kind of synthetic data it is? Isn't there an infinite number of possible kinds of synthetic data, depending on the process used to create it?
@alexanderbrown-dg3sy 6 months ago
@@GlennGaasland I can't remember the name of the paper off the top of my head, but the research showed that synthetic data doesn't retain the long tail of the data's entropy. You're right, though, that the type of synthetic data matters. The only way I see pre-training models on entirely synthetic datasets working is once we have models that can produce outputs beyond human capability, because then they could produce higher-quality data than humans. I always thought, and still believe, that a hybrid approach is currently the optimal solution. Conversely, we know that with GPT-4-level models, and now Llama-3-70B models 😂, we can exploit extended test-time compute to emulate models with much larger capacity and produce expert-level output. So you could fold this into a generation scheme, but you're potentially talking about hundreds of thousands or millions of outputs for every piece of potential data. If we had more efficient models (FFF networks, structured sparsity, etc.) and more efficient inference, this could possibly work at scale.
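To make the extended test-time compute idea above concrete, here is a minimal best-of-N sketch: sample many candidates per prompt and keep only the top-scoring ones as synthetic training data. The `generate` and `score` callables are hypothetical stand-ins for a model's sampling call and a quality/reward model, not a real library API.

```python
def distill_best_of_n(prompts, generate, score, n_samples=16, keep_top=1):
    """Spend extra inference compute per prompt, then keep only the
    highest-scoring outputs as synthetic data.

    generate(prompt) -> str and score(prompt, output) -> float are
    hypothetical hooks supplied by the caller.
    """
    dataset = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(n_samples)]
        candidates.sort(key=lambda out: score(prompt, out), reverse=True)
        dataset.extend((prompt, out) for out in candidates[:keep_top])
    return dataset
```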
@Leto2ndAtreides 6 months ago
The synthetic data will still be useful data. Like, some writers complain about their books being fed to these models. But you can instead feed in 50 intelligent analyses of a book, without ever feeding in the actual book. You could sharpen learning on topics before feeding the data in. You can restructure the data so that questions better mirror more intelligent responses. The raw data on the internet is not organized for what LLMs need to do... So, synthetic all the way!
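A sketch of that restructuring idea, assuming a hypothetical `llm(prompt) -> str` generation hook: derive many analyses from a source text so the original never enters the training set directly. The prompt template and the 50-analysis count are illustrative.

```python
def derive_synthetic_views(document, llm, n_analyses=50):
    """Produce derived training texts (analyses) from a source document.

    llm(prompt) -> str is a hypothetical generation hook; only the
    derived analyses, never the document itself, go into training.
    """
    views = []
    for i in range(n_analyses):
        prompt = (f"Write analysis #{i + 1} of the following text, focusing "
                  f"on a theme not covered by earlier analyses:\n\n{document}")
        views.append(llm(prompt))
    return views
```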
@eugenes9751 6 months ago
1:02 "Inference generating synthetic data to then go feed into that model"... The simplest and most efficient way of doing this is through simulation. Therefore, we're already in one of these "inference generating" simulations.
@rodrigobarraza 6 months ago
If you've trained models, you can tell that this doesn't really work as well as you'd think. In fact, it ends up introducing a lot of hallucinations when a model is fed data that has been generated by itself or another AI. We're not anywhere close to this really working, or at least working in a way where we as a species won't be able to tell the difference.
@Nick-yy3oy 6 months ago
We don't have enough intelligence, but also our data is private.
@GodbornNoven 6 months ago
@@rodrigobarraza That's only if the data fed to the AI is not filtered. High-quality data is high-quality data.
@rodrigobarraza 6 months ago
@@GodbornNoven Nope, I'm saying all generated data is bad. Anything under 99.9% accuracy will fall off so quickly, and current models aren't even above 90% accuracy.
@maggiejetson7904 6 months ago
@@rodrigobarraza Exactly. It causes chaos in control systems and blind spots in test scripts, so what makes you think AI won't have the same problem? I like the term "hallucinations" because it describes it well.
@Geniusderelict 6 months ago
Zuck is getting better and better at speaking the human language. Impressive!
@dr.mikeybee 6 months ago
Wow! Great stuff! Keep in mind that synthetic data from RLAIF is what's most important. LLMs have no trouble creating training data by reasoning about chat history.
@jamesdalton3082 6 months ago
Zuckerborg's hair plugs are coming in nicely.
@DanLyndon 6 months ago
The answer is no, it can't, at least not in any meaningful way. You can't make the leap from inductive reasoning to abductive reasoning with better inductive reasoning.
@ShyCataclysm 6 months ago
Inference energy cost isn't the problem; it's training. Inference has been solved. Look up LPUs and Groq. And no, not X's Grok, but Groq.
@99dynasty 6 months ago
Synthetic data won't be seen as that impressive in 10 years' time. New architectures, and perhaps things like agency in the world, will be what makes the bigger difference.
@Sai_r2d2 6 months ago
But the main problem is generating *meaningful* synthetic data, because bad data may lead to model collapse.
@christopherarendt3531 6 months ago
I would assume so, since the data they train on is curated, and that is the same idea as (though not synonymous with) reorganized data.
@silverbullet3939 6 months ago
What's possible is to fine-tune using synthetic data. At some point your accuracy won't improve any further, at which point you need more real data.
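A minimal sketch of that stopping rule, with hypothetical `train_step` and `evaluate` hooks: keep fine-tuning on synthetic batches only while accuracy on held-out real data still improves.

```python
def finetune_until_plateau(model, synth_batches, train_step, evaluate,
                           patience=3, min_delta=1e-3):
    """Fine-tune on synthetic data until validation accuracy plateaus.

    train_step(model, batch) updates the model in place, and
    evaluate(model) -> float returns accuracy on held-out *real* data;
    both are hypothetical hooks, not a real library API.
    """
    best, stale = evaluate(model), 0
    for batch in synth_batches:
        train_step(model, batch)
        accuracy = evaluate(model)
        if accuracy > best + min_delta:
            best, stale = accuracy, 0
        else:
            stale += 1
            if stale >= patience:
                break  # plateau reached: more synthetic data isn't helping
    return model, best
```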
@mahavakyas002 6 months ago
Sam Altman next?
@G0llwi 6 months ago
Is the whole interview available somewhere?
@BoldJonathan 6 months ago
I enjoyed this. This self-directed evolution must be baked into future models to perform a kind of artificial selection, which can lead to very open waters. I'm again amped up about seed-improver architectures, but maybe it's the Brooklynite in me: I do not want RSI to reach parity. I don't. Not trying to be cynical here. I love tech. I just have concerns.
@BrianMosleyUK 6 months ago
Nice closing thought, Dwarkesh... We are definitely in uncharted territory in terms of geopolitics and AI.
@anywallsocket 6 months ago
If you’re a smart CEO you’ll tell people synthetic data is the way to go, simply because it definitely isn’t
@carbon-structure 6 months ago
Yeah that was a surprising creative leap
@szebike 6 months ago
Though model collapse can also happen with too much synthetic data in the training loop.
@myahiaoui 6 months ago
Just like humans need both imagination and real-world experience to thrive, the ideal scenario for LLM development likely involves a combination of both synthetic and real-world data
@yubtubtime 6 months ago
You know you're a tool when you make Zuck seem human and reasonable
@QuantPhilosopher89 6 months ago
One of the guys in the last episode claimed that models were under-parameterized...
@SP-ny1fk 6 months ago
At the expense of users, of course.
@leothelion634 6 months ago
Synthetic worlds next, 3D simulations of life like the Sims
@cacogenicist 6 months ago
You know where there's plenty of data? The actual universe. POV multimodal data from smart glasses would be useful, I would think. Domestic humanoid robots, eventually. Associate morphemes with bundles of sensory data at a deep level.
@Sprngm 6 months ago
"These models... they just wanna learn" - Ilya Sutskever
@Godines16 6 months ago
Feel the AGI…
@Leto2ndAtreides 6 months ago
Fearing China seeing the models assumes that they don't have more AI researchers than we do, and that they aren't training more. The thing with the Chinese government is that, at least for things that are obviously important, they are perfectly capable of dedicating oceans of resources on their own. The working assumption should be that they surpass us in AI no matter what we do, and that it then becomes a question of "Do we have access to their models?" Which, in an unfriendly environment, we might not.
@theacid1 6 months ago
China is always imitating, not innovating.
@mahavakyas002 6 months ago
The key factor here is compute, which China simply doesn't have access to (due to embargoes, etc.). If they can crack that, then the US is in for a big shock. Until then, the US will have the lead.
@greenbeans7573 6 months ago
bruh, if you think China's models will ever surpass America's you're out of the loop.
@RakibHasan-ee2cd 6 months ago
@@mahavakyas002 That is absolute rubbish.
@RakibHasan-ee2cd 6 months ago
@@greenbeans7573 Really? 😂😂😂
@seth8141 6 months ago
You're just engineering the outcome you want at that point. I'm not sure how this wouldn't introduce bias.
@knifefest 6 months ago
You can take a million monkeys and try all kinds of methods to teach them, but they still won't have the brain of a human. Human brains aren't substantially larger. The gap in sophistication between human and primate brains can't really be explained by the difference in stimuli alone. It's a difference in the structure of the brains that allows us to more effectively chunk our understanding of stimuli and self-correct. It's a change in architecture that's going to bridge the gap with AI: more efficient ways to model the brain, as opposed to more efficient GPU architectures.
@Perfect-24 6 months ago
Science by the perfect wrong way
@joeporter4920 6 months ago
training on synthetic data will have the same results that inbreeding does
@brokelaowaiinchina 6 months ago
😂
@jonnylukejs 6 months ago
THATS MY WORK I WILL GO NUTS ZUCK
@soumiksarkar4161 6 months ago
So basically he doesn’t know.
@OMGanger 6 months ago
Do you??
@Draknos90 6 months ago
Can there be any more ads in this? The guy is a complete shill.
@raynash4748 6 months ago
Whatever Mark says....do the opposite.
@user-user-user-user. 6 months ago
Love how Zuckerberg still finds ways to "meta" everything: words, ideas, nonsense.
@AdamBrusselback 6 months ago
I get the feeling, but I didn't find it to be out of context at all.
@StealthGT40 6 months ago
He runs a publicly traded company, so everything he says about Meta has to be carefully worded; otherwise he'd get in trouble with the SEC.
@alanyao 6 months ago
?
@yosup125 6 months ago
for the algo
@user-jk9zr3sc5h 6 months ago
The guy was talking about "distillation" when he described a model outputting data and using it for training.
@tankieslayer6927 2 months ago
Training a model on the synthetic data it generates does not change the distribution. This field is full of midwits using investor money.
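That claim can be probed with a toy experiment: repeatedly sample from a categorical distribution and refit it by counting. In expectation the distribution is indeed a fixed point, but finite sampling noise keeps pruning the low-probability tail, which is the mechanism usually cited for model collapse. A minimal sketch, with arbitrary vocabulary and sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.full(1000, 1 / 1000)  # toy "model": uniform categorical over 1000 tokens

for generation in range(20):
    # generate synthetic data from the current model, then refit on it
    samples = rng.choice(len(p), size=5000, p=p)
    counts = np.bincount(samples, minlength=len(p))
    p = counts / counts.sum()

print("surviving tokens:", int((p > 0).sum()), "of", len(p))
# The expected distribution never changes, but any token that draws zero
# samples in some generation is gone for good, so the support shrinks.
```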