Learning at test time in LLMs

24,934 views

Machine Learning Street Talk

1 day ago

Comments: 107
@MachineLearningStreetTalk 1 month ago
SLIDES SO YOU CAN FOLLOW ALONG DURING THE TALK: www.dropbox.com/scl/fi/sys3iasc63lgj8lm5t0ld/JONAS_SLIDES.pdf?rlkey=ak6ir61a2pyhrfuwyvgrdvq66&st=9cloopv9&dl=0
@KevinKreger 1 month ago
@@MachineLearningStreetTalk 🤩 thanks!
@alexandermoody1946 1 month ago
A very keen and interested young man who clearly expressed his passion for providing a solution. I really hope he continues to share his thoughts and beliefs with others and that he fulfills the potential he shows so fluently.
@zandrrlife 1 month ago
😂 I was literally geeking out about this paper two weeks ago. Entropy-based everything is clearly the answer: data selection, pretraining sampling, fine-tuning, inference... everything should be driven by entropy, because in real life there are no gold labels, only best guesses based on evidence and accepted axioms.
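For anyone wondering what "entropy-based everything" could look like in code, here is a minimal sketch of entropy-driven data selection. It is only an illustration of the idea, not the paper's pipeline: `model_probs_fn` (returning the model's next-token distribution for a candidate) and the candidate pool are hypothetical stand-ins.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a model's predictive distribution."""
    probs = probs[probs > 0]  # drop zeros to avoid log(0)
    return float(-(probs * np.log(probs)).sum())

def select_by_entropy(candidates, model_probs_fn, k=8):
    """Keep the k candidates the model is most uncertain about --
    heuristically, the ones carrying the most new information."""
    return sorted(candidates,
                  key=lambda c: predictive_entropy(model_probs_fn(c)),
                  reverse=True)[:k]
```

The same scoring loop can drive data selection, pretraining sampling, or fine-tuning; only the candidate pool changes.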
@memegazer 1 month ago
Color me shocked that updating a model closer to real time, rather than training it and then freezing that state, turned out to be an effective way for the model to learn better. I mean, it is cool that this approach is being taken seriously and actively researched, but within the circles debating AI sentience, consciousness, reasoning, etc., this is not a novel point of speculation about the difference between machine intelligence and organic intelligence. I have been suggesting that training, for a model, is analogous to the continuous experience of organic agenthood. I don't find it surprising that something like a hyperwebster scaling pattern seems to emerge at a more "intuitive" level.
@warpdrive9229 1 month ago
@@memegazer Nicely put!
@luke.perkin.online 1 month ago
@@memegazer Is it a fair summation that it's attempting to solve extrapolation, generalisation, and abstract reasoning with more of the same, i.e. more interpolation?
@henrismith7472 1 month ago
What about the chips as well? Oh yeah, thermodynamic computation is becoming a thing.
@jonfe 1 month ago
Why do you think it's not entropy-based right now? Entropy is tied to the process of prediction, even if it's not explicitly named.
@PhotoninDark 1 month ago
Local and context-based learning is an area where more and more people should work. It may solve the huge energy problem of running LLMs and could have applications ranging from query retrieval in a DBMS to edge learning devices.
@smicha15 1 month ago
Oh man, my eyes lit up when I saw that oil-and-paint magnetic ferrofluid animation! I literally saved that exact video clip from YouTube a few months ago because it was just sooooo 4K. Awesome!!!
@steve_jabz 1 month ago
Yesss, I was waiting for this. I specifically looked for it on the channel last night and realized the paper only came out a week ago.
@PhotoninDark 1 month ago
Superb presentation, but it would be better to show the slides while the presenter is talking, maybe as a separate window.
@parisfrancepp 1 month ago
I do agree: we need the slides, we don't need to watch the guy. Or maybe putting the guy in a reduced window in a corner would be better.
@SapienSpace 1 month ago
Warping the state space to experience, such as by K-means clustering, is how knowledge is applied efficaciously ("system 2" thinking). Interesting talk, thank you for sharing.
@Charles-Darwin 1 month ago
This TTT is very exciting. It's only a matter of time (or coincidence) before the OS community achieves capability as good as internal OAI's. Were you able to see the robotics they've been working on, the ANYmal bot? I like the technical video updates they put out, but I always wish there was an accompanying interview.
@user-wr4yl7tx3w 1 month ago
Wow, the time stamps with info are really helpful.
@superfliping 1 month ago
This is connected to the power consumption of AGI. Scaling laws of progress will be based on electricity flowing through systems; limited power is the main factor now.
@SLAM2977 1 month ago
Top material as usual.
@kevalan1042 1 month ago
Where can I learn more about test-time training? I am not familiar with this concept.
@aiamfree 1 month ago
I've tried some of the techniques he's talking about, even on very small data, and have seen some interesting results!
@lestode4816 1 month ago
A nice complement to my recent active learning course :)
@AlexKen-zv8mm 1 month ago
Can this be used with the Entropix sampler?
@user-wr4yl7tx3w 27 days ago
That quote from Vapnik sounds like engineering at best and hacking at worst.
@SinanWP 27 days ago
Good, we are getting to the Person of Interest level soon :)
@BeTheFeatureNotTheBug 1 month ago
When he says that the learned information becomes part of its beliefs, technically how does that work? I’m really stuck on how providing new data “gets into” the model.
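Mechanically, "getting into" the model means a gradient step: the retrieved text is used as a next-token-prediction target, and backpropagation nudges the weights so the model assigns that text higher probability. Below is a minimal PyTorch sketch of one such update, an illustration of the general idea rather than the speaker's exact recipe; it assumes `model` maps token ids to logits.

```python
import torch
import torch.nn.functional as F

def test_time_update(model, token_ids, lr=1e-4):
    """One SGD step of next-token prediction on retrieved text.
    Afterwards the information lives in the updated weights rather
    than in the prompt -- that is how it becomes part of the 'beliefs'."""
    model.train()
    logits = model(token_ids[:, :-1])          # predict each next token
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # (batch*seq, vocab)
        token_ids[:, 1:].reshape(-1),          # shifted targets
    )
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad               # the weights actually change
    return loss.item()
```

In practice such updates are often restricted to a small adapter (e.g. LoRA) and discarded after the query, so the base model's long-term beliefs stay untouched.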
@user-wr4yl7tx3w 1 month ago
Does that mean it can be unrelated and non-redundant data for local training? It's hard to see how that can help with inference. If not, then are you not imposing an inductive prior by your selection of data? And your bet effectively becomes the inductive prior.
@randylefebvre3151 1 month ago
Now this is scientific research!
@luke.perkin.online 1 month ago
Is the answer to solving extrapolation and generalisation just to interpolate better? It seems like this will get us to GenAI saturation faster, not take us beyond it.
@MachineLearningStreetTalk 1 month ago
It's not so much about extrapolation vs. interpolation as about local vs. global.
@luke.perkin.online 1 month ago
@MachineLearningStreetTalk Local manifold interpolation rather than global manifold interpolation? I think I must be missing something.
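A toy way to see the local-vs-global distinction outside of LLMs: a global model is fit once on all the data, while a local model is refit in the neighbourhood of each query, in the spirit of local regression. The data and models below are hypothetical, purely to illustrate the contrast.

```python
import numpy as np

def global_predict(X, y, x_query):
    """One linear fit on all the data, reused for every query."""
    w = np.polyfit(X, y, deg=1)
    return np.polyval(w, x_query)

def local_predict(X, y, x_query, k=10):
    """Refit on only the k nearest neighbours of the query --
    the 'local' analogue of test-time training."""
    idx = np.argsort(np.abs(X - x_query))[:k]
    w = np.polyfit(X[idx], y[idx], deg=1)
    return np.polyval(w, x_query)
```

Test-time training plays the role of `local_predict`: the base model is the global fit, and per-query fine-tuning specialises it to the query's neighbourhood.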
@flyingapple7119 1 month ago
It's kinda hard to follow when we don't see what he's talking about. Show the slides instead of the presenter. 🙏
@diga4696 1 month ago
Such a cool atmosphere!!
@KevinKreger 1 month ago
Working on ARC, François gave us some hints about this approach on his university tour. Now he's left Google to work on ....? Something we learned in the ARC-AGI Challenge, we hope!!!
@JGLambourne 1 month ago
It's not going to be easy to aggregate multiple queries into a batch and run inference with this technique: the TTT would need different fine-tuning data for each query. It might be very good for small models running on personal desktop machines.
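To make the batching point concrete, here is a rough sketch of the serving loop such a system implies; `retrieve`, `fine_tune`, and `generate` are hypothetical placeholders, not an actual API.

```python
import copy

def answer_with_ttt(base_model, queries, retrieve, fine_tune, generate, k=16):
    """Each query needs its own fine-tuning set, so queries cannot
    share one forward batch -- the loop is inherently per-query."""
    answers = []
    for q in queries:                      # serial, not batched
        model = copy.deepcopy(base_model)  # throwaway per-query copy
        fine_tune(model, retrieve(q, k))   # this query's nearest neighbours
        answers.append(generate(model, q))
        del model                          # discard the adapted weights
    return answers
```

Per-query adapters (e.g. LoRA) would shrink the throwaway state, but the per-query loop remains, which is why the technique looks like a better fit for a single-user desktop than for a high-throughput serving stack.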
@johannes523 1 month ago
I'm feeling the AGI while watching this...
@drdca8263 1 month ago
Really? Why? I'm not asking as an "AGI can't happen" kind of thing; I don't see any fundamental limit to prevent it. But I don't see why these results, as neat as they are, would make you feel that way. Edit: Ah, were you responding to 28:08 in particular? About selecting the most informative data in order to inform belief about the situation at hand?
@macchiato_1881 1 month ago
@@drdca8263 He's probably new-ish to the machine learning sphere. I was once like that too, back when every new technique or algorithm reported a 'performance improvement'.
@johannes523 1 month ago
@@macchiato_1881 Nah, I've been around since 2016, if that counts as long to you.
@johannes523 1 month ago
@@drdca8263 Yeah, I think similar methods will likely play a large role in further progress towards more general agents.
@icriou 23 days ago
The video quality is amazing, thanks, but please make the slides a first-class citizen for this kind of talk. What's the point of a 99% speaker headshot?
@shuikuan 1 month ago
At 00:15 that is Geneva, not Zurich 😂😂
@pedrogorilla483 1 month ago
Same city; one is written in French, the other in German 🫠
@shuikuan 1 month ago
@pedrogorilla483 Are you joking? Not sure what you mean.
@user-wr4yl7tx3w 1 month ago
How much of it is really about just teaching the model what it needs to know at the last minute so that it can perform well on the expected question? Are we not simply engineering an answer, given that if the local training is done on unrelated data the result will be worse?
@LostInTheRush 1 month ago
I can't concentrate on what this dude is saying because he's so annoyingly handsome, Jesus Christ.
@mojitoism 1 month ago
Historically, betting against technological advancement just because you couldn't imagine exactly how it could work has always been a bad idea.
@deter3 1 month ago
Two serious problems with this paper: 1. How do you know that embedding nearest neighbours can find the nearest data for training? Embeddings are not the right measurement, especially for complex, multi-dimensional similarity requirements. 2. What are the business application scenarios? This method is complicated, and most business applications have pretty stable tasks; why would I need to hold a huge distributed index for highly flexible tasks? I do not see much potential in this paper.
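On point 1, the retrieval step being questioned is, roughly, cosine-similarity nearest neighbours over precomputed embeddings, along these lines (a generic sketch, not the paper's exact code):

```python
import numpy as np

def nearest_neighbors(query_vec, corpus_vecs, k=16):
    """Indices of the k corpus embeddings most cosine-similar
    to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]
```

Whether cosine distance in a single embedding space captures the task-relevant notion of similarity is exactly the open question raised here.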
@AlexKen-zv8mm 1 month ago
eli5 please
@zzz_ttt_0091 1 month ago
I am compute-poor.
@SapienSpace 1 month ago
Me too. However, about one gigabyte of DNA is all a sperm needs to swim to the egg and build a "brain", and "dumb animals" like deer are superior to humans in that they learn to walk in hours instead of months.
@cybermeth_ 1 month ago
There is no wall
@adamkadmon6339 1 month ago
Machine learning used to be about maths. Now it is about hardware, hype, opinions, and moving big software blocks around. A generation has come into being that lacks the basic ability to make the kinds of jumps in theory made in the 1980s and 1990s.
@KevinKreger 1 month ago
OK boomer
@RickySupriyadi 1 month ago
I disagree; scaling does make something emerge.
@technokicksyourass 1 month ago
Um... no, it really isn't. Read any of the major papers from the last 10 years; you will see some pretty serious math in play.
@marilynlucas5128 1 month ago
😂 Exactly. I spent time listening to the presentation and heard nothing important. I'm trying to understand the mathematical framework behind what he was presenting. The easiest way to convey your message in machine learning is to use mathematics, algorithms, etc. If I don't hear any of that, I can't take the presentation seriously.
@KevinKreger 1 month ago
@@marilynlucas5128 I found the Git repo using Google, and both papers are linked there. Go for it.
@user-wr4yl7tx3w 1 month ago
For me, the presentation had so much jargon that it was difficult to follow and understand.
@alpha7s708 25 days ago
Exactly. I really didn't get what he was saying the whole time.
@ekstrapolatoraproksymujacy412 1 month ago
The editing is ridiculously bad.
@technokicksyourass 1 month ago
Yeah... they really needed to cut to the equations when he was referring to them. A bit of highlighting wouldn't go astray either.
@MachineLearningStreetTalk 1 month ago
Sorry, we are expanding the editing team and upskilling new starters. It would be super helpful if you didn't mind giving more detailed feedback.
@KevinKreger 1 month ago
@@MachineLearningStreetTalk When he is talking about and gesturing at a slide, we should see it more or less in the same time frame.
@MachineLearningStreetTalk 1 month ago
@@KevinKreger Yeah, that's good feedback. We felt the video would be more engaging if it didn't have PowerPoint-presentation vibes, and tried to make it more "real", so you feel like you're in the room with the speaker. We tried various ways of showing the slide and the speaker at the same time, and the camera angles unfortunately didn't work (it was better on the AGI conference talks). We will take this feedback on board.
@MachineLearningStreetTalk 1 month ago
@@KevinKreger Here are the slides: www.dropbox.com/scl/fi/sys3iasc63lgj8lm5t0ld/JONAS_SLIDES.pdf?rlkey=ak6ir61a2pyhrfuwyvgrdvq66&st=9cloopv9&dl=0
@therobotocracy 1 month ago
The market messed up in anthropomorphizing AI, when AI is really better framed as a space. We got that wrong, and it has led us, and is leading us, to bad places.
@KevinKreger 1 month ago
Much of the analysis uses our human thinking process to imagine how the language model works or should work, so what is your point?
@therobotocracy 1 month ago
AI isn't thinking or doing anything with knowledge; it's closer to a search. So all the comparisons to consciousness come from this projection we put onto it. You would never think the internet or a computer is conscious, but for some reason we spend a lot of time and effort discussing how evil AI could be once it's smart enough. Also, the design of the products and their uses is limited by this human-like role we put onto AI.