Kolmogorov-Arnold Networks (KANs) and Lennard Jones

6,892 views

John Kitchin

1 month ago

KANs have been a hot topic of discussion recently (arxiv.org/abs/2404.19756). Here I explore using them as an alternative to a neural network for a simple atomistic potential using Lennard-Jones data.
See kitchingroup.cheme.cmu.edu/bl....
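
The linked post has the actual workflow; as a rough sketch of the idea (not the video's code, and assuming the pykan API from its README at the time, i.e. KAN, create_dataset, and train with LBFGS, which newer releases rename to fit), fitting a KAN to Lennard-Jones energies might look like:

    import torch
    from kan import KAN, create_dataset

    def lj(x, epsilon=1.0, sigma=1.0):
        # Lennard-Jones energy as a function of the pair distance r = x[:, 0]
        r = x[:, [0]]
        return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

    # Sample distances around the potential minimum and split into train/test.
    dataset = create_dataset(lj, n_var=1, ranges=[0.95, 2.0], train_num=200)

    # One input (r), a small hidden layer, one output (energy); cubic splines on a coarse grid.
    model = KAN(width=[1, 3, 1], grid=5, k=3, seed=0)
    model.train(dataset, opt="LBFGS", steps=50)  # renamed to model.fit in newer pykan

    pred = model(dataset["test_input"])
    rmse = torch.sqrt(torch.mean((pred - dataset["test_label"]) ** 2))
    print(f"test RMSE: {rmse.item():.4f}")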

16 comments
@dennyloevlie768 · 1 month ago
I was just about to reach out to you and ask what you thought of the KAN architecture. So glad to see you already made a video on it! Great video and explanation.
@JasonMitchellofcompsci · 26 days ago
It's basically just a different activation function with more tunable parameters than ReLU, yeah? I actually consider ReLU to be tunable because it's one way to look at your biases. Biases just set the threshold for when a signal goes through or not.
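
(A tiny plain-numpy illustration of that point, nothing from the video: with ReLU(w*x + b), the bias sets the input threshold -b/w below which no signal passes.)

    import numpy as np

    w, b = 2.0, -3.0                  # illustrative weight and bias
    x = np.linspace(0, 3, 7)
    out = np.maximum(0.0, w * x + b)  # ReLU(w*x + b)
    print(out)                        # zero until x exceeds -b/w = 1.5, then linear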
@bithigh8301 · 1 month ago
Nice! Can you make a tutorial about using Emacs like that? One could go full focus in that environment.
@scottotterson3978 · 9 days ago
I guess the magic is that you can get an analytic expression from a trained KAN -- no idea how much better that expression would be than what you'd get from, say, PySR, but I can imagine that it could be better at extrapolation than an MLP.
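
(pykan does ship a symbolic step for exactly this; a hedged sketch, continuing from a trained model/dataset like the one above and assuming the prune/auto_symbolic/symbolic_formula calls from the pykan README of the time, which may have changed since:)

    model = model.prune()                  # drop low-importance nodes/edges
    model.train(dataset, opt="LBFGS", steps=50)
    model.auto_symbolic()                  # snap each learned spline to its best-matching symbolic function
    print(model.symbolic_formula()[0][0])  # a sympy expression for the fitted potential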
@kanalarchis · 1 month ago
Thanks, that was a very good video. I don't understand the point of this. We already knew how to interpolate data with splines.
@PhysBrain · 1 month ago
Failure to extrapolate beyond the training set is a well-known consequence of polynomial interpolation/regression. However, as polynomial regressors go, splines tend to be much better behaved out of distribution than, say, power series or Taylor series. Splines are highly flexible polynomial functions, but the basis functions are typically defined so that they have limited support (the region over which they are non-zero). That means they do not go off to infinity or contribute other bad behavior to the function when evaluated far from their defined domain. So, at least with spline functions, there is an explicit acknowledgement that they will not produce useful extrapolations beyond the range of the provided data.

However, as the paper describes, that limited (local) support is actually what allows KANs to store additional information without forgetting previously learned data. During training, the only spline parameters that are modified are the ones whose basis functions evaluate to non-zero values (i.e. the parameters most closely associated with the new input data). Contrast this with weights on ReLU activation functions, which are always modified when the weighted sum of their inputs (multivariate) is above some threshold. The spline basis functions are only non-zero when their single (univariate) input is within some well-defined range, so only the parameters local to the new data will be modified.
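
(A quick scipy illustration of that local support, separate from pykan: a cubic B-spline basis element is exactly zero outside its knot span, so data far from that span never touches its coefficient.)

    import numpy as np
    from scipy.interpolate import BSpline

    # One cubic B-spline basis element, supported only on the knot span [0, 4].
    b = BSpline.basis_element([0, 1, 2, 3, 4], extrapolate=False)

    x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.nan_to_num(b(x))  # nan outside the support, treated as 0 here
    print(np.round(y, 3))    # nonzero only for 0 < x < 4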
@jasmeetsingh4688 · 1 month ago
How can I install the 'kan' package, or did you write a script for it?
@aaronbelikoff8605 · 1 month ago
The package is called “pykan” but you import it as kan.
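
(For anyone else searching, something like this should work; the PyPI name and the import name differ, and the exact version available at the time is an assumption:)

    # install with: pip install pykan
    from kan import KAN  # installs as "pykan", imports as "kan"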
@user-rl8to5nc2q · 1 month ago
I don’t see the advantage. Someone enlighten me?
@antonpershin998 · 1 month ago
More interpretable.
@user-rl8to5nc2q · 1 month ago
@antonpershin998 thanks
@MaJetiGizzle · 1 month ago
No catastrophic forgetting.
@user-rl8to5nc2q · 1 month ago
@MaJetiGizzle Hm, how so?
@Chidorin · 1 month ago
Fewer resources needed for final model usage, maybe 🤔
@starship9629 · 1 month ago
It is pronounced KolMOgoROV