KAN: Kolmogorov-Arnold Networks | Ziming Liu

36,606 views

Valence Labs

A day ago

Comments: 48
@ferencszalma7094 · 6 months ago
0:02:35 Kolmogorov-Arnold Representation Theorem (KART): "The only true multivariate function is the sum."
0:03:45 Details of (two-layer) KART: 1-D edge functions and node sums
0:05:05 KAN, Kolmogorov-Arnold Network (original 2-layer)
0:05:55 Multi-layer KAN
0:07:55 MLP and KAN comparison
0:09:45 B-spline basics
0:14:30 B-spline Cox-de Boor recursion formula (inefficient)
0:14:45 Implementation tricks: residual activations, initialization, grid update
0:38:05 Q: Expressivity vs. generalization, bias-variance tradeoff, U-shaped loss as a function of p (number of features)
0:39:15 Q: What if an activation is out of range of the finite spline domain? -> Use the residual activation function!
0:40:40 KANs to solve physics problems from raw data or already partially processed data?
0:43:15 KANs to solve PDEs?
0:44:35 Grid-resolution finetuning is done manually
0:47:20 Can you replicate KANs with MLPs of the right breadth and depth? Yes. Would be nice to see a unified theory.
0:51:18 What's the novelty of KANs? At the technical level, what makes a KAN a KAN?
0:58:16 Inductive bias: whether KANs' or DNNs' inductive biases better fit a task: vision, language, science
0:59:25 History of connectionism vs. symbolism:
1957 - Frank Rosenblatt, invention of the perceptron
1969 - Marvin Minsky & Seymour Papert, Perceptrons: An Introduction to Computational Geometry: "Perceptrons cannot do XOR"
1974 - Paul Werbos, "A multi-layer perceptron can do XOR"
1975 - Robert Hecht-Nielsen, Kolmogorov networks (2 layers, width 2n+1)
1988 - George Cybenko, "2-layer Kolmogorov networks can do XOR"
1989 - Tomaso Poggio, "KA is irrelevant for neural networks"
2012 - the year of modern deep learning
Expert systems/symbolic regression vs. KANs vs. MLPs/Kolmogorov networks
1:04:20 KAN vs. MLP philosophy: high internal degrees of freedom, reductionism, parts are important vs. low internal degrees of freedom, holism, interaction of parts is important
1:08:45 Intricacies of developing something new: KANs beyond 2 layers
1:11:45 GitHub repos
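[Editor's note: since the outline above calls out the Cox-de Boor recursion at 0:14:30, here is a minimal NumPy sketch of it. This is an illustration added here, not code from the talk; it is the naive recursion, which is exactly why the talk labels it inefficient.]

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline basis
    function of degree k over knot vector t. Exponential in k, since
    each call spawns two recursive calls; practical implementations
    evaluate all bases in one vectorized bottom-up pass."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] != t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

knots = np.arange(8.0)                  # uniform knot vector
print(bspline_basis(0, 3, knots, 1.5))  # one cubic basis function at x = 1.5
```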
@TomHutchinson5 · 6 months ago
Wow, this is blowing up. Most of the journal club videos get hundreds of views. This already has thousands! I look forward to watching the talk and reading the paper.
@Pingu_astrocat21 · 6 months ago
Thank you for uploading this :)
@agranero6 · 2 months ago
I didn't expect to see a mention of Jones polynomials... the last time I talked about those was... well, in the '80s.
@automatescellulaires8543 · 6 months ago
Yes we KAN? I swear I've already heard this somewhere.
@HD-qq3bn · 6 months ago
I suggest using piecewise functions instead of splines; they show some similarities with FEM and may be easier to train.
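[Editor's note: for what it's worth, here is a minimal sketch of that suggestion: a KAN-style layer whose learnable edge functions use piecewise-linear FEM-style "hat" basis functions on a fixed grid. This is a toy construction for illustration, not from the talk; note that hat functions coincide with degree-1 B-splines.]

```python
import torch
import torch.nn as nn

class HatKANLayer(nn.Module):
    """Each edge (input i -> output j) carries a learnable piecewise-linear
    1-D function expressed in a triangular 'hat' basis; nodes sum edges."""
    def __init__(self, in_dim, out_dim, num_knots=8, lo=-1.0, hi=1.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(lo, hi, num_knots))
        self.h = (hi - lo) / (num_knots - 1)                  # knot spacing
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_knots))

    def forward(self, x):                                     # x: (batch, in_dim)
        d = (x.unsqueeze(-1) - self.knots).abs() / self.h     # (batch, in, knots)
        hat = torch.clamp(1.0 - d, min=0.0)                   # triangular basis
        # Sum over knots (builds each edge function), then over inputs (node sum).
        return torch.einsum("bik,oik->bo", hat, self.coef)

layer = HatKANLayer(2, 3)
print(layer(torch.randn(5, 2)).shape)  # torch.Size([5, 3])
```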
@gemini_537 · 6 months ago
Gemini 1.5 Pro: This video is about Kolmogorov-Arnold Networks (KANs), presented by Ziming Liu, a PhD student at MIT. KANs are a new type of neural network architecture inspired by the Kolmogorov-Arnold representation theorem. This theorem states that any continuous multivariate function can be represented as a finite sum of compositions of single-variable functions. The video covers the following aspects of KANs:
* Motivation: why KANs were developed and what problems they address (0:00-2:22)
* Mathematical foundations: explanation of the Kolmogorov-Arnold representation theorem (2:22-7:44)
* Visualization of KANs: how KANs are visualized as networks (7:44-12:12)
* Training KANs: how to train a KAN to approximate a function (12:12-15:37)
* Comparison with MLPs: how KANs compare to traditional Multi-Layer Perceptrons (MLPs) (15:37-20:22)
* Applications of KANs: examples of using KANs for symbolic and special-function approximation (20:22-29:31)
* Interpretability of KANs: how KANs can be interpreted to reveal the underlying structure of the function they approximate (29:31-41:26)
* Discovery with KANs: how KANs can be used to discover new relationships between variables (41:26-47:22)
* Case study: recovering scientific results with KANs (47:22-58:12)
* Open questions and future directions: discussion of limitations and future research areas for KANs (58:12-1:00:00)
In conclusion, KANs are a promising new direction in neural network research that leverages the Kolmogorov-Arnold representation theorem to achieve interpretable function approximation. They have the potential to be particularly useful in scientific applications where understanding the relationships between variables is important.
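[Editor's note: for concreteness, the theorem the summary paraphrases has an explicit two-layer form; this is the standard statement, added here for reference:]

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

Here the inner functions φ_{q,p} and outer functions Φ_q are continuous and univariate; a KAN makes these learnable (splines on edges, sums at nodes) and generalizes the construction to arbitrary depth and width.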
@hanyanglee9018 · 6 months ago
You could simply make activation functions for each instruction and protect the activation outputs between layers; it would probably work, except I don't know how to protect the activations between layers in a graceful way. Softmax helps self-attention protect *that*, and BN, though it seems not to be used anymore, actually protects *that* too. Neither is graceful, since they both distort the forward path in some way. Or, if we don't use the B-spline, we can still use a sigmoid (with MIG) to do a similar job. Edit: the sigmoid way doesn't provide any known interpretability; it's only the black-box way.
@Kram1032 · 6 months ago
So in principle, you could clearly take the functions a KAN is built from to be NNs themselves. Furthermore, you could take a KAN of KANs, which strikes me as a second way to "go deep" on KANs. It also feels a little to me like the connections between objects, functions, functionals, natural transformations... i.e., you'd essentially be able to encode category-theoretic notions in KANs. Is that a reasonable comparison to make? If so, I wonder if you could simply take your base objects to be, say, the primitives of your favourite proof assistant, plus arbitrarily deep, arbitrarily nested KANs, to effectively and efficiently find arbitrary functions that represent well whatever relationships you throw at them. It's probably not at all easy to do, but that seems to me to be the most powerful version.
@oswack · 1 month ago
So, based on my understanding, this model's usability hinges on the assumption that one has a "perfect" mapping function, whereby no information is lost when applying Kolmogorov's theorem to recover the 1-D edge functions? Because that in itself can be extremely difficult, even as a near approximation.
@brian5735 · 6 months ago
I like the 1-D example showing the integration. Great for PDEs.
@deliyomgam7382 · 6 months ago
Since you haven't given up on KAN, you can apply a normalization function to the whole dataset. E.g., x = y^2 may go out of bounds for large values of x; you could represent the section of the B-spline where the curve's derivative would explode with a symbol, while keeping the curve x = y^2 and showing its multipliers alongside. E.g., you can represent a billion with "B", as a calculator does; it also saves space. The multipliers would show the difference between x = y^2 and nx = y^2... I don't know if I've understood it right; if so, best of luck with your PhD.
@vegedog-ro1ce · 5 months ago
Thank you very much for your video. I still have one doubt: is the KAN network suitable for multiple outputs?
@vipulverma3640 · 3 months ago
Can we use this for time series, to forecast future values?
@ntej7927 · 4 months ago
Interesting... Thanks.
@deliyomgam7382 · 6 months ago
Can KAN be extended to a math transformer?
@spencerfunk6697 · 5 months ago
Exactly 10% of your subs liked this.
@taraaryal9609 · 6 months ago
Do you also have an example of solving an ODE using a KAN?
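[Editor's note: not answered in the thread, but here is a minimal PINN-style sketch of one way to do it. It assumes the `KAN` class from the pykan repo, with the `width`/`grid`/`k` constructor arguments shown in its README; treat the exact signature as an assumption.]

```python
import torch
from kan import KAN  # assumed: the pykan package from the repos linked at 1:11:45

# Toy problem: learn y(t) with y' = -y, y(0) = 1 on [0, 2],
# by minimizing the ODE residual plus the initial-condition error.
model = KAN(width=[1, 5, 1], grid=5, k=3)  # 1 input (t), 1 output (y)
t = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    y = model(t)
    dy = torch.autograd.grad(y.sum(), t, create_graph=True)[0]  # dy/dt
    loss = (dy + y).pow(2).mean() + (model(t0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned model(t) should approximate exp(-t).
```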
@shinkurt · 6 months ago
Thanks guys
@deliyomgam7382 · 6 months ago
So n can be represented as a function itself, instead of going to infinity.
@space-time-somdeep · 6 months ago
Thanks
@radosawjasiewicz2494 · 6 months ago
What about vector functions?
@sunghjung45 · 6 months ago
The question at 1:18:43 killed me 🤣
@sunghjung45 · 6 months ago
kzbin.info/www/bejne/a6GXe3iOrMqVpac
@Qazi-it9mo · 2 months ago
The guy took 5 minutes to ramble, wtf.
@PabloHorneman-rd4cq · 6 months ago
Legend!
@deliyomgam7382 · 6 months ago
E.g., π is present in a circle, so KAN is good for producing formulas.
@deliyomgam7382 · 6 months ago
So circle × circle = donut, but to define direction you need trigonometry... e.g., circle × sin(2) or something, or sin(circle), or circle·sin(x) = donut. Invite Homer, please... one hole, then train it to find the holes of knots...
@eduardocesargarridomerchan5326 · 1 month ago
Kolmogorov-Arnold networks tutorial in Spanish: kzbin.info/www/bejne/gJOcqIB5hbqfpMU
@araldjean-charles3924 · 5 months ago
Are we talking here about a general representation theory? Are B-splines the only basis set that can be used? What about wavelets, Fourier series, etc.?
@mrpocock · 5 months ago
People are now experimenting with other curves. Radial basis functions seem to be a low-cost drop-in replacement for splines, but people are also using Fourier or wavelet bases, for example, which are not splines at all.
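[Editor's note: to make the drop-in idea concrete, here is a minimal toy sketch of a KAN-style layer whose edge functions are built from Gaussian RBFs instead of B-splines; this is an illustration of mine, not any particular library's API.]

```python
import torch
import torch.nn as nn

class RBFKANLayer(nn.Module):
    """Each edge carries a learnable 1-D function expressed as a sum of
    Gaussian bumps on a fixed grid of centers; nodes sum their edges."""
    def __init__(self, in_dim, out_dim, num_centers=8, lo=-1.0, hi=1.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(lo, hi, num_centers))
        self.gamma = ((hi - lo) / num_centers) ** -2   # bandwidth from spacing
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_centers))

    def forward(self, x):                              # x: (batch, in_dim)
        d = x.unsqueeze(-1) - self.centers             # (batch, in, centers)
        phi = torch.exp(-self.gamma * d**2)            # Gaussian basis values
        # Sum over centers (each edge function), then over inputs (node sum).
        return torch.einsum("bic,oic->bo", phi, self.coef)

layer = RBFKANLayer(2, 3)
print(layer(torch.randn(4, 2)).shape)  # torch.Size([4, 3])
```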
@TeeTeeNet · 6 months ago
Hannes, if you say thank you after a speaker has answered your question, you let them know that you're done. Just saying "yup" is kinda rude.
@mohammedbenaissa1278 · 6 months ago
Can we make a CNN with KAN layers?
@jasdeepsinghgrover2470 · 6 months ago
Yes, you can... change the grid to a subset of pixels in the window.
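[Editor's note: as a rough sketch of that reply (a toy construction added here, not from the talk or the pykan library): extract each k×k window with nn.Unfold, then run the window through per-edge learnable 1-D functions, here built from a fixed bank of shifted tanh basis functions, and sum.]

```python
import torch
import torch.nn as nn

class KANConv2d(nn.Module):
    """Toy 'KAN convolution': every (window pixel -> output channel) edge
    gets a learnable 1-D function; each output pixel sums its window's edges."""
    def __init__(self, in_ch, out_ch, k=3, num_basis=6):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=k, padding=k // 2)
        self.register_buffer("shifts", torch.linspace(-1, 1, num_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(out_ch, in_ch * k * k, num_basis))

    def forward(self, x):                                  # x: (B, C, H, W)
        B, _, H, W = x.shape
        p = self.unfold(x).transpose(1, 2)                 # (B, H*W, C*k*k)
        phi = torch.tanh(p.unsqueeze(-1) - self.shifts)    # shifted-tanh basis
        out = torch.einsum("bnic,oic->bno", phi, self.coef)
        return out.transpose(1, 2).reshape(B, -1, H, W)

layer = KANConv2d(3, 8)
print(layer(torch.randn(2, 3, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```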
@deliyomgam7382 · 6 months ago
An LLM to design a physical language representation... a sphere representing nothing, then twisted and stretched to represent some memories... a cluster of neurons might represent a memory, but it is still capable of processing... since audio and video are made of the same zeros and ones.
@elirane85 · 6 months ago
God, I wish this entire "AI boom" had happened when I was in college almost 20 years ago; I would have been able to publish so many papers. Now it's a sigmoid, boom, paper; now it's an exponent, paper; now a spline, paper. What's next, a directed graph, paper, a fully connected graph, paper? When exactly did research papers start looking like my freshman-year homework?
@darkhydrastar · 6 months ago
Piss off ghost
@srinjoy.bhuiya · 6 months ago
Why so salty 😅?
@braineaterzombie3981 · 6 months ago
Bro's jealous
@cansacan7534 · 5 months ago
The research meta nowadays is who imagines it first, implements it, and puts it on arXiv first.
@JavierHerrera-od4ti · 5 months ago
You can still publish
@darkhydrastar · 6 months ago
👏😎
@tankieslayer6927 · 6 months ago
Tegmark attention-whoring again and giving physicists a bad name. This is a completely worthless paper. Learning activation functions isn't a new idea; it's just unnecessary.
@choi77770 · 6 months ago
You should give reasons for this comment
@tudoropran1967 · 6 months ago
A statement with no arguments is unscientific.