Kolmogorov-Arnold Networks: MLP vs KAN, Math, B-Splines, Universal Approximation Theorem

33,328 views

Umar Jamil

Comments: 131
@josephamess1713
@josephamess1713 8 ай бұрын
The fact this video is free is incredible
@umarjamilai
@umarjamilai 8 ай бұрын
You're welcome 🤗
@goldentime11
@goldentime11 8 ай бұрын
Thanks Umar for such a wonderful tutorial! I've been eyeing this paper for a while!
@edsonjr6972
@edsonjr6972 8 ай бұрын
Your videos are literally the only 1hr+ ones I would ever watch on YouTube. Keep going mate, extremely high quality content 👏🏽👏🏽
@boredcrow7285
@boredcrow7285 Ай бұрын
I've learnt so much from your videos. I've been following you for about a year now, starting from the diffusion implementation, and the quality of the content you post here is insane. Thank you so much, Jamil; people like you are the reason the AI community is so open and free for anybody to learn and explore.
@AdmMusicc
@AdmMusicc 8 ай бұрын
You're on a mission to make the best and friendliest content to consume deep learning algorithms and I am all in for it.
@nokts3823
@nokts3823 8 ай бұрын
Thanks a lot for making this accessible for people outside the field, for which reading and understanding these papers is quite tough. Thanks to you I'm able to stay slightly more up to date with the crazy quick developments in ML!
@MrNathanShow
@MrNathanShow 8 ай бұрын
The intro with basic linked-up linear layers was so well done and really makes this introduction friendly!
@kashingchoi564
@kashingchoi564 4 ай бұрын
Thank you for bringing me into the world of neural networks. Your videos always make difficult topics easier by interconnecting relevant concepts, which greatly helps me follow your way of thinking. I hope I can learn more from you and apply it toward my life goals some day.
@BooleanDisorder
@BooleanDisorder 6 ай бұрын
I love that this research area develops fast enough that we need dedicated channels to explain new developments.
@franciscote-lortie8680
@franciscote-lortie8680 8 ай бұрын
Incredibly clear explanations, the flow of the video is also really smooth. It’s almost like you’re telling a story. Please keep making content!!
@mohamedalansary2542
@mohamedalansary2542 8 ай бұрын
Clearly explained and very valuable content as always Umar. Thank you!
@ChadieRahimian
@ChadieRahimian 7 ай бұрын
Thanks for the amazing explanation!
@manumaminta6131
@manumaminta6131 8 ай бұрын
Your videos help me (a grad student) really understand difficult, often abstract concepts. Thank you so much... I'll always support your stuff!
@xl0xl0xl0
@xl0xl0xl0 8 ай бұрын
Wow, this was a super clear and on-point explanation. Thank you, Umar.
@odysy5179
@odysy5179 8 ай бұрын
Fantastic explanation!
@AlpcanAras
@AlpcanAras 8 ай бұрын
This is life changing, in my opinion. Thank you for the efforts on the videos!
@luigigiordanoorsini5980
@luigigiordanoorsini5980 8 ай бұрын
I just read the short bio on your channel; I hope it's not offensive to say that now I understand why your excellent English still sounded very familiar to me. In any case, thank you enormously for your contribution: you explained all the theory in a way that, in my view, is extremely clear and above all engaging. Please keep it up; once again a huge thank you and congratulations for your contribution to science.
@umarjamilai
@umarjamilai 8 ай бұрын
Thank you for visiting my channel! I hope to publish more often, even though producing quality content takes weeks of study and preparation. In any case, I hope to see you again soon! Have a good weekend.
@luigigiordanoorsini5980
@luigigiordanoorsini5980 8 ай бұрын
@@umarjamilai You had already gained a subscriber; now you have gained a fan. Hahahaha
@bensimonjoules4402
@bensimonjoules4402 8 ай бұрын
Amazing content, thanks! I'm very excited about the continual learning properties of these networks.
@MirjanOffice
@MirjanOffice 8 ай бұрын
Hello Umar, this video is my best birthday gift I have ever received, thanks a lot :)
@JONK4635
@JONK4635 8 ай бұрын
Extremely clear explanation and content here! Very helpful. I am happy that you came from PoliMI as well :) keep it up!
@Adityagupta-vk9um
@Adityagupta-vk9um 6 ай бұрын
i don't comment on YT but man oh man, this man is love. Too good of an explanation.
@seelowst
@seelowst 8 ай бұрын
Having such a good teacher is wonderful; I wish I could be your student.
@umarjamilai
@umarjamilai 8 ай бұрын
Oh, not at all, thank you for the kind words!
@seelowst
@seelowst 8 ай бұрын
@@umarjamilai That's amazing, you even know Chinese 👍
@umarjamilai
@umarjamilai 8 ай бұрын
@@seelowst I just came back from China; I lived there for 4 years and have now returned to Europe.
@seelowst
@seelowst 8 ай бұрын
@@umarjamilai I've never left my city; I hope to be like you one day 👍
@stacks_7060
@stacks_7060 8 ай бұрын
One of the best math videos I've watched on YouTube
@brandonheaton6197
@brandonheaton6197 7 ай бұрын
Best explanations of splines i have seen. Legit 100%
@zaevi6855
@zaevi6855 8 ай бұрын
Crazy that it took me an hour-long video to understand that it's the control points being trained on the spline graph, versus weights with MLPs and CNNs. Thank you!
@johanvandermerwe7687
@johanvandermerwe7687 8 ай бұрын
I saw this paper on papers with code, and thought to myself I wonder if Umar Jamil will cover this. Thanks for your effort and videos!
@MuhammadrizoMarufjonov-os5fv
@MuhammadrizoMarufjonov-os5fv 8 ай бұрын
Thanks for including prerequisites
@anirudh514
@anirudh514 8 ай бұрын
Thanks for the crystal clear explanation!!
@balachanders6350
@balachanders6350 6 ай бұрын
Great explanation and underrated. Also waiting for an "Implementation of KAN from scratch" video.
@cavidanabdullayev4533
@cavidanabdullayev4533 3 ай бұрын
It is an amazing resource for KANs. Thank you so much 🙂
@andreanegreanu8750
@andreanegreanu8750 8 ай бұрын
Very clear, well explained, top notch!
@emiyake
@emiyake 8 ай бұрын
Thanks!
@vanerk_
@vanerk_ Ай бұрын
Great, as always, thank you sir!
@alfredmanto5487
@alfredmanto5487 8 ай бұрын
Thanks
@MuhammadMuzzamil-ki4he
@MuhammadMuzzamil-ki4he 8 ай бұрын
Thank you for such a great and detailed explanation.
@MasoudAminzadeh
@MasoudAminzadeh 4 ай бұрын
It was fantastic. Keep going, my friend.
@harveyp.1949
@harveyp.1949 5 ай бұрын
Awesome explanation!!!
@enricovompa1876
@enricovompa1876 8 ай бұрын
Thank you for making this video!
@mychan-lu5iv
@mychan-lu5iv 3 ай бұрын
Amazing! Thank you very much for this.
@anmolmittal9
@anmolmittal9 8 ай бұрын
This is really great! Power to you!!🚀
@ansonlau7040
@ansonlau7040 8 ай бұрын
Thank you Jamil, what a cool video
@howardmeng256
@howardmeng256 8 ай бұрын
Amazing video! Thanks a lot !
@lethnisoff
@lethnisoff 8 ай бұрын
Your explanations are the best, thank you so much😘🤗
@ScottzPlaylists
@ScottzPlaylists 8 ай бұрын
High quality explanations.. Thanks.
@paolobarbieri7483
@paolobarbieri7483 7 ай бұрын
Thank you for what you do, you are amazing.
@kmalhotra3096
@kmalhotra3096 8 ай бұрын
Hats off, what an awesome video!!!
@artaasadi9497
@artaasadi9497 8 ай бұрын
that is very useful, informative and interesting! Thanks a lot!
@ozgunsungar9370
@ozgunsungar9370 8 ай бұрын
Awesome, easy to follow even for a person who doesn't know anything :)
@vaadewoyin
@vaadewoyin 8 ай бұрын
Can't wait to watch this, saved! Will comment again when I actually watch it.. 😅
@DiegoSilva-dv9uf
@DiegoSilva-dv9uf 8 ай бұрын
Thanks!
@filippobargagna
@filippobargagna 8 ай бұрын
Thank you so so much for this amazing content.
@ezl100
@ezl100 8 ай бұрын
Thanks Umar, very nice explanation. Just 2 questions: 1 - Does it mean we can specify different knots per edge? 2 - I don't understand how the backpropagation will work. Let's say we calculate the gradient from h1. It will update phi 1,1 and phi 1,2, but how does the learning process move the knots to the desired values?
@jeunjetta
@jeunjetta 8 ай бұрын
I think KAN will be the catalyst for a significant tipping point in science. I want to apply this to power system grids and replace existing dynamic models with ones built from PMU data using KAN.
@zzduo-w2p
@zzduo-w2p 4 ай бұрын
Thank you for your excellent explanations 🤩🤩🤩🤩
@binfos7434
@binfos7434 6 ай бұрын
Amazing! Just wanted to ask if I should expect an implementation of this concept on this channel?
@GUANGYUANPIAO
@GUANGYUANPIAO 8 ай бұрын
awesome explanation
@prathamshah2058
@prathamshah2058 8 ай бұрын
Thank you so much for explaining the paper; it is so easy to understand now. Btw, can you also make a hands-on video with the KAN package developed by MIT, which is based on PyTorch?
@hajaani6417
@hajaani6417 8 ай бұрын
You’re fantastic, mate.
@samadeepsengupta
@samadeepsengupta 8 ай бұрын
Great Content !!
@subhamkundu5043
@subhamkundu5043 8 ай бұрын
Hey @Umar, great content as always. Looking forward to a from-scratch KAN implementation video. Also, I think at 31:01 there is a minor language mistake: it should say quadratic B-spline curve rather than quadratic Bézier curve.
@Lilina3456
@Lilina3456 6 ай бұрын
You are amazing, thank you!
@wolfie6175
@wolfie6175 8 ай бұрын
Good video, quality content.
@arupsankarroy8722
@arupsankarroy8722 8 ай бұрын
Sir, you are great..💙💙
@AD-zj7ck
@AD-zj7ck Ай бұрын
Thanks for the amazing video. Can you make a video explaining and proving the universal approximation theorem?
@ashithen1833
@ashithen1833 6 ай бұрын
Many thanks for this video
@RiteshBhalerao-wn9eo
@RiteshBhalerao-wn9eo 8 ай бұрын
Amazing explanation!
@imanghotbi4651
@imanghotbi4651 6 ай бұрын
Is the explicit form of the learned functions accessible after training the model and applying L1 regularization? Is there a repository with code for it already?
@RomanLi-y9c
@RomanLi-y9c 8 ай бұрын
This is awesome!
@coolkaran1234
@coolkaran1234 8 ай бұрын
You are a savior; without you, mortals like me would be lost in the darkness!!!
@sergiorego6321
@sergiorego6321 8 ай бұрын
Phenomenal! Thank you :)
@mohamedessam3154
@mohamedessam3154 5 ай бұрын
Thanks for the video. For the first feature x0,1 we have 5 functions applied to the same input x0,1; how are the outputs going to be different, given that they use the same input, grid size, degree and knot vector?
@yuningliu6300
@yuningliu6300 7 ай бұрын
At 2:21 you mentioned the documentation. Where can I find it?
@karanjakhar
@karanjakhar Ай бұрын
Nice explanation, thank you. Could you please also implement it, like you did in your other videos?
@pabloe1802
@pabloe1802 8 ай бұрын
An implementation video will be awesome
@JuliusSmith
@JuliusSmith 8 ай бұрын
Excellent video, thanks! At the end, I _really_ wanted to see an illustration of the relatively "non-local" adaptation of MLP weights. Can that be found somewhere?
@danielegiunchi9741
@danielegiunchi9741 8 ай бұрын
brilliant video!
@willpattie581
@willpattie581 8 ай бұрын
One thing I didn't catch: how are the functions tuned? If each function consists of points in space and we move the points around to shape the B-spline, how do we decide how to move the points? It doesn't seem like backprop would work in the same way.
@umarjamilai
@umarjamilai 8 ай бұрын
The same way we move weights for MLPs: we calculate the gradient of the loss function w.r.t. the parameters of these learnable functions and change them in the opposite direction of the gradient. This is how you reduce the loss. We are still doing backpropagation, so nothing has changed on that front compared to MLPs.
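To make this concrete, here is a minimal sketch (not the official pykan implementation; the uniform knot grid, degree, and helper names are assumptions for illustration) of a single KAN edge as a B-spline whose control-point coefficients are ordinary learnable parameters, so backpropagation produces gradients for them just as it does for MLP weights.

```python
# Minimal sketch of one KAN edge phi(x): a B-spline whose control-point
# coefficients are ordinary learnable parameters. Not the official pykan code;
# the uniform knot grid, degree and names here are assumptions for illustration.
import torch
import torch.nn as nn

def bspline_basis(x, knots, degree):
    """Cox-de Boor recursion: B-spline basis values, shape (len(x), n_bases)."""
    # degree-0 bases: indicator of the knot interval that contains x
    b = ((x[..., None] >= knots[:-1]) & (x[..., None] < knots[1:])).float()
    for d in range(1, degree + 1):
        left_den = knots[d:-1] - knots[:-d - 1]
        right_den = knots[d + 1:] - knots[1:-d]
        left = (x[..., None] - knots[:-d - 1]) / torch.where(left_den == 0, torch.ones_like(left_den), left_den)
        right = (knots[d + 1:] - x[..., None]) / torch.where(right_den == 0, torch.ones_like(right_den), right_den)
        b = left * b[..., :-1] + right * b[..., 1:]
    return b

class SplineEdge(nn.Module):
    """One learnable activation function on an edge of a KAN."""
    def __init__(self, grid_size=5, degree=3, x_range=(-1.0, 1.0)):
        super().__init__()
        h = (x_range[1] - x_range[0]) / grid_size
        # uniform knot vector, extended by `degree` knots on each side
        knots = torch.arange(-degree, grid_size + degree + 1) * h + x_range[0]
        self.register_buffer("knots", knots)
        self.degree = degree
        # the trainable parameters of this edge: one coefficient per control point
        self.coef = nn.Parameter(0.1 * torch.randn(grid_size + degree))

    def forward(self, x):
        return bspline_basis(x, self.knots, self.degree) @ self.coef

edge = SplineEdge()
x = torch.linspace(-1.0, 1.0, 32)
loss = ((edge(x) - torch.sin(3 * x)) ** 2).mean()   # toy regression target
loss.backward()
print(edge.coef.grad.shape)  # gradient w.r.t. the control-point coefficients
```

A full KAN layer stacks one such edge function per (input, output) pair, sums the edge outputs at each node, and (in the paper) adds a SiLU residual term; the point of the sketch is only that the control-point coefficients receive gradients like any other weight.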
@dhackmt
@dhackmt 8 ай бұрын
I loved it, sir.
@ntej7927
@ntej7927 6 ай бұрын
Excellent.
@p4ros960
@p4ros960 8 ай бұрын
bruh so good. Keep it up!
@faiqkhan7545
@faiqkhan7545 8 ай бұрын
Umar bhai, you are great.
@girandoconandrea
@girandoconandrea 5 ай бұрын
Hi Umar. First of all, thank you so much for your work; you are an endless source of knowledge in the way you present these topics. I watched this video in full and I have some doubts. At the beginning, when you introduce B-splines, you speak of control points as points that are given as input and for which a curve is created that passes close to them according to the basis functions. Later, when the network is introduced, you say that what gets trained are the functions, and in particular the control points. What does this mean? Aren't the control points the inputs we give to the model, i.e. the data we want to approximate with a function? I would be grateful if you could clarify this concept. Thanks a lot and keep up the good work :)
@umarjamilai
@umarjamilai 5 ай бұрын
The only parameter you define is the number of control points (which determines the granularity, i.e. how "precise" the interpolation should be). The job of a neural network is to "learn" the parameters of a complex function in order to reduce a cost function (the loss function). Which parameters get trained? The positions of the control points, not their number, which is instead decided a priori. It's like when you try to interpolate points using a polynomial: first you choose the degree of the polynomial (how many powers of x), then using some algorithm you "train" the coefficients of each power. I hope it's clearer now.
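As a toy illustration of the polynomial analogy in this reply (a hedged sketch with made-up values, not anything shown in the video): the degree is fixed up front, and only the coefficients are learned by gradient descent, just as a KAN fixes the number of control points and learns their positions.

```python
# Toy example of the analogy above: the degree is chosen a priori,
# only the coefficients are trained (like control-point positions in a KAN).
import torch

x = torch.linspace(-1, 1, 100)
y = torch.sin(3 * x)                       # data we want to approximate
degree = 4                                 # fixed up front, like the number of control points
powers = torch.stack([x ** k for k in range(degree + 1)], dim=1)   # (100, degree + 1)
coeffs = torch.zeros(degree + 1, requires_grad=True)               # learnable parameters

optimizer = torch.optim.SGD([coeffs], lr=0.1)
for _ in range(2000):
    optimizer.zero_grad()
    loss = ((powers @ coeffs - y) ** 2).mean()
    loss.backward()
    optimizer.step()

print(coeffs.detach())  # learned coefficients of 1, x, x^2, ...
```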
@shubhamrandive7684
@shubhamrandive7684 8 ай бұрын
Great explanation. What app do you use to create slides ?
@umarjamilai
@umarjamilai 8 ай бұрын
PowerPoint + a lot a lot a lot a lot a lot of patience.
@satviknaren9681
@satviknaren9681 8 ай бұрын
Please do post more! Please do more videos!
@グワ氏
@グワ氏 8 ай бұрын
There are points in the spline that are continuous but not differentiable, right? How do you handle those?
@daleanfer7449
@daleanfer7449 8 ай бұрын
I was just hoping for exactly this!
@umarjamilai
@umarjamilai 8 ай бұрын
Looking forward to your feedback 😇
@daleanfer7449
@daleanfer7449 8 ай бұрын
❤ Great content. Have you considered making content on inverse RL? ❤
@Kishan31468
@Kishan31468 8 ай бұрын
Thanks man. Next xLSTM please.
@fatemeshams9758
@fatemeshams9758 8 ай бұрын
awesome👍
@routerfordium
@routerfordium 8 ай бұрын
Thank you for the great video! Can you (or anyone) help me understand why you need to introduce the basis function b(x) in the residual activation functions?
@ikramaharchi1042
@ikramaharchi1042 2 ай бұрын
Thank you so much
@plutophy1242
@plutophy1242 8 ай бұрын
this video is so amazing!!!!!!!
@fouziaanjums6475
@fouziaanjums6475 8 ай бұрын
Hi, can you please make a video on multimodal LLMs and fine-tuning them on a custom dataset...
@umarjamilai
@umarjamilai 5 ай бұрын
Check my latest video!
@akramsalim9706
@akramsalim9706 8 ай бұрын
awesome bro.
@AkhoNdlodaka
@AkhoNdlodaka 8 ай бұрын
THANK YOU
@rohitjindal124
@rohitjindal124 8 ай бұрын
Sir, I have been a huge fan of your videos and have watched all of them. I am currently in my second year of BTech and really passionate about learning ML. Sir, if possible, could I work under you? I don't want any certificate or anything, I just want to observe and learn.
@RudraPratapDhara
@RudraPratapDhara 8 ай бұрын
Could you please next explain multimodal LLMs, techniques like LLaVA, LLaVA-Plus, LLaVA-NeXT?
@Patrick-wn6uj
@Patrick-wn6uj 8 ай бұрын
I'm waiting for that day too
@umarjamilai
@umarjamilai 5 ай бұрын
Check my latest video!
@RudraPratapDhara
@RudraPratapDhara 5 ай бұрын
@@umarjamilai Yeah, checking it out. You are, as usual, the G.O.A.T.
@christopherc168
@christopherc168 6 ай бұрын
But what about wavelet Kolmogorov-Arnold networks?
@ai__76
@ai__76 8 ай бұрын
amazing
@bzzzzz1736
@bzzzzz1736 8 ай бұрын
thank you
@MrAloha
@MrAloha 8 ай бұрын
Wow! 🙏