The Kolmogorov-Arnold Theorem

12,173 views

Serrano.Academy

1 day ago

Comments: 50
@Alteaima
@Alteaima 1 month ago
First, I hope you see this comment. We need a video on graph neural networks, and we can't find anyone who breaks the topic down to this degree of simplicity. So thanks for your help, and we appreciate your efforts 🎉
@SerranoAcademy
@SerranoAcademy 1 month ago
Thank you so much! Great suggestion! I'm actually working on an explanation of GNNs, using an example with a group of friends, where some like sports and some like music. Hoping to get it out pretty soon! If you have any other suggestions, please feel free to throw them in; I'm always looking for good topics to learn and explain. :)
@Alteaima
@Alteaima 1 month ago
@@SerranoAcademy Thank you again, I hope you're doing great.
@revimfadli4666
@revimfadli4666 27 days ago
@@SerranoAcademy Can you please link it to chemistry GNNs and modular agents by Deepak Pathak?
@trantandat2699
@trantandat2699 1 month ago
One of the best teachers I have seen so far. You turn a complicated thing like the Kolmogorov-Arnold Theorem into a very simple explanation.
@SerranoAcademy
@SerranoAcademy 1 month ago
@@trantandat2699 thank you for your kind words, I’m glad you enjoyed it! :)
@Atlas92936
@Atlas92936 3 days ago
Luis, I have the utmost respect for you. I've been keeping up with your content on various platforms (Coursera, LinkedIn, YouTube), and I really think you're a great human being. I related to your story about starting in mathematics and struggling as a student. Now you are well known in the ML community and make math more accessible for everyone. You are also conscious of social issues, which is an overlooked quality. You're clearly an accomplished hard worker, yet humble. Thank you for the inspiration, always.
@SerranoAcademy
@SerranoAcademy 3 days ago
Thank you for such a kind message. It's a real honor to be part of your learning journey, and to share our desire for a better world. :)
@jamesmcadory1322
@jamesmcadory1322 1 month ago
This is one of the best educational videos I’ve ever seen. It went at a good pace, had helpful visuals, and I feel like I understand the main idea of this theorem now. Thank you for the video!
@frankl1
@frankl1 1 month ago
Best explanation of KAT and KAN with intuitive drawings, very much appreciated
@znglelegendaire3005
@znglelegendaire3005 2 days ago
You are the best professor I know in the world right now! Thank you very much for the explanations.
@shivakumarkannan9526
@shivakumarkannan9526 4 days ago
Such a brilliant theorem and very clear explanation using diagrams.
@Gamingforfunpeace
@Gamingforfunpeace 29 days ago
Honestly, this is amazing. Could you please create a 5-part video series of these visual explanations on the Langlands proof that just came out (you know which one)? You have a gift for mathematical storytelling; I absolutely loved the visualizations. That is what math is about: the elegance of visual storytelling. Would love to see your visualization of that proof.
@sahil_shrma
@sahil_shrma 1 month ago
Wow! The everything-in-two-layers thing and the summation part seem fantastic. Thank you, Luis 💚
@SerranoAcademy
@SerranoAcademy 1 month ago
@@sahil_shrma thank you so much, I’m glad you liked it! I was pretty amazed too when I first saw that the theorem implies the two-layer universality. :)
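For reference, this is the two-layer form the theorem guarantees; the standard statement of the Kolmogorov-Arnold representation says that every continuous f on [0,1]^n decomposes into sums and compositions of univariate functions:

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

The inner functions \phi_{q,p} and outer functions \Phi_q are all continuous functions of a single variable; the only multivariate operation is addition.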
@BananthahallyVijay
@BananthahallyVijay 23 days ago
🎉🎉🎉🎉 The most lucid video I've seen on why, in theory, you need only one hidden layer in a NN. A big thanks to the content creator. ❤
@jasontlho
@jasontlho 28 days ago
Beautiful explanation.
@sohaibahmed9165
@sohaibahmed9165 7 days ago
Thanks bro! You made it really simple. Highly recommended ❤
@cathleenparsons3435
@cathleenparsons3435 26 days ago
This is excellent! Thanks so much, really helpful
@junborao8910
@junborao8910 5 days ago
Really helpful video. I really appreciate it.
@sunilkumarvengalil2305
@sunilkumarvengalil2305 3 days ago
Nice explanation! Thank you!
@Sars78
@Sars78 1 month ago
This IS the most important theorem for appreciating the power of DNNs in general.
@neelkamal3357
@neelkamal3357 1 month ago
Crystal clear, as always.
@Harshtherocking
@Harshtherocking 1 month ago
I tried reading this paper in June 2024 and couldn't understand much of it. Thanks, Luis, for the amazing explanation.
@RasitEvduzen
@RasitEvduzen 21 days ago
Thanks for your beautiful explanation. I think the next video should be about Automatic Differentiation.
@behrampatel3563
@behrampatel3563 1 month ago
Luis, I wish you health and happiness so you can continue to educate those of us who are way past our academic prime. For many reasons I never had the luxury of studying engineering. Khan Academy, 3Blue1Brown, and you made education accessible and approachable. Thank you; live long and prosper, my friend. ❤
@djsocialanxiety1664
@djsocialanxiety1664 1 month ago
Awesome explanation.
@SerranoAcademy
@SerranoAcademy 1 month ago
Thank you, I'm glad you like it!
@djsocialanxiety1664
@djsocialanxiety1664 1 month ago
@@SerranoAcademy Any chance of a video that explains the training of KANs?
@SerranoAcademy
@SerranoAcademy 1 month ago
@@djsocialanxiety1664 This video has the architecture: www.youtube.com/watch?v=myFtp58U There I talk a little bit about the training, which is mostly finding the right coefficients of the B-splines using the usual gradient descent. AFAIK the training is very analogous to a regular neural network, which is why I only mention it briefly, but if there's something more, I may make another video. If you know of any nuances in the training that can be explored, please let me know. Thanks!
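A minimal sketch of that idea, assuming PyTorch (hypothetical toy code, not from the video): a single learnable edge of a KAN, i.e. one univariate function parameterized as a degree-1 B-spline (piecewise-linear "hat" basis), fit by ordinary gradient descent.

```python
import torch

class SplineEdge1D(torch.nn.Module):
    """One KAN edge: a univariate function phi(x) written as a weighted sum
    of degree-1 B-spline ("hat") basis functions on a fixed uniform grid.
    The trainable parameters are just the basis coefficients."""
    def __init__(self, n_knots: int = 16, x_min: float = -1.0, x_max: float = 1.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(x_min, x_max, n_knots))
        self.coef = torch.nn.Parameter(torch.zeros(n_knots))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hat basis: B_k(x) = max(0, 1 - |x - t_k| / h), one bump per knot.
        h = self.knots[1] - self.knots[0]
        basis = torch.clamp(1 - (x.unsqueeze(-1) - self.knots).abs() / h, min=0)
        return basis @ self.coef  # phi(x) = sum_k coef_k * B_k(x)

# Toy fit: learn phi(x) ~ sin(pi x), trained exactly like a regular NN.
torch.manual_seed(0)
edge = SplineEdge1D()
opt = torch.optim.Adam(edge.parameters(), lr=1e-2)
x = torch.linspace(-1, 1, 256)
y = torch.sin(torch.pi * x)
for _ in range(2000):
    opt.zero_grad()
    loss = ((edge(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.6f}")
```

A full KAN stacks many such edges (the paper uses cubic B-splines plus a residual base function), but the optimization loop is the same.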
@hayksergoyan8914
@hayksergoyan8914 1 month ago
Nice job, thanks. Have you checked how this works for predicting time-series data, compared to LSTM or ARIMA?
@alivaziri7843
@alivaziri7843 11 days ago
Thanks for the video! Are the slides available freely?
@SerranoAcademy
@SerranoAcademy 8 days ago
Thanks! Not yet, but I'll message here when they're out.
@eggs-istangel4232
@eggs-istangel4232 1 month ago
Not that I want to look like the "oh, I think there is a mistake" kid, but at 8:33, shouldn't the first lower phi function (the one applied to x_2) be \phi_{1,2}(x_2) instead of \phi_{2,1}(x_2)?
@SerranoAcademy
@SerranoAcademy 1 month ago
Thank you so much! Yes, you're absolutely right. And I think also in the first term, with \Phi_1, they should be \phi_{1,1}(x_1) + \phi_{1,2}(x_2). I changed it so many times, and it was so hard to get the indices right, lol...
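Under the standard indexing convention (outer index q first, variable index p second), the corrected first term reads:

```latex
\Phi_1\!\bigl( \phi_{1,1}(x_1) + \phi_{1,2}(x_2) \bigr)
```

so every inner function feeding the q-th outer function \Phi_q carries q as its first subscript.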
@Pedritox0953
@Pedritox0953 1 month ago
Great video! Peace out
@csabaczcsomps7655
@csabaczcsomps7655 1 month ago
Amazing.
@SohaKasra
@SohaKasra 24 days ago
That was so fluent, as always ❤
@akirakato1293
@akirakato1293 29 days ago
So essentially you can train non-linear regression or decision-boundary models without needing to expand the feature space by, for example, appending an x1*x2 column to the training set before fitting? I can see that it's computationally better for finding an approximate solution and naturally less prone to overfitting, but how does the computational complexity behave when the accuracy requirement is extremely high?
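As a concrete illustration of why no engineered x1*x2 feature is needed (a standard algebraic identity, not from the thread): the product itself already has an exact two-layer Kolmogorov-Arnold form,

```latex
x_1 x_2 \;=\; \underbrace{\tfrac{1}{2}\,u_1^{\,2}}_{\Phi_1(u_1)} \;+\; \underbrace{\bigl(-\tfrac{1}{2}\,u_2\bigr)}_{\Phi_2(u_2)},
\qquad u_1 = x_1 + x_2, \quad u_2 = x_1^2 + x_2^2,
```

where each inner sum uses only univariate functions (the identity for u_1, the square for u_2).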
@GerardoGutierrez-io7ss
@GerardoGutierrez-io7ss 27 days ago
Where can I see the proof of this theorem? 😮
@jimcallahan448
@jimcallahan448 26 days ago
What about log(x) + log(y)? Of course, because you mentioned Kolmogorov, I assumed you were talking about probabilities.
@SerranoAcademy
@SerranoAcademy 25 days ago
@@jimcallahan448 That's a good example. log(xy) is one that looks entangled, but it can be written as log(x) + log(y), so it's separable (i.e., a one-layer KA network).
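In the theorem's notation, this is the special case where the outer function is just the identity:

```latex
f(x, y) = \log(xy) = \Phi\bigl(\phi_1(x) + \phi_2(y)\bigr),
\qquad \Phi(u) = u, \quad \phi_1 = \phi_2 = \log .
```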
@colonelmoustache
@colonelmoustache 1 month ago
This was so good, but I feel like there should be a nice matrix way to write this. Time to search deeper, I guess. Great topic, btw.
@SerranoAcademy
@SerranoAcademy 1 month ago
Thanks for the suggestion! They do have a matrix with the capital \Phi's, multiplied by another one with the lowercase \phi's, where multiplication is instead composition of functions. I was going to add it here, but it started getting too long, so I had to cut it; most other videos on the topic (plus the paper) have it.
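A sketch of that notation for two inputs (so 2n+1 = 5 rows), assuming the convention from the KAN paper, where each matrix entry is a univariate function and "multiplication" means applying the entries and summing:

```latex
f(x_1, x_2) \;=\;
\begin{pmatrix} \Phi_1 & \Phi_2 & \cdots & \Phi_5 \end{pmatrix}
\circ
\begin{pmatrix}
  \phi_{1,1} & \phi_{1,2} \\
  \vdots     & \vdots     \\
  \phi_{5,1} & \phi_{5,2}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
```

Row q of the inner matrix produces u_q = \phi_{q,1}(x_1) + \phi_{q,2}(x_2), and the outer row vector applies \Phi_q to each u_q and sums the results.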
@brandonprescott5525
@brandonprescott5525 1 month ago
Reminds me of node-based graphics software like Houdini or TouchDesigner.
@AI_ML_DL_LLM
@AI_ML_DL_LLM 29 days ago
Great video! You will definitely go to heaven; see you there, but not soon :)
@tomoki-v6o
@tomoki-v6o 1 month ago
I have an engineering degree but no PhD, and I'm an ML enthusiast. How can I join research in this case? I don't want to work as a data scientist, because I like playing with math.
@moonwatcher2001
@moonwatcher2001 28 days ago
@sufalt123
@sufalt123 1 month ago
so coooooool
@tigu511
@tigu511 1 month ago
Oh god!... Is the Spanish translation from an AI? It's really bad.