First, I hope you see this comment: we need a video on graph neural networks, and we can't find anyone who breaks the topic down to this degree of simplicity. Thanks for your help, and we appreciate your efforts 🎉
@SerranoAcademy · a month ago
Thank you so much! Great suggestion! I'm actually working on an explanation of GNNs, using an example with a group of friends, where some like sports and some like music. Hoping to get it out pretty soon! If you have any other suggestions, please feel free to throw them in, I'm always looking for good topics to learn and explain. :)
@Alteaima · a month ago
@SerranoAcademy Thank you again, I hope you're doing great.
@revimfadli4666 · 27 days ago
@SerranoAcademy Can you please link it to chemistry GNNs and the modular agents work by Deepak Pathak?
@trantandat2699 · a month ago
One of the best teachers I have seen so far. You turn a complicated thing like the Kolmogorov-Arnold theorem into a very simple explanation.
@SerranoAcademy · a month ago
@trantandat2699 Thank you for your kind words, I'm glad you enjoyed it! :)
@Atlas92936 · 3 days ago
Luis, I have the utmost respect for you. I've been keeping up with your content on various platforms (Coursera, LinkedIn, YouTube), and I really think you're a great human being. I related to your story about starting in mathematics and struggling as a student. Now you are well known in the ML community and make math more accessible for everyone. You are also conscious of social issues, which is an overlooked quality. You're clearly an accomplished hard worker, yet humble. Thank you for the inspiration, always.
@SerranoAcademy · 3 days ago
Thank you for such a kind message. It's a real honor to be part of your learning journey, and to share our desire for a better world. :)
@jamesmcadory1322 · a month ago
This is one of the best educational videos I’ve ever seen. It went at a good pace, had helpful visuals, and I feel like I understand the main idea of this theorem now. Thank you for the video!
@frankl1 · a month ago
Best explanation of KAT and KAN with intuitive drawings, very much appreciated
@znglelegendaire3005 · 2 days ago
You are the best professor I know of in the world right now! Thank you very much for the explanations.
@shivakumarkannan9526 · 4 days ago
Such a brilliant theorem and very clear explanation using diagrams.
@Gamingforfunpeace · 29 days ago
Honestly, this is amazing. Could you please create a 5-part video series of these visual explanations for the Langlands proof that just came out (you know which one)? You have a gift for mathematical storytelling; I absolutely loved the visualizations. That is what math is about: the elegance of visual storytelling. Would love to see your visualization of that proof.
@sahil_shrma · a month ago
Wow! The everything-in-two-layers thing and the summation part seem fantastic. Thank you, Luis! 💚
@SerranoAcademy · a month ago
@sahil_shrma Thank you so much, I'm glad you liked it! I was pretty amazed too when I first saw that the theorem implies two-layer universality. :)
@BananthahallyVijay · 23 days ago
🎉🎉🎉🎉 The most lucid video I've seen on why, in theory, you need only one hidden layer in a NN. A big thanks to the content creator. ❤
@jasontlho · 28 days ago
beautiful explanation
@sohaibahmed9165 · 7 days ago
Thanks bro! You made it really simple. Highly recommended ❤
@cathleenparsons3435 · 26 days ago
This is excellent! Thanks so much, really helpful
@junborao8910 · 5 days ago
Really helpful video. I really appreciate it.
@sunilkumarvengalil2305 · 3 days ago
Nice explanation! Thank you!
@Sars78 · a month ago
This IS the most important theorem for appreciating the power of DNNs in general.
@neelkamal3357 · a month ago
crystal clear as always
@Harshtherocking · a month ago
I tried reading this paper back in June 2024 and couldn't understand much of it. Thanks, Luis, for the amazing explanation.
@RasitEvduzen · 21 days ago
Thanks for your beautiful explanation. I think the next video should be about automatic differentiation.
@behrampatel3563 · a month ago
Luis, I wish you health and happiness so you can continue to educate those of us who are way past our academic prime. For many reasons I never had the luxury of studying engineering. Khan Academy, 3blue1brown, and you made education accessible and approachable. Thank you; live long and prosper, my friend. ❤
@djsocialanxiety1664 · a month ago
awesome explanation
@SerranoAcademy · a month ago
Thank you, I'm glad you like it!
@djsocialanxiety1664 · a month ago
@SerranoAcademy Any chance of a video that explains the training of KANs?
@SerranoAcademy · a month ago
@djsocialanxiety1664 This video has the architecture: www.youtube.com/watch?v=myFtp58U In there I talk a little bit about the training, which is mostly finding the right coefficients of the B-splines using the usual gradient descent. AFAIK the training is very analogous to a regular neural network, which is why I only mention it briefly, but if there's something more, I may make another video. If you know of any nuances in the training that could be explored, please let me know. Thanks!
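For anyone curious, here is a minimal sketch of the training idea described above. This is an illustration under stated assumptions, not code from the video or the KAN paper: it fits a single KAN edge function phi(x) = sum_k c_k B_k(x), where the cubic B-spline basis functions B_k are fixed and only the coefficients c_k are learned by gradient descent. The target function, knot vector, learning rate, and step count are all arbitrary demo choices.

```python
# Minimal sketch: fit one KAN "edge" function phi(x) = sum_k c_k * B_k(x)
# by gradient descent on the B-spline coefficients c_k (the basis stays fixed).
import numpy as np
from scipy.interpolate import BSpline  # design_matrix needs scipy >= 1.8

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))   # sample inputs in [0, 1]
y = np.sin(2 * np.pi * x)                 # arbitrary 1-D target to learn

k = 3                                                              # cubic splines
t = np.concatenate([[0.0] * k, np.linspace(0, 1, 8), [1.0] * k])   # clamped knots
B = BSpline.design_matrix(x, t, k).toarray()                       # (200, 10) basis values

# The model B @ c is linear in c, so the MSE gradient is (2/n) * B^T (B @ c - y).
c = np.zeros(B.shape[1])
for _ in range(2000):
    resid = B @ c - y
    c -= 0.5 * (2.0 / len(x)) * (B.T @ resid)   # plain gradient-descent step

print("final MSE:", np.mean((B @ c - y) ** 2))  # loss should shrink toward ~0
```

In a full KAN, every edge of the network carries its own spline like this, and all coefficients are updated jointly by backpropagation, which is why the training feels analogous to a regular neural network.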
@hayksergoyan8914 · a month ago
Nice job, thanks. Have you checked how this works for predicting time-series data, compared to LSTM or ARIMA?
@alivaziri7843 · 11 days ago
Thanks for the video! Are the slides freely available?
@SerranoAcademy · 8 days ago
Thanks! Not yet, but I'll message here when they're out.
@eggs-istangel4232 · a month ago
Not that I want to look like the "oh, I think there is a mistake" kid, but at 8:33 shouldn't the first lowercase phi function (the one applied to x_2) be \phi_{1,2}(x_2) instead of \phi_{2,1}(x_2)?
@SerranoAcademy · a month ago
Thank you so much! Yes, you're absolutely right. And I think also in the first term, with \Phi_1, they should be \phi_{1,1}(x_1) + \phi_{1,2}(x_2). I changed it so many times, and it was so hard to get the indices right, lol...
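For reference, with the indexing convention above (first subscript = outer term, second subscript = input variable), the two-variable Kolmogorov-Arnold representation has 2n + 1 = 5 outer terms:

```latex
f(x_1, x_2) = \sum_{q=1}^{5} \Phi_q\big( \phi_{q,1}(x_1) + \phi_{q,2}(x_2) \big)
```

so the first term reads \Phi_1(\phi_{1,1}(x_1) + \phi_{1,2}(x_2)), matching the corrected indices. (This is the standard form of the theorem; the exact on-screen formula at 8:33 may group things slightly differently.)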
@Pedritox0953 · a month ago
Great video! Peace out
@csabaczcsomps7655 · a month ago
Amazing.
@SohaKasra · 24 days ago
That was so fluent, as always ❤
@akirakato1293 · 29 days ago
So essentially you can train non-linear regression or decision-boundary models without needing to expand the feature space, e.g. by appending an x1*x2 column to the training set before fitting? I can see that it's computationally better for finding an approximate solution and naturally overfits less, but how does the computational complexity behave when the accuracy requirement is extremely high?
@GerardoGutierrez-io7ss · 27 days ago
Where can I see the proof of this theorem?😮
@jimcallahan448 · 26 days ago
What about log(x) + log(y)? Of course, because you mentioned Kolmogorov, I assumed you were talking about probabilities.
@SerranoAcademy · 25 days ago
@jimcallahan448 That's a good example. log(xy) is one that looks entangled, but it can be written as log(x) + log(y), so it's separable (i.e., a one-layer KA network).
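Written out in the notation of the theorem, this example needs only a single term (a sketch, using the same symbols as the general formula above):

```latex
\log(x_1 x_2) = \Phi\big( \phi_1(x_1) + \phi_2(x_2) \big),
\qquad \Phi(u) = u, \quad \phi_1 = \phi_2 = \log
```

The outer function is just the identity, which is what makes the function separable rather than genuinely entangled.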
@colonelmoustache · a month ago
This was so good, but I feel like there should be a nice matrix way to write this. Time to search deeper, I guess. Great topic, btw.
@SerranoAcademy · a month ago
Thanks for the suggestion! They do have a matrix with the capital \Phi's, multiplied by another one with the lowercase \phi's, where multiplication is instead composition of functions. I was going to add it here, but the video started getting too long, so I had to cut it; most other videos on the topic (plus the paper) have it.
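For reference, a sketch of that matrix notation, following the convention used in the KAN paper (where "applying" a matrix of univariate functions to a vector means evaluating each entry on its input and summing along rows):

```latex
f(\mathbf{x}) = \Phi_{\text{out}} \circ \Phi_{\text{in}}(\mathbf{x}),
\qquad
\Phi_{\text{in}} =
\begin{pmatrix}
\phi_{1,1}(\cdot) & \cdots & \phi_{1,n}(\cdot) \\
\vdots & \ddots & \vdots \\
\phi_{2n+1,1}(\cdot) & \cdots & \phi_{2n+1,n}(\cdot)
\end{pmatrix},
\qquad
\Phi_{\text{out}} =
\begin{pmatrix}
\Phi_1(\cdot) & \cdots & \Phi_{2n+1}(\cdot)
\end{pmatrix}
```

Stacking more such layers of function matrices is exactly what turns the two-layer theorem into the deeper KAN architecture.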
@brandonprescott5525 · a month ago
Reminds me of node-based graphics software like Houdini or TouchDesigner.
@AI_ML_DL_LLM · 29 days ago
Great video! You will definitely go to heaven; see you there, but not too soon :)
@tomoki-v6o · a month ago
I have an engineering degree but no PhD, and I'm an ML enthusiast. How can I join research in this case? I don't want to work as a data scientist, because I like playing with math.
@moonwatcher2001 · 28 days ago
❤
@sufalt123 · a month ago
so coooooool
@tigu511 · a month ago
Oh god!... Is the Spanish translation from an AI? It's really bad.