Reproducing Kernels and Functionals (Theory of Machine Learning)

  2,983 views

ThatMathThing

A day ago

Comments: 20
@positivobro8544
@positivobro8544 6 months ago
Ayo the legend keeps on giving
@avigailhandel8897
@avigailhandel8897 6 months ago
I love your videos! I'm the person who posted that I will be starting grad school in the fall at the age of 55. I registered for classes at Montclair State University. Combinatorics, numerical analysis, linear algebra. And I'll be a TA. I am looking forward to being a graduate student in mathematics!
@JoelRosenfeld
@JoelRosenfeld 6 months ago
That’s awesome! It sounds like you have a fun schedule too. Congrats and let me know how it goes!
@ethandills4716
@ethandills4716 6 months ago
0:44 "if we have the time... and space" lol
@idiosinkrazijske.rutine
@idiosinkrazijske.rutine 6 months ago
The third book at @0:22 is "Meshfree Approximation Methods with MATLAB" by Gregory Fasshauer. A good resource for Radial Basis Functions and similar topics.
@JoelRosenfeld
@JoelRosenfeld 6 months ago
Yeah, it really is. I find Fasshauer does a great job of explaining the topic. Wendland goes into more of the theory, if you want to go deeper. I met Fasshauer at a conference last year. Great guy.
@richardgroff3807
@richardgroff3807 6 months ago
I am missing something important at around 13:39 in the video. I understand the line h_i(t) = ⟨h_i, K(·, t)⟩, which uses the reproducing kernel to evaluate h_i at time t. The inner product must be the inner product for H for this to work. The next line looks like the standard property of the inner product, i.e. the complex conjugate of the inner product with entries swapped. What confuses me is the next line, which seems to expand out the definition of the inner product, but rather than the inner product for H, it looks like the inner product for L^2, and I can't figure out why that is. Was there an adjoint lurking about somewhere (rather than a property of the inner product)? How do I see it?
@JoelRosenfeld
@JoelRosenfeld 6 months ago
It’s not an inner product. Each h_i represents some functional through the inner product. That expansion is the functional that h_i represents being applied to the kernel function.
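A sketch of the step being discussed, with notation assumed from the thread rather than taken directly from the video: if L_i is a bounded functional on the RKHS H with kernel K, and h_i is its Riesz representer (so L_i(f) = ⟨f, h_i⟩_H), then

```latex
h_i(t) = \langle h_i, K(\cdot, t)\rangle_H
       = \overline{\langle K(\cdot, t),\, h_i \rangle_H}
       = \overline{L_i\!\left(K(\cdot, t)\right)}
```

so the final expression is the functional L_i applied to the kernel section K(·, t). If L_i happens to be an integral-type functional, this looks like an L^2 inner product, even though no L^2 structure on H was invoked.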
@richardgroff3807
@richardgroff3807 6 months ago
​@@JoelRosenfeld Thanks for your response! It finally sunk in. I didn't understand why you were swapping the entries, but it was to make the entries of the inner product match what was used with the Riesz representation theorem a few lines above. In the next part of the video you do a numerical example where you generate a set of basis functions that are representers of the moment functionals, and then project the function f onto those basis functions. If I understand correctly, the inner product used in your normal equations is the inner product for H, associated with the RBF kernel (defined at 9:00)? Is there an intuitive relationship between the best approximation using the norm associated with that inner product compared to, say, L^2? (I haven't watched your best approximation video; perhaps that question is answered there?)
@JoelRosenfeld
@JoelRosenfeld 6 months ago
The "best" approximation depends on the selection of the inner product and Hilbert space. In the best approximation video for L^2 we started with a basis, polynomials, then we selected a space where they reside and computed the weights for the best approximation in that setting. If we change the Hilbert space, then we need to find new weights. However, there is a difficulty that can arise. Perhaps it's not so easy to actually compute the inner product for that basis in that particular Hilbert space. This approach avoids that, because we take the measurements and THEN select the basis. So we end up with these h's rather than polynomials. The advantage here is that we never actually have to compute an inner product; we just leverage the Riesz theorem to dodge around it. What I'm setting up here is the Representer Theorem, which we will get to down the line (maybe 5 videos from now?). There it turns out that the functions you obtain from the Riesz theorem are the best basis functions to choose for a regularized regression problem. This was a result of Wahba back in the (80s?)
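A minimal numerical sketch of the construction described in this exchange, under assumed choices (a Gaussian RBF kernel on [0, 1], moment functionals, and Riemann-sum quadrature) — this is not the video's actual code, just one way to realize the "measure first, then build the basis" idea:

```python
import numpy as np

# Gaussian RBF kernel (assumed choice of kernel and bandwidth).
sigma = 0.2
K = lambda x, y: np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# Quadrature grid on [0, 1] for approximating the integrals in the functionals.
xs = np.linspace(0.0, 1.0, 400)
dx = xs[1] - xs[0]

def L(i, g):
    """Apply the i-th moment functional L_i(g) = integral of x^i g(x) dx (Riemann sum)."""
    return np.sum(xs ** i * g(xs)) * dx

def h(i, t):
    """Riesz representer of L_i: h_i(t) = L_i(K(., t)), the functional hit against the kernel."""
    return np.sum(xs[:, None] ** i * K(xs[:, None], t), axis=0) * dx

n = 5                                  # use moments of order 0..4 as measurements
f = lambda x: np.sin(2 * np.pi * x)    # function to approximate

# Gram matrix G_ij = <h_i, h_j>_H = L_i(h_j): the H inner product is never
# computed directly; the Riesz theorem lets the functional stand in for it.
G = np.array([[L(i, lambda x: h(j, x)) for j in range(n)] for i in range(n)])
b = np.array([L(i, f) for i in range(n)])        # measurements of f
w = np.linalg.solve(G, b)                        # normal equations

# Projection of f onto span{h_0, ..., h_{n-1}} in H.
f_hat = lambda t: sum(w[j] * h(j, np.atleast_1d(t)) for j in range(n))

# By construction, the projection matches every measured moment of f.
print(np.allclose([L(i, f_hat) for i in range(n)], b, atol=1e-6))
```

The point mirrored from the comment above: the basis functions h_i come from the measurements themselves, so the Gram entries are just functionals applied to kernel sections — no explicit H inner product ever needs to be evaluated.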
@robn2067
@robn2067 6 months ago
Very interesting video, but can you talk a little slower? Often it is not clear which words you are pronouncing, in particular for the theorems.
@samueldeandrade8535
@samueldeandrade8535 6 months ago
Man, just change the playback settings of the video to watch it slower.
@JoelRosenfeld
@JoelRosenfeld 6 months ago
Sorry if I talk too fast. I’ll work on it
@samueldeandrade8535
@samueldeandrade8535 6 months ago
@@JoelRosenfeld you don't. You talk just fine.
@HEHEHEIAMASUPAHSTARSAGA
@HEHEHEIAMASUPAHSTARSAGA 6 months ago
@@JoelRosenfeld Putting real subtitles on your videos would solve the issue. It's pretty easy these days: just put a transcript in and YouTube will match up the times for you. It's probably even quicker to start with the automatic transcription and just fix the errors.
@JoelRosenfeld
@JoelRosenfeld 6 months ago
@@HEHEHEIAMASUPAHSTARSAGA In the past, the transcripts that YouTube produced were pretty bad. Premiere has a new AI feature that is actually pretty good at catching math terminology. I'll give it some thought. It just takes more time.
@jfndfiunskj5299
@jfndfiunskj5299 2 months ago
You really need to improve your communication skills. This is a terrible exposition.
@JoelRosenfeld
@JoelRosenfeld 2 months ago
@@jfndfiunskj5299 I’m always open to input. What could I change to improve it?