I do not know you, buddy. I saw your profile online and came here. Wherever you are, please rest in peace. God bless.
@ths3100 · 1 year ago
A brilliant young mathematician, gone too soon. RIP.
@darioramirez-pico2041 · 1 year ago
RIP 🕊️
@pilleater · 1 year ago
RIP
@GregHuffman1987 · 1 year ago
♤♤♤ ♡♡◇
@earnestinenelson2777 · 1 year ago
RIH King Baddoo🙏
@normac5465 · 1 year ago
You are with God in heaven 🙏
@phillustrator · 1 year ago
RIP man
@fatunsinmodupe357 · 2 years ago
Good day, Dr. Peter Baddoo. I wish to have your email address.
@aviskardhaval818 · 2 years ago
I want you to make videos on how to incorporate viscous effects and separation effects in potential flow.
@orionxtc1119 · 1 year ago
He died playing basketball a week ago
@aviskardhaval818 · 2 years ago
Really very nice, Peter.
@jonathansaunders7665 · 3 years ago
Very interesting stuff and well explained! Just a small question: if a mapping is linear in both the first and the second arguments, does that make it bilinear?
@peterj.baddoo3813 · 3 years ago
That's a very astute point; the standard linear kernel used in DMD (e.g. 13:08 and 15:30) is bilinear although more generic kernels such as Gaussian and polynomial are not!
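For readers who want to verify this point, here is a minimal NumPy sketch (not code from the video): the linear kernel k(x, y) = xᵀy satisfies the bilinearity identity, while a Gaussian kernel does not.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2, y = rng.standard_normal((3, 4))
a, b = 2.0, -3.0

# Linear kernel: k(x, y) = x . y. Bilinearity means
# k(a*x1 + b*x2, y) = a*k(x1, y) + b*k(x2, y) for all inputs.
lin = lambda u, v: u @ v
lhs = lin(a * x1 + b * x2, y)
rhs = a * lin(x1, y) + b * lin(x2, y)
print(np.isclose(lhs, rhs))  # True: the linear kernel is bilinear

# Gaussian kernel: the same identity generally fails.
gauss = lambda u, v: np.exp(-np.sum((u - v) ** 2))
lhs_g = gauss(a * x1 + b * x2, y)
rhs_g = a * gauss(x1, y) + b * gauss(x2, y)
print(lhs_g, rhs_g)  # different values in general
```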
@souvikdas7773 · 3 years ago
How do you eliminate the redundant samples and obtain a sparse representation if the underlying space (here, the product space X x Y from which the sample points (x, y) are collected) is high-dimensional (reasonably high)?
@peterj.baddoo3813 · 3 years ago
In that case, you can use linear PCA as opposed to kernel PCA (which is approximately what we're doing here, except without orthogonalisation). Indeed, you can combine linear PCA and kernel PCA if the state dimension is large in both the original and kernel spaces. Let me know if that doesn't answer your question!
@souvikdas7773 · 3 years ago
@@peterj.baddoo3813 Thank you. It would be nice if you could share a reference where these cases have been dealt with.
@peterj.baddoo3813 · 3 years ago
@@souvikdas7773 Here's the original Kernel Recursive Least Squares paper that explicates the connection between dictionary learning and kernel PCA: ieeexplore.ieee.org/document/1315946 The Wikipedia page on PCA is quite good, including the subsection on nonlinear PCA: en.wikipedia.org/wiki/Principal_component_analysis Also, our paper is here: arxiv.org/abs/2106.01510
@souvikdas7773 · 3 years ago
@@peterj.baddoo3813 Thanks again.
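As a supplement to the references above, a minimal kernel-PCA sketch may help: compute the Gram matrix, double-centre it, and take its leading eigenvectors. This is illustrative only; the polynomial kernel and dimensions below are arbitrary choices, not the LANDO code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))  # 50 samples, 3 features

# Example nonlinear kernel (an arbitrary choice here; any
# positive-semidefinite kernel works).
K = (1.0 + X @ X.T) ** 2

# Double-centre the Gram matrix so the implicit feature map has zero mean.
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J

# Kernel principal components = leading eigenvectors of the centred Gram matrix.
evals, evecs = np.linalg.eigh(Kc)
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
scores = evecs[:, :2] * np.sqrt(np.maximum(evals[:2], 0))  # 2-D embedding
print(scores.shape)  # (50, 2)
```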
@imicoolno1 · 3 years ago
How do you choose nu at 5:59?
@peterj.baddoo3813 · 3 years ago
nu represents the sparsity of the dictionary so larger nu implies a sparser dictionary. If you want a generalisable and fast model then larger nu is better, but if it's too large then the model can be inaccurate. Another view of nu is that it functions as a regulariser for the model, so increasing nu can also prevent overfitting. The algorithm is fast enough that you can try a few different nu's and pick the best one; at present, there is not an optimal way to choose nu a priori.
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 Thanks, that makes sense. Are there any problems that can come with using the L2^2 norm as a distance metric in this context? I can see why you've used it to get a direct solution, but could $\pi_t$ ever be sparse or something like that?
@peterj.baddoo3813 · 3 years ago
@@imicoolno1 Yes, great questions. One philosophical issue is that the L2 norm doesn't have a clear physical interpretation in the feature space induced by the kernel. In the original feature space, L2^2 usually corresponds to a measure of energy. So other norms may be more meaningful in certain applications; you could certainly adapt this work to look for a sparse $\pi_t$, but I don't know if the same updating equations will work.
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 Thank you very much, Peter! Really fascinating work 🙂
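For context on the role of nu discussed above, here is an illustrative sketch of the almost-linear-dependence test from the Kernel Recursive Least Squares paper linked earlier, which is the sense in which larger nu yields a sparser dictionary. The function names and parameter choices are my own, not the LANDO implementation.

```python
import numpy as np

def kernel(A, B, ell=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ell ** 2))

def build_dictionary(samples, nu):
    """Admit a sample only if the squared residual of projecting its
    feature-space image onto the current dictionary exceeds nu."""
    D = samples[:1]
    Kd = kernel(D, D)
    for x in samples[1:]:
        kv = kernel(D, x[None, :])[:, 0]
        delta = 1.0 - kv @ np.linalg.solve(Kd, kv)  # k(x, x) = 1 for Gaussian
        if delta > nu:                              # novel enough: admit
            D = np.vstack([D, x])
            Kd = kernel(D, D)
    return D

rng = np.random.default_rng(2)
samples = rng.standard_normal((100, 2))
dense = build_dictionary(samples, nu=1e-6)
sparse = build_dictionary(samples, nu=0.5)
print(len(dense), len(sparse))  # larger nu gives a sparser dictionary
```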
@imicoolno1 · 3 years ago
Would something like a Fourier basis work as a kernel?
@peterj.baddoo3813 · 3 years ago
Absolutely, this is the idea behind the famous "Random Fourier Features"! people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 Thanks!
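A small sketch of the Random Fourier Features idea from that paper: sample frequencies from the Fourier transform of a Gaussian kernel, and the inner product of the resulting cosine features approximates the kernel. The dimensions and length scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
d, D = 3, 10000   # input dimension, number of random features
ell = 1.0         # Gaussian kernel length scale

# Frequencies ~ Fourier transform of the Gaussian kernel; random phases.
W = rng.standard_normal((D, d)) / ell
b = rng.uniform(0, 2 * np.pi, size=D)
z = lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal((2, d))
exact = np.exp(-np.sum((x - y) ** 2) / (2 * ell ** 2))
approx = z(x) @ z(y)
print(exact, approx)  # approx converges to exact as D grows
```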
@claudiocanalesd.6862 · 3 years ago
Great videos!
@alialedarvish4192 · 3 years ago
Thank you for your excellent presentation
@NoNTr1v1aL · 3 years ago
Amazing video!
@mohammedbelgoumri · 3 years ago
Most underrated research channel on YouTube! Fantastic papers 👏👏
@insightfool · 3 years ago
Thank you for such a clear explanation of this topic!
@krishnaaditya2086 · 3 years ago
Awesome, thanks!
@EtienneADPienaar · 3 years ago
Interesting and excellent presentation! I have two questions: 1) How does it perform for small samples? E.g., when you generate a short trajectory for the Lorenz system? 2) The dynamical systems you've presented are deterministic. How robust is the methodology when the systems are stochastic? E.g., a nonlinear system of stochastic differential equations.
@peterj.baddoo3813 · 3 years ago
Great questions! 1) It will depend on your aims but, as with many of these methods, more data is usually better. We find that a quantitative description of the spectrum needs nonlinear transients whereas a qualitative reconstruction doesn't need much data. Of course, the rank of the data is more important than the number of samples, so samples from different nonlinear regimes can be helpful. We are also working on a physics-informed version that requires far fewer samples than usual. 2) I have not tried the method yet for SDEs but I hope to in the future!
@zhenpeng7031 · 3 years ago
Interesting work. DMD and SINDy apply to unforced systems; however, most real-world systems are non-autonomous. How can the LANDO method be applied to a nonlinear system with unknown external excitation?
@peterj.baddoo3813 · 3 years ago
Thanks for the question! There are a couple of ways to model this. One is to incorporate an unknown control variable into the model as we describe in appendix C. For a non-autonomous system you could include time as an explicit function of the kernel. On the other hand, if the transition matrix of the (nonlinear) system is varying in time then you could use the online version of the algorithm (described in appendix B) with an exponential weighting factor or windowing.
@zhenpeng7031 · 3 years ago
@@peterj.baddoo3813 Thanks for your valuable response. I will follow up on this paper.
@zhenpeng7031 · 3 years ago
@@peterj.baddoo3813 Hi Peter, thanks for your reply. I've carefully read appendix C. Should the control force be a known input, as in DMDc? My question concerns the situation of an unknown control force. Hope to hear from you.
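One concrete reading of "include time (or a control) as an explicit input to the kernel" is to evaluate the kernel on the state augmented with the extra variable. A toy sketch, with my own illustrative names rather than the notation of appendix C:

```python
import numpy as np

def gauss(u, v, ell=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * ell ** 2))

# Concatenate the control input (or explicit time) with the state and
# evaluate the kernel on the augmented vector.
def augmented_kernel(x, u, xp, up, ell=1.0):
    return gauss(np.concatenate([x, u]), np.concatenate([xp, up]), ell)

x, xp = np.ones(3), np.ones(3)
u, up = np.array([0.0]), np.array([1.0])
same = augmented_kernel(x, u, xp, u)   # identical state and control
diff = augmented_kernel(x, u, xp, up)  # same state, different control
print(same, diff)  # 1.0 and exp(-1/2): the control now influences the model
```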
@harshavardhans3998 · 3 years ago
This looks really interesting. I have been using SINDy to discover the dynamics of my time-series data and the results are not that great. I'm curious to apply LANDO and check what the difference could be. However, I have one question: do you think LANDO can capture dynamics if the data are stochastic and observed at very few time points?
@peterj.baddoo3813 · 3 years ago
Thanks for the question, that sounds like a challenging scenario but it could be worth a try with LANDO! Sometimes the kernel representation can uncover a latent space that cannot be represented with finite-dimensional features. This can allow more efficient model identification, which could be relevant in your case.
@harshavardhans3998 · 3 years ago
@@peterj.baddoo3813 Thank you for your answer.
@AyyappanHabel · 3 years ago
Very interesting work
@zhihuachen3613 · 3 years ago
Great work! Excellent research!
@NeoxX317 · 3 years ago
Great work!!
@1337RecklessX · 3 years ago
Great work! I am interested in the implications of the Kuramoto model of synchronization in neural oscillations and its impact on consciousness.
@kouider76 · 3 years ago
Thank you for this great presentation. I will definitely consider applying this method to dynamic structural behaviour, especially active vibration control. Is the code open access?
@peterj.baddoo3813 · 3 years ago
Thanks for your comment, Kouider! The code will be published open access here in the coming days: github.com/baddoo/LANDO
@kouider76 · 3 years ago
@@peterj.baddoo3813 Thanks, Peter. Looking forward to more videos like this.
@sebastiangutierrez6424 · 3 years ago
Really interesting!! I have two questions. 1) Have you tested this method with equations that have multiple-scale phenomena, like the Navier-Stokes equations? 2) Is the method robust under perturbation of the data? For example, adding to each measurement a realization of a normal distribution.
@peterj.baddoo3813 · 3 years ago
Hi Sebastian, thanks for the questions! 1) We are currently testing the method on data from channel flow simulations to learn the full Navier-Stokes equations! There is scope to include the effects of multiple scales in kernel design. 2) We discuss the sensitivity to noise in appendix E of the arXiv paper (arxiv.org/abs/2106.01510). Some problems might require smoothing the data before applying LANDO (e.g. via total-variation regularised differentiation).
@sebastiangutierrez6424 · 3 years ago
@@peterj.baddoo3813 Thanks a lot for the answers! Your work is really interesting. About the multiple scales in kernel design: are multiple scales included via the different magnitudes of the weights for each kernel? I have an additional question, but it's about the general framework of data-driven PDE/ODE identification. Do you know if these methods have been applied to delay ODEs?
@peterj.baddoo3813 · 3 years ago
@@sebastiangutierrez6424 Sure, you can include this both through the choice of weights and the type of functions included in the kernel. Similar methods have been applied to delay differential equations, but only in the linear case e.g. www.sciencedirect.com/science/article/pii/S2405896318309832
@sebastiangutierrez6424 · 3 years ago
@@peterj.baddoo3813 Thanks a lot!
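On weighting kernels for multiple scales: a sum of Gaussian kernels with different length scales is itself a valid kernel, and the weights set how much each scale contributes. A toy sketch, where the scales and weights are arbitrary illustrative choices:

```python
import numpy as np

# A weighted sum of positive-semidefinite kernels is again a kernel;
# each term responds to separations at a different length scale.
def multiscale_kernel(u, v, ells=(0.1, 1.0, 10.0), weights=(1.0, 1.0, 1.0)):
    r2 = np.sum((u - v) ** 2)
    return sum(w * np.exp(-r2 / (2 * ell ** 2))
               for w, ell in zip(weights, ells))

rng = np.random.default_rng(4)
u, v = rng.standard_normal((2, 5))
print(multiscale_kernel(u, v))
# Down-weighting the finest scale changes the response to small
# separations without much affecting the large-scale component.
print(multiscale_kernel(u, v, weights=(0.1, 1.0, 1.0)))
```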
@PhDHugo · 3 years ago
I liked the structure of your presentation, how did you edit the video like that? I would like to do the same for some activities at my college.
@peterj.baddoo3813 · 3 years ago
Hi Hugo, this was recorded using a "lightboard studio" e.g. www.lightboard.info/. You can see many great lightboard presentations on Steve Brunton's channel: kzbin.info
@fly-code · 3 years ago
great job!!!
@tommclean9208 · 3 years ago
The math is way beyond anything I understand, but I still find this stuff fascinating. I wish I was able to do this stuff. Great video!