Comments
@cybersid · A year ago
I do not know you, buddy. I saw your profile online and came here. Wherever you are, please rest in peace. God bless.
@ths3100 · A year ago
A brilliant young mathematician, gone too soon. RIP.
@darioramirez-pico2041 · A year ago
RIP 🕊️
@pilleater · A year ago
RIP
@GregHuffman1987 · A year ago
♤♤♤ ♡♡◇
@earnestinenelson2777 · A year ago
RIH King Baddoo🙏
@normac5465 · A year ago
You are with God in heaven 🙏
@phillustrator · A year ago
RIP man
@fatunsinmodupe357 · 2 years ago
Good day, Dr. Peter Baddoo. I would like to have your email address.
@aviskardhaval818 · 2 years ago
I want you to make videos on how to incorporate viscous effects and separation effects in potential flow.
@orionxtc1119 · A year ago
He died playing basketball a week ago
@aviskardhaval818 · 2 years ago
Really very nice, Peter!
@jonathansaunders7665 · 3 years ago
Very interesting stuff and well explained! Just a small question: if a mapping is linear in both the first and the second arguments, does that make it bilinear?
@peterj.baddoo3813 · 3 years ago
That's a very astute point; the standard linear kernel used in DMD (e.g. 13:08 and 15:30) is bilinear although more generic kernels such as Gaussian and polynomial are not!
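The bilinearity of the linear kernel is easy to verify numerically. Here is a minimal numpy sketch (illustrative only, not the paper's code; the vectors and scalars are arbitrary) checking that k(x, y) = ⟨x, y⟩ is linear in each argument, while a Gaussian kernel is not:

```python
import numpy as np

def k_linear(x, y):
    # The standard linear kernel used in DMD: k(x, y) = <x, y>.
    return float(np.dot(x, y))

def k_gauss(x, y, sigma=1.0):
    # A Gaussian kernel, for contrast: not linear in either argument.
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2)))

rng = np.random.default_rng(0)
x1, x2, y = rng.standard_normal((3, 4))
a, b = 2.0, -3.0

# Linearity in the first argument...
lhs_first = k_linear(a * x1 + b * x2, y)
rhs_first = a * k_linear(x1, y) + b * k_linear(x2, y)

# ...and in the second; holding in both arguments is exactly bilinearity.
lhs_second = k_linear(y, a * x1 + b * x2)
rhs_second = a * k_linear(y, x1) + b * k_linear(y, x2)
```

The same check fails for `k_gauss`, consistent with the reply above.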
@souvikdas7773 · 3 years ago
How do you eliminate redundant samples and obtain a sparse representation if the underlying space (here, the product space X × Y from which the sample points (x, y) are collected) is high-dimensional (reasonably high)?
@peterj.baddoo3813 · 3 years ago
In that case, you can use linear PCA as opposed to kernel PCA (which is approximately what we're doing here, except without orthogonalisation). Indeed, you can combine linear PCA and kernel PCA if the state dimension is large in both the original and kernel spaces. Let me know if that doesn't answer your question!
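For readers who want to see the kernel-PCA step mentioned above concretely, here is a minimal numpy sketch (an illustration, not the authors' implementation; the Gaussian kernel, bandwidth, and dimensions are arbitrary choices) that centres the Gram matrix in feature space and projects onto the leading components:

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian Gram matrix K_ij = k(x_i, x_j).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_pca(X, n_components=2, sigma=1.0):
    n = X.shape[0]
    K = gaussian_gram(X, sigma)
    # Centre the Gram matrix, i.e. subtract the feature-space mean.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    # eigh returns eigenvalues in ascending order; keep the largest ones.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Project the training points onto the normalised principal directions.
    return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
Z = kernel_pca(X, n_components=3)
```

Linear PCA is the special case where the Gram matrix is X @ X.T, which is one way to combine the two when both spaces are large.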
@souvikdas7773 · 3 years ago
@@peterj.baddoo3813 Thank you. It would be nice if you could share a reference where these cases have been dealt with.
@peterj.baddoo3813 · 3 years ago
@@souvikdas7773 Here's the original Kernel Recursive Least Squares paper that explicates the connection between dictionary learning and kernel PCA: ieeexplore.ieee.org/document/1315946 The Wikipedia page on PCA is quite good, including the subsection on nonlinear PCA: en.wikipedia.org/wiki/Principal_component_analysis Also, our paper is here: arxiv.org/abs/2106.01510
@souvikdas7773 · 3 years ago
@@peterj.baddoo3813 Thanks again.
@imicoolno1 · 3 years ago
How do you choose nu at 5:59?
@peterj.baddoo3813 · 3 years ago
nu represents the sparsity of the dictionary so larger nu implies a sparser dictionary. If you want a generalisable and fast model then larger nu is better, but if it's too large then the model can be inaccurate. Another view of nu is that it functions as a regulariser for the model, so increasing nu can also prevent overfitting. The algorithm is fast enough that you can try a few different nu's and pick the best one; at present, there is not an optimal way to choose nu a priori.
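The role of nu can be illustrated with the ALD-style (almost-linearly-dependent) dictionary test from the kernel recursive least squares literature. The sketch below is illustrative only, not the LANDO code; the Gaussian kernel, sample set, and tolerances are arbitrary. A sample joins the dictionary only when the residual of projecting its feature vector onto the current dictionary exceeds nu, so a larger nu yields a sparser dictionary:

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def build_dictionary(samples, nu, sigma=1.0):
    """Greedy ALD-style selection: a sample is added only if it is NOT
    almost linearly dependent on the current dictionary in feature space."""
    D = samples[:1]
    for x in samples[1:]:
        K = gaussian_gram(D, D, sigma)
        k_vec = gaussian_gram(D, x[None, :], sigma)[:, 0]
        # pi = best reconstruction of phi(x) from the dictionary features.
        pi = np.linalg.solve(K + 1e-10 * np.eye(len(D)), k_vec)
        delta = 1.0 - k_vec @ pi  # residual; k(x, x) = 1 for this kernel
        if delta > nu:
            D = np.vstack([D, x])
    return D

rng = np.random.default_rng(2)
samples = rng.standard_normal((200, 2))
sparse_dict = build_dictionary(samples, nu=0.5)   # larger nu -> fewer atoms
dense_dict = build_dictionary(samples, nu=1e-4)   # smaller nu -> more atoms
```

Sweeping nu and comparing model error, as suggested above, amounts to rerunning this selection at a few tolerances.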
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 Thanks, that makes sense. Are there any problems that can come with using the L2^2 norm as a distance metric in this context? I can see why you've used it to get a direct solution, but could $\pi_t$ ever be sparse or something like that?
@peterj.baddoo3813 · 3 years ago
@@imicoolno1 Yes, great questions. One philosophical issue is that the L2 norm doesn't have a clear physical interpretation in the feature space induced by the kernel. In the original feature space, L2^2 usually corresponds to a measure of energy. So other norms may be more meaningful in certain applications; you could certainly adapt this work to look for a sparse $\pi_t$, but I don't know if the same updating equations will work.
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 thank you very much Peter! Really fascinating work 🙂
@imicoolno1 · 3 years ago
Would something like a Fourier basis work as a kernel?
@peterj.baddoo3813 · 3 years ago
Absolutely, this is the idea behind the famous "Random Fourier Features"! people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf
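As a concrete illustration of that idea, here is a minimal numpy sketch of Random Fourier Features (illustrative, not from the paper; the bandwidth, feature count, and data are arbitrary), where the inner product of randomised cosine features approximates a Gaussian kernel:

```python
import numpy as np

def rff_features(X, n_features, sigma=1.0, rng=None):
    """Map X to random Fourier features z(x) so that z(x) @ z(y)
    approximates the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density, plus phases.
    W = rng.standard_normal((d, n_features)) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 3))
Z = rff_features(X, n_features=20000, rng=np.random.default_rng(4))

# Compare against the exact Gaussian Gram matrix (sigma = 1).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
exact = np.exp(-d2 / 2.0)
err = np.abs(exact - Z @ Z.T).max()
```

The approximation error shrinks like 1/sqrt(n_features), which is the trade-off the linked paper analyses.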
@imicoolno1 · 3 years ago
@@peterj.baddoo3813 Thanks!
@claudiocanalesd.6862 · 3 years ago
Great videos!
@alialedarvish4192 · 3 years ago
Thank you for your excellent presentation
@NoNTr1v1aL · 3 years ago
Amazing video!
@mohammedbelgoumri · 3 years ago
Most underrated research channel on YouTube! Fantastic papers 👏👏
@insightfool · 3 years ago
Thank you for such a clear explanation of this topic!
@krishnaaditya2086 · 3 years ago
Awesome Thanks!
@EtienneADPienaar · 3 years ago
Interesting and excellent presentation! I have two questions: 1) How does it perform for small samples? E.g., when you generate a short trajectory for the Lorenz system? 2) The dynamical systems you've presented are deterministic. How robust is the methodology where the systems are stochastic? E.g., a non-linear system of Stochastic Differential Equations.
@peterj.baddoo3813 · 3 years ago
Great questions! 1) It will depend on your aims but, as with many of these methods, more data is usually better. We find that a quantitative description of the spectrum needs nonlinear transients whereas a qualitative reconstruction doesn't need much data. Of course, the rank of the data is more important than the number of samples, so samples from different nonlinear regimes can be helpful. We are also working on a physics-informed version that requires far fewer samples than usual. 2) I have not tried the method yet for SDEs but I hope to in the future!
@zhenpeng7031 · 3 years ago
Interesting work. DMD and SINDy apply to unforced systems, but most real-world systems are non-autonomous. How can the LANDO method be applied to a nonlinear system with unknown external excitation?
@peterj.baddoo3813 · 3 years ago
Thanks for the question! There are a couple of ways to model this. One is to incorporate an unknown control variable into the model as we describe in appendix C. For a non-autonomous system you could include time as an explicit function of the kernel. On the other hand, if the transition matrix of the (nonlinear) system is varying in time then you could use the online version of the algorithm (described in appendix B) with an exponential weighting factor or windowing.
@zhenpeng7031 · 3 years ago
@@peterj.baddoo3813 Thanks for your valuable response. I will follow up on this paper.
@zhenpeng7031 · 3 years ago
@@peterj.baddoo3813 Hi Peter, thanks for your reply. I've carefully read appendix C. Should the control force be a known input, as in DMDc? My question is about the situation of an unknown control force. Hope to hear from you.
@harshavardhans3998 · 3 years ago
This looks really interesting. I have been using SINDy to discover the dynamics of my time series data and the results are not that great. I'm curious to apply LANDO and check what the difference could be. However, I have one question: do you think LANDO can capture the dynamics if the data are stochastic and observed at very few time points?
@peterj.baddoo3813 · 3 years ago
Thanks for the question, that sounds like a challenging scenario but it could be worth a try with LANDO! Sometimes the kernel representation can uncover a latent space that cannot be represented with finite-dimensional features. This can allow more efficient model identification, which could be relevant in your case.
@harshavardhans3998 · 3 years ago
@@peterj.baddoo3813 Thank you for your answer.
@AyyappanHabel · 3 years ago
Very interesting work
@zhihuachen3613 · 3 years ago
Great work! Excellent research!
@NeoxX317 · 3 years ago
Great work!!
@1337RecklessX · 3 years ago
Great work! I am interested in the implications of the Kuramoto model of synchronization for neural oscillations and its impact on consciousness.
@kouider76 · 3 years ago
Thank you for this great presentation. I will definitely consider applying this method to dynamic structural behaviour, especially active vibration control. Is the code open access?
@peterj.baddoo3813 · 3 years ago
Thanks for your comment, Kouider! The code will be published open access here in the coming days: github.com/baddoo/LANDO
@kouider76 · 3 years ago
@@peterj.baddoo3813 Thanks, Peter. Looking forward to more videos like this.
@sebastiangutierrez6424 · 3 years ago
Really interesting!! I have two questions. 1) Have you tested this method with equations that exhibit multiple-scale phenomena, like the Navier-Stokes equations? 2) Is the method robust under perturbation of the data? For example, adding to each measurement a realization of a normal distribution.
@peterj.baddoo3813 · 3 years ago
Hi Sebastian, thanks for the questions! 1) We are currently testing the method on data from channel flow simulations to learn the full Navier-Stokes equations! There is scope to include the effects of multiple scales in kernel design. 2) We discuss the sensitivity to noise in appendix E of the arXiv paper (arxiv.org/abs/2106.01510). Some problems might require smoothing the data before applying LANDO (e.g. via total-variation regularised differentiation).
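Total-variation regularised differentiation requires an iterative solver, but the benefit of smoothing before differentiating can be shown with a much cruder stand-in. This sketch is illustrative only (moving-average smoothing rather than TV regularisation; the window, noise level, and signal are arbitrary) and compares finite differences of raw versus smoothed noisy samples of sin(t):

```python
import numpy as np

def smoothed_derivative(t, y, window=11):
    """Moving-average smoothing followed by finite differences; a crude
    stand-in for total-variation regularised differentiation."""
    kernel = np.ones(window) / window
    y_smooth = np.convolve(y, kernel, mode="same")
    return np.gradient(y_smooth, t)

t = np.linspace(0.0, 2.0 * np.pi, 2000)
rng = np.random.default_rng(5)
y_noisy = np.sin(t) + 0.05 * rng.standard_normal(t.size)

dy_raw = np.gradient(y_noisy, t)          # differencing amplifies the noise
dy_smooth = smoothed_derivative(t, y_noisy)

# Compare with the true derivative cos(t), away from the boundaries.
inner = slice(50, -50)
err_raw = np.abs(dy_raw[inner] - np.cos(t)[inner]).max()
err_smooth = np.abs(dy_smooth[inner] - np.cos(t)[inner]).max()
```

The smoothed estimate is dramatically closer to cos(t), which is why some preprocessing of noisy data can help before applying LANDO.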
@sebastiangutierrez6424 · 3 years ago
@@peterj.baddoo3813 Thanks a lot for the answers! Your work is really interesting. About the multiscale in kernel design, are multiple scales included by the different magnitudes of the weights for each kernel? I have an additional question, but it's about the general framework of data driven PDE/ODE identification. Do you know if these methods have been applied to delay ODEs?
@peterj.baddoo3813 · 3 years ago
@@sebastiangutierrez6424 Sure, you can include this both through the choice of weights and the type of functions included in the kernel. Similar methods have been applied to delay differential equations, but only in the linear case e.g. www.sciencedirect.com/science/article/pii/S2405896318309832
@sebastiangutierrez6424 · 3 years ago
@@peterj.baddoo3813 Thanks a lot !
@PhDHugo · 3 years ago
I liked the structure of your presentation. How did you edit the video like that? I would like to do the same for some activities at my college.
@peterj.baddoo3813 · 3 years ago
Hi Hugo, this was recorded using a "lightboard studio" e.g. www.lightboard.info/. You can see many great lightboard presentations on Steve Brunton's channel: kzbin.info
@fly-code · 3 years ago
Great job!!!
@tommclean9208 · 3 years ago
The math is way beyond anything I understand, but I still find this stuff fascinating. I wish I were able to do this stuff. Great video!
@hfkssadfrew · 3 years ago
Very interesting work!