You are the Andrew Ng of dynamical ML! (that's a compliment!)
@Eigensteve 3 years ago
I'm really flattered -- thanks!
@seneketh 3 years ago
I spent so much time with this paper, and this is you! Wow! Such great work!
@Eigensteve 3 years ago
Awesome, thanks!
@insightfool 3 years ago
Love this historical explanation of the connection between the Hopf bifurcation and the Navier-Stokes equations.
@Eigensteve 3 years ago
Thanks! I always find it fascinating too.
@insightfool 3 years ago
@@Eigensteve So, a little hand-waving math incoming here, but didn't they just map a cubic to a quadratic, with the benefit of knowing that they had a periodic phenomenon, so they could break out the imaginary numbers (the complex plane) to build a bridge across the two "dimensions", i.e., from quadratic to cubic?
@arnold-pdev 3 years ago
I presented the results of this paper for a course in turbulence back in 2018. I hadn't made the connection that you had co-authored it!
@arunprabhu8120 3 years ago
At 14:39, you mentioned that while identifying nonlinear dynamical systems that have some kind of attractor, it is important to kick the system off the attractor. Do you see connections between this phenomenon and the ML concept of model regularization, or is it more a problem of insufficient diversity in the measured data? Sorry if this is a naive question :)
@sophiakim107 4 years ago
Hi Prof. B, I have two questions about the picture at 3:45: 1) Why do you only go up to order 5 when constructing your nonlinear candidate functions? 2) I didn't quite catch how the Xi matrix values were calculated. Could you briefly explain how those values are obtained without knowing the governing equations beforehand?
@Eigensteve 4 years ago
Thanks for the questions! 1) usually I actually recommend starting with linear and seeing how well it does. If it works, we can stop there. If not, we add quadratic. If it works, we stop there. Then we add cubic, etc. So I always grow the library until we get a good model. In the Lorenz case, we could end at 2nd order. 2) The Xi values are calculated using a sparse regression algorithm called "sequentially thresholded least squares" (STLS), which is about 10 lines and is in the original paper (link in the comments). The basic idea is to perform a least squares to find Xi, and then kill all the small terms below a threshold. Then we do another least squares onto the remaining coefficients, and kill the small ones, and repeat until it converges. In practice, there are tons of algorithms to get a sparse Xi though. Lasso, Elastic Net, STLS, Bayesian, etc., and many of these have been tried by us or others in more recent papers.
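The STLS loop described above (least squares, threshold, re-fit, repeat) can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the toy system, variable names, and threshold are my own:

```python
import numpy as np

def stls(Theta, dXdt, lam=0.5, n_iter=10):
    """Sequentially thresholded least squares: solve least squares for Xi,
    zero out coefficients smaller in magnitude than lam, re-fit on the
    surviving library terms, and repeat until the sparsity pattern settles."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < lam                  # terms to prune
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # re-fit each state equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy demo: recover xdot = -2x, ydot = 3x - y from a quadratic library.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones(200), x, y, x*x, x*y, y*y])
dXdt = np.column_stack([-2*x, 3*x - y])           # exact derivatives
Xi = stls(Theta, dXdt, lam=0.5)                   # sparse coefficient matrix
```

With clean data this recovers exactly the three true nonzero coefficients; with noisy derivatives, the threshold lam acts as the sparsity knob mentioned in the video.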
@enterprixe 3 years ago
Hello Steve! I came across this video and I am actually very excited about the techniques you show here. Your entire channel is a source of inspiration for me as a recent animation creator and passionate scientist. Is the code still available? The link you provide does not seem to work anymore, and I would like to take a look at the implementation of the method to learn more about it.
@aqgi7 4 years ago
Excellent video, thanks a lot for your effort! I have a doubt regarding the Lorenz attractor. You said that if we, say, have measurements of just x, then the SVD of the Hankel matrix gives a set of "eigen-delay coordinates" that are topologically equivalent to x, y, and z. My doubt is: in the equation for x_dot, we have an explicit dependence only on x and y. Although y is coupled to z, and so x is implicitly coupled to z, isn't it surprising that measurements of x can give an eigen-delay vector that is equivalent to z as well? Also, is there a reconstruction algorithm to recover the time series for x, y, and z from the topologically equivalent attractor? Could you share some resources on this?
@Eigensteve 4 years ago
Great questions! It is very surprising that measurements of x can reconstruct an attractor that is diffeomorphic to the original attractor. This is a statement of the famous Takens embedding theorem, which is a landmark result in dynamical systems. 40 years later, we are still trying to figure out aspects of this theory. Currently, there is not a reconstruction algorithm, although this is an important current research topic in my lab.
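For anyone curious, the Hankel/SVD construction behind these eigen-delay coordinates is quick to sketch. This is a toy illustration with a pure sinusoid rather than the Lorenz system, and the function name is mine:

```python
import numpy as np

def eigen_time_delay(x, rows):
    """Stack time-shifted copies of a single measurement x into a Hankel
    matrix, then take its SVD. The right singular vectors are the
    eigen-time-delay coordinates of the embedding."""
    cols = len(x) - rows + 1
    H = np.column_stack([x[i:i + rows] for i in range(cols)])  # Hankel matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    return U, s, Vh

# A pure sinusoid has a two-dimensional delay embedding (one sine mode and
# one cosine mode), so only two singular values are significant.
t = np.linspace(0, 20, 2000)
U, s, Vh = eigen_time_delay(np.sin(t), rows=100)
```

For chaotic signals like Lorenz x, more singular values are needed, and the resulting coordinates trace out an attractor diffeomorphic to the original, as Takens' theorem guarantees.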
@aqgi7 4 years ago
@@Eigensteve I see. Will read more about Takens' embedding theorem. Exciting concepts. Thanks!
@soutrikband 4 years ago
Hello Professor, excellent video. I have a doubt, though. You said that one needs to create a set of features like x, x^2, and so on. To me this felt a lot like the feature engineering one does for a neural network. So can we interpret the feature table you created as a set of inputs for a neural network? We could interpret the weights of the neural network as the coefficients of the linear regression, and we could add the cost of the weight values to our loss function. Would this line of reasoning work? Thank you once again for this amazing lecture.
@pratyushbabe5868 3 years ago
How do you choose the best derivative here? The professor used the same Lorenz system to compute the derivatives. What happens if we instead use some difference functions to compute the derivative? I tried to compare the results but was not successful: when other discrete derivative functions are used, the identified model does not match the original system (even when changing the lambda value).
@Eigensteve 3 years ago
Good question. We actually discuss this a bit in the original SINDy paper and more in the following PDE-FIND paper. In SINDy, we use the total variation regularized derivative to approximate the derivative from noisy measurements of the system state.
@40yearoldman 1 year ago
Is it more accurate to say nonlinear functions, or linearly independent functions (nonzero Wronskian)?
@rza_ramezanii 2 years ago
Hello Dr. Brunton, I have a temporal dataset obtained from LDA, meaning that I only have time series of 1D velocity values (Wdot). Since I do not have access to the W values, I am not able to create the library matrix. How can I use the SINDy method to extract the governing equations? Best regards.
@BeyondWind 3 years ago
Since this analysis was applied to numerical simulation results, has a grid-independence study been done? I strongly suspect that some of the higher-order flow patterns identified here could be grid-related numerical artifacts.
@miladyazdanpanah3895 3 years ago
Suppose we obtain the dynamics of a system from time-series data with the amazing SINDy. Now we want to cross-validate the dynamical system. How are the initial values of the states selected? Please make a video about cross-validation! Thanks!
@theidealisticman 3 years ago
Hi Dr. Brunton. Would this algorithm work if I have input and corresponding output data for a system and I require a model of the system?
@AmanThakur-v8u 1 year ago
The code can't be downloaded; I think there is some sort of problem with the link.
@jawadmezaal5192 4 years ago
Could this method (SINDy) be used to reduce model order? If I have a high-fidelity model with 10^6 states and want to apply MPC, what is a good method to reduce the order?
@zhihuadeng5814 4 years ago
This video is very helpful. I have a question for Professor Steve Brunton: I have collected the data (state variables); how do I use SINDYc to build the model?
@georgiawillmot7927 4 years ago
Hey, did you get anywhere with this? I'm in the same situation: I have data, and I'd like to use the algorithm.
@ahmed-pk6gy 4 years ago
Me too. Please share what you have found.
@javinkhong5269 3 years ago
Is the SINDy algorithm applicable to 2D problems?
@ligezhang4000 4 years ago
Hello Dr. Brunton, I really appreciate the clarity of each video about dynamical systems and how you break the math down into a simple way to understand it. I am a researcher studying two-phase flow and complex fluids, and I am very curious and hopeful about applying this method to the problem I am working on. I have a couple of questions: can the SINDy method predict a system that will undergo instability? For instance, I have some data on fluid film breakup. Can SINDy capture this kind of dynamical system?
@ErnestoMendoza-oo1fq 1 year ago
Dear Dr. Brunton, unfortunately the link to the code seems to be dead. Have you posted the code somewhere else? Thanks.
@ErnestoMendoza-oo1fq 1 year ago
I found the problem. Somehow the code was blocked when I tried to access it using Chrome. I got full access using Edge. Thanks.
@mahan1598 4 years ago
Can this technique be used to predict forex behavior?
@zebulon220 3 years ago
Forex is a complex system, not just a chaotic one. So, short answer: no. Long answer: have a go.
@daviddalton9350 2 years ago
Dr. Brunton: I am impressed with your knowledge of data science. I am interested in writing financial trading algorithms. Are you interested in this subject, and do you have any advice or pointers as to how I might enhance my study and accomplishment of the same? David
@rexdalit3504 4 years ago
Excellent. Thx.
@dueck99 3 years ago
Thanks for the great video, Steve! Can you do a video on how exactly the L1 minimization is performed? I understand lasso, etc., and solving the problem with cross-validation; is this really how the minimization is performed in practice, or is there a more elegant technique?
@mikets42 2 years ago
May I suggest trying to identify a[ny] loudspeaker black-box nonlinear model, based on knowing what signal you feed in (music) and the measured sound pressure (a microphone at, say, 0.5 m)? That's a real-world problem easily reproducible even in the most primitive lab (or home). If you can do that, you will be able to "invert" it and produce "ideal" sound... which, so far, everybody has failed at. Miserably failed. If you have any degree of success, please let me know.