Brian is one of the greatest teachers on YouTube. I really appreciate his way of teaching complex topics.
@BrianBDouglas10 ай бұрын
🥰
@aammoojanhastam3397 Жыл бұрын
What my professor couldn't get across in a semester, I learned in a 14-minute YouTube video. As always, informative and simply brilliant. Thank you, Brian
@-ion Жыл бұрын
Every now and then over several years, I have looked for, and failed to find, an explanation of the ARE solution to LQR that I could actually understand. This is it, finally. Thank you!
@tongvanngoc_gv-t90912 ай бұрын
After reading through a pile of documents for hours, I decided to watch Brian's videos. It's always the most effective way to gain a deep understanding of the problem.
@JohnLangleyAkaDigeratus Жыл бұрын
This is an absolutely excellent presentation. Super complex, and somewhat esoteric for a lot of people, but the presentation and review, and then seeing it implemented in MATLAB, make it seem so accessible! Thank you!
@BrianBDouglas Жыл бұрын
Thanks! I really appreciate that :)
@brennanukrainetz2282 Жыл бұрын
Amazing work. I'm an undergrad MecE student at UAlberta, and your lectures have been super helpful for building intuition about what I'm learning. My textbook (Feedback Control of Dynamic Systems) is decent, but sometimes it gets lost in the weeds while missing the intuitive explanations. Thanks a bunch!
@mehmetkilic9518 Жыл бұрын
Honestly, I was waiting for this topic for a long time :)
@ga6760 Жыл бұрын
I've spent more than a few hours trying to understand how Riccati equations work after Steve Brunton mentioned their importance in his control playlist. I could never get my head around exactly what it was or how it helps solve dynamics problems. I can tell after watching this a few more times I'll be on my way. You really have a knack for explaining not just the method but the motivation in the simplest way. Thanks Brian.
@mustafabhadsorawala652 Жыл бұрын
Same here! Had no idea why the Riccati equation pops up everywhere I go. Brian is an artist!
@SamChak11 Жыл бұрын
Thanks, Brian. This is a very good introductory video on the Algebraic Riccati Equation. I hope there will be a continuation on this topic, such as solving the LQR problem or proving Lyapunov stability in Takagi-Sugeno Fuzzy Control Systems using the LMI Solvers from the Robust Control Toolbox.
@ft6637 Жыл бұрын
LMIs would already be an interesting topic on their own, I think. Maybe in the context of robust control... I should check if there is a video on this already 😅
@EddieSanchez9918 ай бұрын
You are very good at explaining things, Brian; your videos have encouraged me more and more to learn about control theory. Thank you!
@hoolladd5 ай бұрын
Could you make a video on the Kalman filter & LQG?
@tatomans1982 Жыл бұрын
excellent explanation
@venkateshnayak5096 Жыл бұрын
Ahhh, you just read my mind. A few days back I was wondering, "Why hasn't Brian made a video on the ARE?" and kaboom! You surprise us again.
@HansScharler Жыл бұрын
What other videos would you want Brian to make?
@ft6637 Жыл бұрын
This is just perfect timing, since I have an exam on exactly these topics this semester 😁😁 Will there be anything on Hamilton functions and general optimal control (dynamic programming, variational calculus, Bellman principle) fairly soon, too?
@BrianBDouglas Жыл бұрын
Maybe eventually but nothing soon. Sorry! I can't be depended on for a regular schedule these days :)
@BCarli1395 Жыл бұрын
Thank you for an informative lesson.
@rspapero19833 ай бұрын
Thanks a lot. This video has cleared up all my doubts. 😊
@romaingrobety73597 ай бұрын
Hi all, I was desperately trying to do the same thing to make the discrete algebraic Riccati equation appear from a cost function written with a sum, but I can't find any literature to help me understand how it emerges from the cost function of a discrete system. If anyone has any clues, I'd be more than happy to hear them. Many thanks for this exhaustive video, which has already enabled me to understand how the algebraic Riccati equation emerges from the cost function of a continuous system.
@hectorgautier406111 ай бұрын
Thanks for the content. I have a question, if I may. I understand that at the start we want to find u to minimize J (because u is the only variable we can tune). Then we introduce a term that has no influence on the value of J and that depends on the variable P. From here I lose the sense of the connection between our goal of minimizing J and this transformation. We deduce that to minimize J we have to make the inside of the integral zero, but to do so we constrain the shape of u with the variable P, which initially has no impact on J. So now P matters, because the expression for u depends on it, but P is outside the integral, so how can we know that making the inside of the integral zero will really minimize J? I admit I am a bit confused; I think I prefer the explanation through the Lyapunov function.
@ИльяОсокин-э3д9 ай бұрын
I have exactly the same question, yep
@BrianBDouglas9 ай бұрын
I originally was using a calculus of variations approach for the derivation (which I think makes more sense), but it would have been a lot more math and perhaps too much for this video. I'm not sure I fully understand your question, but let me try an answer. The matrix P doesn't change the value of J at all. We add in X0^T*P*X0 and then subtract it immediately. Both terms are initially outside of the integral, but with some clever algebra we pull one of the terms into the integral. Again, the cost J is unchanged; we just do this because it helps us solve the problem of finding the lowest J. At this point, there is a fixed cost outside of the integral that we can't change. But we do have the ability to affect the total cost of the integral itself. We then show that for a particular u (state feedback) and for a particular P (solution of the ARE), the integral part is zero. This is the lowest value it could be, so there isn't a total cost that can be more optimal than this. Please clarify your question if I misunderstood. Thanks!
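For readers following this thread, here is a hedged sketch of the completion-of-squares step described above, written in standard LQR notation under the usual assumptions (Q symmetric positive semidefinite, R symmetric positive definite, and a stabilizing u so that x goes to zero); it is a summary of the argument, not a quote from the video.

```latex
% Completion-of-squares sketch (standard LQR notation; assumes a stabilizing u so x(inf) = 0).
\[
J = \int_0^{\infty} \big( x^T Q x + u^T R u \big)\,dt
  = x_0^T P x_0 + \int_0^{\infty} \Big( x^T Q x + u^T R u + \tfrac{d}{dt}\big(x^T P x\big) \Big)\,dt
\]
% Adding x_0^T P x_0 and pulling d(x^T P x)/dt into the integral changes nothing, because
% \int_0^{\infty} \tfrac{d}{dt}(x^T P x)\,dt = -x_0^T P x_0 when x -> 0.
% Substituting \dot{x} = A x + B u and completing the square in u gives
\[
J = x_0^T P x_0
  + \int_0^{\infty} \Big( x^T \big( A^T P + P A + Q - P B R^{-1} B^T P \big) x
  + \big( u + R^{-1} B^T P x \big)^T R \,\big( u + R^{-1} B^T P x \big) \Big)\,dt .
\]
% If P solves the ARE, the first integrand term vanishes for every x; the second term is
% nonnegative for any u because R > 0, so J >= x_0^T P x_0 for every stabilizing u, with
% equality exactly when u = -R^{-1} B^T P x. That is why zeroing the integrand minimizes J.
```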
@oldcowbb Жыл бұрын
My favourite thing is that the discrete Riccati equation is analogous to value iteration.
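To make that analogy concrete, here is a minimal MATLAB sketch; the system matrices are made-up example values, and idare (Control System Toolbox) is used only as a cross-check. The discrete Riccati recursion is literally a value-iteration update on a quadratic value function x'Px.

```matlab
% Discrete Riccati recursion run as a value-iteration-style fixed-point iteration.
% Example matrices are arbitrary; idare is only used to cross-check the converged P.
A = [1.0 0.1; 0 1.0];
B = [0; 0.1];
Q = eye(2);
R = 1;

P = Q;                          % initialize the "value function" P_N = Q
for k = 1:500
    % Bellman backup for the quadratic value function x'Px
    P = Q + A'*P*A - A'*P*B / (R + B'*P*B) * B'*P*A;
end

Pdare = idare(A, B, Q, R);      % direct solution of the discrete ARE
disp(norm(P - Pdare))           % ~0: the iteration converges to the DARE solution
```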
@ИльяОсокин-э3д9 ай бұрын
Hi Brian! Thank you for the video. I would be glad for a comment on the following question. In the derivation we introduce P, which stays outside of the integral. After that we set both terms under the integral to zero, by setting the control equal to a product of matrices and by requiring the ARE to hold. However, it is not particularly clear to me why this exact value of P minimizes J. Why couldn't there be a matrix P' that does not set the terms under the integral to zero at all times, but that minimizes x_0^T P x_0 somehow? I was not able to find this kind of derivation in the literature; everyone just applies the HJB equation and that's it. I would be grateful for a clarification on that.
@BrianBDouglas9 ай бұрын
I responded below on @hectorgautier4061's comment. To add to it: are you asking why there isn't more than one P matrix that drives the integral term to 0 as time approaches infinity? For example, we know that the P in the video produces a value of 0 for the integral, but what if there is another P that adds the same amount from the first term as it subtracts from the second term, so the sum is still zero? I haven't thought about this, but I suspect that the quadratic nature of the cost function prevents this from being an option. Between this and the answer below, did that address your question? If not, could you clarify some more? Thanks!
@ИльяОсокин-э3д8 ай бұрын
Hi @BrianBDouglas! Yep, everything is crystal clear from the answer, thanks a lot!
@enricofioresi4550 Жыл бұрын
Could someone explain to me why "the state goes to zero as the time approaches infinity" (min 8:35)? I mean, if I have some non-zero reference state that I want to be reached, and the closed-loop system is stable, shouldn't x converge to the reference state? Thank you! :)
@zguan6397 Жыл бұрын
In that case, it would be another form of the Riccati equation.
@YangyadattaTripathy-w4r4 ай бұрын
Usually the states (x) are expressed in terms of errors (e), and the errors tend to zero as t tends to infinity.
@hussamhasan7777 Жыл бұрын
Thank you
@trendyprimawijaya314 Жыл бұрын
I'm still trying to figure out some points: what might have motivated Riccati to expand the cost function? Is it because he wants to expose the implicit relation between the input u and the state x (which ends up as u + R^-1*B^T*P*x) while keeping the quadratic x terms independent in the resulting cost function?
@ahmedhassaine3647 Жыл бұрын
I think the Riccati equation is nothing but an ODE in matrix form, the direct result of optimizing the cost function with respect to u (the input) and the state x, constrained by the state equations. So in some sense what the Riccati equation does is give us an analytic approach to find a P matrix that yields the optimal state feedback u = -K*x, where K = R^-1*B^T*P; R and B are time-independent, and only P is time-dependent (in the finite-horizon case). Thus the Riccati equation in reality has nothing to do with the cost function from a conceptual perspective; it's nothing more than an analytical tool to find a matrix P that optimizes the cost function.
@trendyprimawijaya314 Жыл бұрын
@ahmedhassaine3647 I mean, how does someone get the insight to introduce the matrix P into an already simple cost function in the first place, going through integration-by-parts and completing-the-square tricks, when it ends up as a longer cost function?
@ahmedhassaine3647 Жыл бұрын
@trendyprimawijaya314 I know it's not intuitive at first, and the video was just an introduction; in fact Mr. Brian took a different approach to introduce the algebraic Riccati equation. Typically the problem starts from a calculus-of-variations perspective. Remember, what we are trying to do here is find the best u that minimizes the cost function subject to the system dynamics. Based on Lagrange multipliers, we introduce a function called the Hamiltonian = L (the Lagrangian) + lambda' * Xdot. The magic and the confusion start with that lambda, which is called the costate (or adjoint state); you can think of it as a virtual state, one for every state, that manifests itself through the constraints. This ends up giving u = -Rinverse*B'*lambda. Going further, we end up with a system of equations written in compact matrix form where the unknowns are X and lambda, the state and the costate. The way to solve this is to assume that X and lambda are coupled, lambda = alpha*X; since lambda and X are vectors, alpha needs to be a symmetric matrix P, thus lambda = P*X, and u becomes -Rinverse*B'*P*X. With some algebraic adjustments here and there you will end up with the algebraic Riccati equation in P. So that's where P comes from; hope this satisfies you.
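For anyone who wants the same argument in symbols, here is a compact sketch of that Hamiltonian/costate route under the standard infinite-horizon LQR assumptions; it is a generic textbook-style derivation, not the one used in the video.

```latex
% Hamiltonian / costate sketch of where P comes from (standard LQR assumptions).
\[
H = x^T Q x + u^T R u + \lambda^T (A x + B u)
\]
\[
\frac{\partial H}{\partial u} = 2 R u + B^T \lambda = 0
  \;\Rightarrow\; u = -\tfrac{1}{2} R^{-1} B^T \lambda ,
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x} = -2 Q x - A^T \lambda
\]
% Guess that the state and costate are coupled as \lambda = 2 P x with P symmetric.
% Differentiating, then substituting \dot{x} = A x + B u, the expression for u, and the
% costate equation gives
\[
\big( \dot{P} + A^T P + P A + Q - P B R^{-1} B^T P \big) x = 0 \quad \text{for all } x,
\]
% i.e. P satisfies the Riccati differential equation; with \dot{P} = 0 (infinite horizon)
% this is the algebraic Riccati equation, and the feedback becomes u = -R^{-1} B^T P x.
```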
@abdelz1617 Жыл бұрын
Great video!
@hyeonseokseong24313 ай бұрын
It couldn't be clearer than this. Are you a god?
@GH-li3wj Жыл бұрын
very clear thanks!
@antongubankov4395 Жыл бұрын
brilliant!!
@myelinsheathxd7 ай бұрын
Thx for the explanation
@narendra615322 күн бұрын
Thank you so much!
@MATLAB22 күн бұрын
Glad it was helpful!
@WeiHuang-x2b15 күн бұрын
I found something wrong in this derivation. Emm... maybe it should be xT(PBR^-1TBTP)x rather than xT(PBR^TBTP)x; please could someone check whether it is wrong.
@BrianBDouglas15 күн бұрын
Could you point me to where you see that? Are you talking about the end of the equation at 10:58? I'm asking because I don't see this written exactly like this: xT(PBR^TBTP)x. I don't know what the R^T part is. Am I missing something somewhere else in the video? Thanks!
@WeiHuang-x2b13 күн бұрын
@BrianBDouglas Thanks for your reply. I resolved my confusion today when I realized that R is a symmetric matrix. One thing that could be pointed out at the beginning of the video is that R is symmetric (as is P), so the transpose of R^(-1) equals R^(-1). I hope that through our dialogue, others will not have difficulty with the derivation. :)
@WeiHuang-x2b13 күн бұрын
@BrianBDouglas The "T" I mention in this comment is the transpose notation for matrices.
@medmdeghri5659 Жыл бұрын
Does anyone know how to numerically solve the algebraic Riccati equation?
@HansScharler Жыл бұрын
Do you mean solving without using idare?
@YangyadattaTripathy-w4r4 ай бұрын
You can take an example from a reference textbook. A low-dimensional problem can be solved by hand, but for higher dimensions you need a numerical solver.
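For the continuous-time case, a minimal MATLAB sketch along those lines; the matrices are arbitrary example values, and icare/lqr are the Control System Toolbox solvers mentioned in this thread.

```matlab
% Solve the continuous-time algebraic Riccati equation numerically and build the LQR gain.
% Example system is arbitrary; icare and lqr require the Control System Toolbox.
A = [0 1; -2 -3];
B = [0; 1];
Q = diag([10 1]);
R = 1;

P = icare(A, B, Q, R);     % P solves A'P + PA + Q - P*B*inv(R)*B'*P = 0
K = R \ (B' * P);          % optimal state-feedback gain, u = -K*x

Klqr = lqr(A, B, Q, R);    % cross-check: lqr solves the same ARE internally
disp(norm(K - Klqr))       % ~0
disp(eig(A - B*K))         % closed-loop poles are in the left half-plane
```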
@andreabettani1457 Жыл бұрын
Brilliant
@xptransformation3564 Жыл бұрын
Brilliant!
@basharfahed7 Жыл бұрын
❤❤❤
@shuvodev37089 ай бұрын
❤
@ahmedhassaine3647 Жыл бұрын
I think it would have been better to introduce the algebraic Riccati equation through the calculus of variations, since the cost function is nothing but a functional, so optimizing it means finding the function that minimizes it. Personally, I think that is more intuitive because it is an indirect application of the least-action principle.
@BrianBDouglas Жыл бұрын
I started with a calculus of variations explanation at first but it took too long and was very math heavy so I pivoted to this. I know it's not ideal but these Tech Talks are really just introductions to topics to get people interested to learn more and not necessarily complete explanations.
@ahmedhassaine3647 Жыл бұрын
@BrianBDouglas Yeah, I can see your reasons, Mr. Brian, which are logical considering the mathematical background required to digest the concept. I would like to thank you personally for your Tech Talks, since in many cases they helped me get the general idea before going deep into it, and I would like to apologize for my comment if it was inappropriate.
@BrianBDouglas Жыл бұрын
@ahmedhassaine3647 Not inappropriate at all! No need to apologize. I wish I could have found a way to make it work for my video. Thanks for the comment.