LQR Method (Dr. Jake Abbott, University of Utah)

77,021 views

JJAbbottatUtah

Comments: 81
@tylerhamer 8 years ago
Wow. This was great! None of my professors at MIT have been able to teach this physical intuition. Thanks so much!
@smilesmile787 8 years ago
+Tyler Hamer Yes, his example was very straightforward.
@mkhalil007 3 years ago
OoOo
@venkr1728 7 years ago
Brilliant! A good teacher explains the complexity of a problem, but a great teacher simplifies a complex problem. This is an example of great teaching.
@ozyozk9466 6 years ago
Hands down the best LQR tutorial I have ever seen.
@cainghorn 8 years ago
Speaking as an MSc in automation, I have never had someone explain this to me so clearly. Why would you ever go to a university, with pearls like these found on YT? :)
@christophberger599 7 years ago
Really nice video. I'm studying in Vienna and didn't have an idea of how exactly the LQR worked, but now I understand. Thank you.
@imanplus 12 years ago
Thanks Dr. Abbott. That was the best explanation for LQR I've ever seen. Keep it up.
@magdiyouseff4661 6 years ago
It's the first time I understand the reason behind the creation of the objective function J. Thank you for your clear explanation.
@ericwindhede5937 12 years ago
Thank you. Very nice way to introduce LQR. I'm looking forward to seeing you talk about the Kalman Filter next. :)
@salmann94 7 years ago
One of the best explanations for the topic. Thank you!
@alial-heji3282 9 years ago
Thank you for posting this. Very easy to follow and understand.
@vksateesh 7 years ago
Very good explanation, sir. Thank you.
@dedenmusa 12 years ago
Thanks Dr. Abbott. Greetings from Indonesia.
@devkumanan1773 9 years ago
Great video, very well explained and easy to follow! Thank you.
@klam77 3 years ago
WOW... clarity. I was also hoping for some intuition for why they call it LQR! Basic math analysis says LQ is where you solve a quadratic ("Q") to meet a linear ("L") constraint, but I don't see how that ties to Riccati etc. But otherwise your video is TOPS!
@klam77 3 years ago
Oops! I see it now. It's the same: a quadratic cost function (incorporating input and state) subject to a linear (proportional to state) input! Linear-Quadratic.
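For anyone skimming the thread, that observation is just the standard infinite-horizon LQR setup written out: a quadratic cost in the state and input, minimized subject to linear dynamics, whose optimal solution turns out to be a linear state-feedback law obtained from the algebraic Riccati equation.

```latex
% Quadratic ("Q") cost, minimized subject to linear ("L") dynamics:
\[
  J = \int_0^{\infty} \left( x^\top Q\, x + u^\top R\, u \right) dt,
  \qquad \dot{x} = A x + B u .
\]
% The optimal input is linear state feedback, with P solving the algebraic Riccati equation:
\[
  u = -K x, \qquad K = R^{-1} B^\top P, \qquad
  A^\top P + P A - P B R^{-1} B^\top P + Q = 0 .
\]
```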
@zyot511 10 years ago
Thanks from Colombia. I really like your videos!
@mohamadmawed6078 7 years ago
Your explanation is really amazing. Thanks a lot, sir.
@_electro_101 9 years ago
Very well explained. I happened to see a general case of choosing R = 1 and Q = C'C (C transpose times C). If so, why?
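On that question: with y = Cx, choosing Q = CᵀC makes the state penalty xᵀQx equal to yᵀy, so the cost weights the output rather than individual states, and R = 1 is just a unit effort weight that can then be scaled to trade effort against output error. A minimal sketch of that choice, assuming a generic double-integrator example and SciPy (none of this is from the video):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant (not from the video): a double integrator with position output.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Output-weighted cost: x'Qx = (Cx)'(Cx) = y'y, with a unit effort weight.
Q = C.T @ C
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the gain K = R^{-1} B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("LQR gain K =", K)
print("Closed-loop poles:", np.linalg.eigvals(A - B @ K))
```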
@kaizhou7331 8 years ago
Very good explanation for a starter.
@youssefabsi6296 1 year ago
That was really good. Thanks!
@thibaultcantou3508 9 years ago
Thanks a lot for those very clear explanations!
@PronoyBiswas 10 years ago
Excellent video - very well explained.
@luisdamed 7 years ago
Thank you!
@JadtheProdigy 5 years ago
From a high-level point of view, you give the LQR controller a trajectory X from 0:T, a cost for trajectory deviation Q, and a cost for effort R, and it returns the effort U from 0:T, as well as Xnew from 0:T, such that the dynamic constraints are met while J is minimized? In other words, X and Xnew may be different?
@versatran01 11 years ago
Nice video! Explains everything clearly! Great job!
@isaacsilva9631 11 years ago
You are the best, man. Thanks very much from Brazil.
@bobanisback 7 years ago
Thank you
@multimirage 9 years ago
Well said!
@wszolasss 10 years ago
SOOO HELPFUL! THANK YOU!
@viktorsawtschenko440 11 years ago
Very nice video!
@AN-zk7kz 6 years ago
Many thanks for the great video! I would like to ask about the name of the method you followed to initialize the values of Q and R, i.e., when you chose to start with the square of the inverse of the maximum value. Many thanks in advance!
@p.z.8355 5 years ago
So how would I change the control law to make it a tracker instead of a regulator?
@georgevarghese5887 7 years ago
This was very well explained. I need to know how we can use the ABC algorithm to decide the Q and R parameters.
@HarishKumar-gw8bz 6 years ago
Well, you can use the PSO algorithm, which is quite easy to apply compared to the ABC one.
@spartanarmado 4 years ago
Very useful! Thanks!
@psjacome 10 years ago
Thanks a lot for your tutorial. In the lecture you said it is necessary to simulate the system in order to choose the Q and R values. Do you have some examples of simulations in MATLAB?
@jalpeshlimbola3958 7 years ago
Have you got the solution for choosing the values of Q and R?
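No MATLAB files are posted with the video, but the loop the lecture describes (pick Q and R, compute the gain, simulate the closed loop, and check whether the states and the control input stay within acceptable bounds) is short to sketch. The sketch below uses Python/SciPy and a made-up plant, so treat it only as an illustration of the workflow:

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Hypothetical plant (not from the lecture): a lightly damped second-order system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

def closed_loop_response(Q, R, x0, t_final=10.0):
    """Compute the LQR gain for (Q, R) and simulate x' = (A - B K) x from x0."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B' P
    Acl = A - B @ K
    sol = solve_ivp(lambda t, x: Acl @ x, (0.0, t_final), x0)
    u = -(K @ sol.y)                         # control effort along the trajectory
    return sol.t, sol.y, u

# Try a candidate weighting and inspect peak state and input magnitudes:
# if the input is too large, increase R; if the states converge too slowly, increase Q.
t, x, u = closed_loop_response(np.diag([10.0, 1.0]), np.array([[1.0]]), x0=[1.0, 0.0])
print("max |x1| =", np.abs(x[0]).max(), " max |u| =", np.abs(u).max())
```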
@mohamadballout3847 5 years ago
WOOOOWWWW!! Awesome!
@LuisRocha2 10 years ago
Great tutorial, keep it up x)
@mohamedessamish 8 years ago
A question please: if the system is asymptotically stable and an input excites the system and then the input vanishes, will the states of the system vanish, and what will happen to the output, will it also vanish? Thanks for your response in advance.
@ameerjanabi917 10 years ago
Awesome
@zoro19840 11 years ago
Really thankful. (y)
@salmansircar5606 9 years ago
Thanks, your explanation was very helpful.
@Sasipano 10 years ago
Thanks for a great introduction. Moreover, I was curious about using it for my inverted pendulum project, so can you help me with that?
@bilalahmad-qo4wk 10 years ago
I think that I can help you; what kind of problem are you facing?
@alonealonesupervisor537 8 years ago
+bilal ahmad Brother, do you have the textbook for this course?
@LNasterio 5 years ago
This is when you realize professors from some top-end universities are a JOKE!
@manofsteeeeel-j1g 4 years ago
+1
@klam77 3 years ago
Well, they are just too comfortable with abstraction and leap ahead too quickly, but to beginners it appears there are NO anchors in the material!
@pnachtwey 4 years ago
+1 for saying zeros can cause overshoot even though the closed-loop poles are on the negative real axis in the s-domain. So now the problem changes from guessing where to put the closed-loop poles to guessing how to choose the weights. It seems like there needs to be yet another level of optimization to select the best weights for Q and R. It seems there are many possibilities for "optimal", or the term optimal is used loosely. Saturation isn't as big a problem as feedback resolution. BTW, it is possible to place zeros too.
@jalpeshlimbola3958 7 years ago
The theoretical understanding is fine, but the major problem with the LQR method is how to decide the right values of the matrices Q and R to implement in a real physical system. If anyone knows, please let me know. Thanks!
@niteshagrawal486 7 years ago
It depends on the system you are considering, though the main idea is to minimize J.
@brianblasius 3 years ago
Use Bryson's rule, but this only gives you a starting point. After using the rule, you can fine-tune by hand.
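Bryson's rule, as mentioned, sets each diagonal weight to the inverse of the square of the largest acceptable value of the corresponding state or input, which is also the initialization asked about earlier in the thread. A minimal sketch with made-up limits:

```python
import numpy as np

# Bryson's rule: start with diagonal weights equal to 1 / (maximum
# acceptable value)^2 for each state and each input, then fine-tune by hand.
x_max = np.array([0.1, 1.0])    # hypothetical limits on each state
u_max = np.array([5.0])         # hypothetical limit on the input

Q0 = np.diag(1.0 / x_max**2)    # initial state weighting
R0 = np.diag(1.0 / u_max**2)    # initial effort weighting

print("Q0 =\n", Q0)
print("R0 =\n", R0)
```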
@NSAwatchesME 9 years ago
Well, I understood none of that.
@luzltorai 8 years ago
That is a great video. Thanks, Dr. Jake Abbott.
@rezah336 6 months ago
I just place all the closed-loop poles on the negative real axis to get no overshoot and adjust the speed by moving them along the axis. The speed is determined by the control signal.
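That alternative, direct pole placement instead of tuning Q and R, can be sketched with SciPy's place_poles; the plant and pole locations below are illustrative only:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical double integrator (not from the video).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Place both closed-loop poles on the negative real axis: no overshoot from the
# poles themselves (zeros can still cause some), and moving them further left
# speeds up the response at the cost of a larger control signal.
desired_poles = [-2.0, -3.0]
K = place_poles(A, B, desired_poles).gain_matrix

print("State-feedback gain K =", K)
print("Closed-loop poles:", np.linalg.eigvals(A - B @ K))
```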
@brucemurdock5358 1 year ago
By far the best teacher for control systems and linear algebra (the 2nd one is arguable, haha).
@guangweiwang9228 7 years ago
Great physical intuition, thank you.
@giancarlokuosmanen9723 3 years ago
Awesome lecture, thanks!
@EricLikesTurtles 9 years ago
Thanks for posting this. Very clear and helpful. Question: How do I select Q when one or more states is not observable, but is stable? For example, suppose the speed of a DC motor (generalized as a stable, first-order system with known dynamics and a voltage input) is itself the control input to a fully controllable, linear system. Furthermore, suppose that for whatever reason, the motor speed is not observable. When I append the motor dynamics to the system, I now have an additional stable but unobservable state. Since I cannot measure the speed of the motor, I ultimately should have a feedback gain of zero that corresponds to motor speed. I assume that for the Q matrix, I would set the q term corresponding to the motor speed to zero. Is this correct? Thanks for your time.
@sumitshrestha2653 7 years ago
Thank you, Sir! Can you upload a lecture on parameter estimation: batch least squares, constrained least squares, and sequential least squares?
@eatctitox 11 years ago
Great explanation! I really appreciate you having a practical approach to explain the details... Super like!
@antoniomdn 10 years ago
Thank you very much for the tutorial, it explains it very clearly.
@25aaditya 7 years ago
This is so amazing. Thank you so much for a clear explanation of LQR.
@bruno_sjc_ 7 years ago
You rock! What a clear explanation! Thank you very much!
@cendit420 9 years ago
Thank you for the explanation, very helpful.
@kokorot17 6 years ago
Thanks, Sir. You made my life a bit easier!
@giuseppealmontelalli840 7 years ago
Please make a video on LQG control.
@mohamadmawed6078 6 years ago
Amazing and great explanation, sir.
@lionconvoy8622 8 years ago
Very clear explanation! Thanks! :)
@Pi314159265ify 10 years ago
Great tutorial, thank you very much!
11 years ago
Helped me big time with my project topic, thumbs up!
@dmitriymakovkin 11 years ago
Super! Clear & concise, thank you.
@Larantas 7 years ago
Wonderful. Thank you :)
@BereketAbraham 8 years ago
This video is amazing! Keep it up!
@Missionary117 7 years ago
Wow, that was very good.
@ahmadfariz7173 6 years ago
Thank you, sir!
@bhaskartushar90 6 years ago
Great explanation.
@avinashdamera6960 6 years ago
Thank you, sir.