Lecture 40(A): Kuhn-Tucker Conditions: Conceptual and geometric insight

59,821 views

Arizona Math Camp

A day ago

Comments: 105
@TheLZee · 4 years ago
Finally, someone who explains everything slowly and clearly. This is by far the best KT explanation on YouTube.
@averilprost9407 · 4 years ago
It's been 20 minutes and I am still amazed at that calligraphed F
@劉彥均-w6g · 3 years ago
This video really saved me... my professor just started calculating without any explanation, so I searched online and ended up here. A conceptual explanation is essential for me before starting the calculations!!!
@ArizonaMathCamp · 3 years ago
Yes, intuition is critical. You might like this story about Nobel laureate Richard Feynman: www.u.arizona.edu/~mwalker/501BReadings/FeynmanOnExamples.pdf
@weihaopan · 6 years ago
Nice explanation of KKT conditions, thanks for your contribution!
@ArizonaMathCamp · 6 years ago
Thanks! More to come.
@dmitrystikheev3384 · 4 years ago
Greetings from Moscow! Probably the best introduction to the K-T conditions I've listened to so far. Thanks a lot; your contribution is priceless! I wish there were more professors like you in the universities. Even the book on mathematics for economists I'm currently studying does not delve into the geometric interpretation of the problems it considers. Your approach builds intuition, while most courses focus on the mechanics. Thanks again!
@ArizonaMathCamp · 4 years ago
Thanks for the positive comments. You've described exactly what the lectures are intended to do: to help people understand the concepts by developing the analytical rigor on a foundation of geometrical intuition.
@danielkrupah · 2 years ago
Maths needs to be taught by experienced and old professors. This is the best video so far on optimization. I am currently doing Math Camp at one of the universities in the USA; it's just the bomb.
@ArizonaMathCamp · 2 years ago
Who you calling old?? I'm glad this was helpful. Thanks for the positive review.
@danielkrupah · 2 years ago
@@ArizonaMathCamp Sorry for the choice of words; I meant it positively. What I meant to say is that experienced professors have strong, long-established backgrounds.
@ArizonaMathCamp · 2 years ago
@@danielkrupah That's OK, I was just joking. Hey, I *am* old, can't deny it. But I can think young.
@ptitpapillon · 5 years ago
Amazing explanation! There should be more professors like you in the maths departments!
@ArizonaMathCamp · 5 years ago
Thanks! Glad you liked it.
@faresmeier9596 · 3 years ago
yes exactly igo
@orglce13 · 4 years ago
William Karush has left the chat
@halneufmille · 3 years ago
First I laughed out loud. Then I cried once I realized how nerdy I have become.
@vytran3276 · 5 years ago
This is very clear and easy to understand. Thank you so much!
@ArizonaMathCamp · 5 years ago
Glad you liked it. Thanks for the good feedback.
@marianavillabona2022 · 4 years ago
One of the best videos I've seen! You're very kind for giving us this study material, thank you! From Colombiaaa
@ArizonaMathCamp · 4 years ago
Colombia! I love Cartagena! One of my best students, 5 or 6 years ago, was from Colombia. Thanks for the positive feedback. Glad the video was helpful.
@purplerain5305 · 4 years ago
Hello! Can you please help me out? I couldn't catch what the Professor said exactly, at about 3:55: "so that we're on the boundary of, for example, the non-negative quadrant or the non-negative _______" What was in that blank?
@ArizonaMathCamp · 4 years ago
@@purplerain5305 "... the nonnegative quadrant or the nonnegative orthant."
@purplerain5305 · 4 years ago
@@ArizonaMathCamp Thank you so much!
@kirar2004 · 2 years ago
Thanks for this nice geometric explanation. The concept has become very clear to me. Thanks again!
@ArizonaMathCamp · 2 years ago
Glad it was helpful!
@hhht-vc2it · A year ago
Thank you so much for uploading these helpful videos. Much appreciated.
@ArizonaMathCamp · A year ago
Very glad they're helpful. Thanks for the positive feedback.
@Ricatellez682 · 3 months ago
Love that... the best explanation I've found so far.
@cabdallahahmad7288 · 2 years ago
You really explained this in the best way I have ever seen; thanks!
@ArizonaMathCamp · 2 years ago
I'm glad you found it helpful.
@williamcantin5871 · 2 years ago
I had some difficulty understanding this. Wow! Seeing your video really helped me understand it! Thank you very much, sir!
@ArizonaMathCamp · 2 years ago
That's great to hear! Glad it was helpful.
@talshaffar4001 · 5 years ago
The beginning of the explanation was very clear -- but I got lost at 10:21 when you started talking about the feasible vectors that satisfy all the constraints. I'm not clear on how vectors are related here. What background am I missing? Thanks for doing this!!!
@ArizonaMathCamp · 5 years ago
"Vector" is just a synonym for "point." We want to know which are the points (or the *decisions*) that satisfy all the constraints. It's important to keep in mind that the constraint (the line, in the diagram) is just the level curve of a *function*. A point is feasible (i.e., satisfies the constraint) if it lies on the "correct" side of the constraint.
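The feasibility idea in this reply can be sketched in a few lines of Python (the constraint functions and bounds below are made-up examples, not the ones drawn in the lecture):

```python
def is_feasible(x, constraints):
    """A point (vector) x is feasible if it satisfies every constraint G(x) <= b."""
    return all(G(x) <= b for G, b in constraints)

# Hypothetical constraints: 2*x1 + 3*x2 <= 12, plus x1 >= 0 and x2 >= 0
# (the nonnegativity conditions rewritten in <= form).
constraints = [
    (lambda x: 2 * x[0] + 3 * x[1], 12),
    (lambda x: -x[0], 0),
    (lambda x: -x[1], 0),
]

print(is_feasible((1.0, 2.0), constraints))  # True: 2 + 6 <= 12
print(is_feasible((3.0, 3.0), constraints))  # False: 6 + 9 > 12
```

Each constraint line in the diagram is the level curve G(x) = b; a point is on the "correct" side exactly when G(x) <= b.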
@지배받는지배자 · 4 years ago
I appreciate your nice explanation, from S. Korea!
@alexandralaw1476 · 4 years ago
This is literally the best video I have ever seen on this topic. Thank you for refreshing my memory on optimisation in such a gentle way; I may use it later in the dynamic optimisation course in Macroeconomics. (Starting to read Stokey & Lucas Jr., which is so exciting to me!!!!)
@ArizonaMathCamp · 3 years ago
Thanks! Glad you found it helpful! I'm going to do a lecture on a Stokey-Lucas-type example sometime soon.
@alexandralaw1476 · 3 years ago
Thank you so much professor! Can’t wait to see the new upload on recursive topics!
@sl-je5fg · 5 years ago
Thank you very much; you are the best math teacher on YouTube, for sure.
@ArizonaMathCamp · 5 years ago
Thanks! And you're the most perceptive commenter! ;-)
@ameliali9489 · 3 years ago
My Maths Econ exam is tomorrow, and I really wish I could have had a maths prof like you. Thanks for making the effort to do these videos; I do feel like you are my best maths teacher ever! Thanks, Prof Mark.
@ArizonaMathCamp · 3 years ago
Good luck Amelia! Ace that exam!
@ameliali9489 · 3 years ago
@@ArizonaMathCamp Dear Mark, thank you so much for your encouragement and excellent videos! I will continue to watch them even though my math econ course is finished. In fact, I was watching them like TV programmes and finished two episodes every day. They gave me hope!
@ArizonaMathCamp · 3 years ago
@@ameliali9489 I'm glad they've been so helpful, Amelia. Good luck!
@skywalk5392 · 4 years ago
An excellent lecture for understanding the KKT conditions. Thank you so much for the lecture, professor!
@ArizonaMathCamp · 4 years ago
Glad you liked it. Thanks for the good feedback.
@drushtisawant3284 · 3 years ago
Thank you so much, sir. I was struggling with the conceptual understanding of some theorems of mathematical economics, and this has definitely helped. Thank you!
@ArizonaMathCamp · 3 years ago
That's great, I'm really glad it's helped.
@philippe177 · 5 years ago
Thank you, sir. I understood it clearly.
@ArizonaMathCamp · 5 years ago
Thanks! Glad it was helpful.
@ameliali9489 · 3 years ago
An amazing use of geometric representation!
@mollyxue8536 · 2 years ago
In the second example, is it m > n? Adding an additional constraint?
@ArizonaMathCamp · 2 years ago
Yes. At about 18:55 I went from m=n=2 to m=3 and n=2 by adding a third constraint.
@thefiat18 · 5 years ago
@20:33, m should be greater than n, right? But it's written m < n.
@ArizonaMathCamp · 5 years ago
m
@thefiat18 · 5 years ago
@@ArizonaMathCamp Thank you so much Professor for the clarification!
@Thejosiphas · 5 years ago
How have you done all this while writing backwards? I'm shocked and confused.
@ArizonaMathCamp · 5 years ago
Writing on a glass screen, in the normal way, with camera on the other side, and then flipping the resulting video via software.
@qianyue4764 · 4 years ago
@@ArizonaMathCamp brilliant
@jonatanwestholm · 4 years ago
If you watch a few of these, you might notice that an unusual proportion of the lecturers are writing with their "left" hand, and also have wedding rings on their right hands.
@yoli6373 · 3 years ago
One would think he is in the mirror!
@huidezhu7566 · 4 years ago
Amazing explanation!!! Many thanks!
@ArizonaMathCamp · 4 years ago
Thanks for the nice comment. I'm glad this was helpful.
@davi37005 · 4 years ago
Thank you, professor! You just saved my life 💛
@ArizonaMathCamp · 4 years ago
Wow, that's great! Maybe I'll retitle the lecture "Kuhn-Tucker saves lives." :) I'm glad it was helpful.
@penarc2784 · 3 years ago
Wonderful class! Thanks. Also, a little question: what's the difference between the KT conditions and the KKT conditions?
@ArizonaMathCamp · 3 years ago
Just slightly different names for the same conditions. Karush's earlier, independent discovery of the conditions wasn't recognized until decades after Kuhn and Tucker's work. It's just habit for some people to say KT instead of KKT, but he really should be included.
@penarc2784 · 3 years ago
@@ArizonaMathCamp Thanks a lot!
@vinseiroja · 5 years ago
Wonderful class! Thanks.
@ArizonaMathCamp · 5 years ago
Thanks for watching and for the positive feedback.
@xba2007 · 2 years ago
Question: it's a detail, but it may be important in some cases: why were the inequalities x_i >= 0 not expressed explicitly as G^j(x) constraint functions, instead of being left implicit in the set x \in R^2_+?
@ArizonaMathCamp · 2 years ago
The formally correct (but conceptually unrevealing) answer is that treating the variables in this distinct-from-the-constraints way when writing the first-order conditions gives you the actual FOC that can be proved to characterize the optimal solution. A better answer follows from the geometry in this lecture: the FOC that you get from doing things this way are the exact analytical description of the relation between the objective-function gradient and the constraint gradients that must hold at an optimal solution. This is why the geometry is so important for understanding the KKT Conditions. You can do things the way you suggest, but then the FOC don't follow so straightforwardly.
@salitherin · 4 years ago
Thank you for this awesome video! I have a follow-up question: why is m < n at 20:33? I'm still a little confused. I thought when you added G3 that would be another constraint, so we have 3 of them (G1, G2, and G3) and two variables (x1 and x2). Am I missing something?
@ArizonaMathCamp · 4 years ago
You're right, there are 3 constraints. When I wrote m < n a minute or so earlier, it was to point out that with equation constraints we have to have m ≤ n (we can't have m > n), and to show that with inequality constraints we *don't* have to have m < n.
@salitherin · 4 years ago
@@ArizonaMathCamp Thank you so much for the reply :)
@nemathassnain8522 · 4 years ago
Amazing as always!
@googlelee7197 · 4 years ago
I have a question: how do we know the gradient vector's direction for the constraints?
@ArizonaMathCamp · 4 years ago
That's an important question. The gradient is the vector of partial derivatives of the constraint function. Those derivatives, at the point in question, tell you the direction and length of the gradient vector. For example, for the constraint 2x_1 + 3x_2 = 12, the gradient is the vector (2,3) -- the same at every point, because the constraint is linear. For many of the constraints I've drawn in these lectures, I just have in mind the direction I want the gradient to point -- I haven't specified a particular function for the constraint, so you can make up your own function to correspond to what I've drawn.
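A constraint gradient like the one in this reply can also be checked numerically with finite differences (a sketch; the linear function is the one from the reply, and the test points are arbitrary):

```python
def grad(G, x, h=1e-6):
    """Central-difference approximation of the gradient of G at point x."""
    return [
        (G([xj + (h if j == i else 0) for j, xj in enumerate(x)])
         - G([xj - (h if j == i else 0) for j, xj in enumerate(x)])) / (2 * h)
        for i in range(len(x))
    ]

# The constraint 2*x1 + 3*x2 = 12 is a level curve of G(x) = 2*x1 + 3*x2.
G = lambda x: 2 * x[0] + 3 * x[1]

print(grad(G, [1.0, 1.0]))   # ~[2.0, 3.0]
print(grad(G, [5.0, -2.0]))  # ~[2.0, 3.0] -- the same at every point, since G is linear
```

For a nonlinear constraint the same routine shows how the gradient (and hence its direction) changes from point to point.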
@sbasu31ag · 5 years ago
Why is there a gradient vector for the cost function at the optimal point? Aren't gradients supposed to be zero at points where the maximum value is attained?
@ArizonaMathCamp · 5 years ago
I'm a little unclear about your question. There is no cost function in this video. We're *maximizing* the function f, so it's presumably not a cost function (we wouldn't be trying to maximize our cost). So let's say instead that f is a profit function that we're trying to maximize. Now you would be correct that the gradient should be the zero vector at a point that maximizes f -- *if* there are no constraints. But the essence of Kuhn-Tucker is to deal with constraints. Typically in a constrained problem the optimum value of the objective function *subject to a constraint* will be less than it could have been without the constraint -- in other words, the objective function could be increased if we were just considering the objective function alone (i.e., its gradient is *not* the zero vector). This is the central fact in constrained optimization.
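The point in this reply can be illustrated with a small made-up problem (my own example, not the one in the video): maximize f(x) = x1 + x2 subject to x1^2 + x2^2 <= 1. At the constrained maximizer the gradient of f is nonzero; instead it is a nonnegative multiple of the constraint gradient.

```python
import math

# Known constrained maximizer of f(x) = x1 + x2 on the unit disk:
# the constraint binds and x_hat = (1/sqrt(2), 1/sqrt(2)).
x_hat = (1 / math.sqrt(2), 1 / math.sqrt(2))

grad_f = (1.0, 1.0)                    # gradient of f at x_hat -- NOT the zero vector
grad_G = (2 * x_hat[0], 2 * x_hat[1])  # gradient of G(x) = x1^2 + x2^2 at x_hat

# Kuhn-Tucker at the optimum: grad f = lambda * grad G for some lambda >= 0.
lam = grad_f[0] / grad_G[0]
print(round(lam, 4))  # 0.7071 -- positive, since the constraint binds
print(all(abs(fi - lam * gi) < 1e-9 for fi, gi in zip(grad_f, grad_G)))  # True
```

Dropping the constraint, f could be increased without bound, which is exactly why its gradient need not vanish at the constrained optimum.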
@sbasu31ag · 5 years ago
@@ArizonaMathCamp Thanks, that cleared it up. And yes, a profit function, or more generally an objective function, would be the right term; I didn't pay much attention to it while typing. Sorry about that. What you're saying is that in constrained optimisation we might not always reach the unconstrained optimum but instead a value close to it that also lies in the feasible set, at which the gradient vector is still non-zero and points in the direction of greatest ascent. And this holds true for any number of variables, and whether we have only equality constraints or both equality and inequality constraints.
@cauchyschwarz3295 · 3 years ago
Has anyone understood why the gradients of the constraint functions G1, G2 need to be linearly independent? Suppose G1 = G2 and one inequality were G2 >= b2. Wouldn't the gradients be dependent then?
@ArizonaMathCamp · 3 years ago
I don't understand your question. Of course if the two gradients are the same then they're linearly dependent. However, if the *values* of the functions G1 and G2 are the same, that doesn't tell us anything.
@amiraazil4454 · 5 years ago
Thank you so much!
@markuswerner7271 · 5 years ago
For alpha, are there always two cases to consider in the Lagrange function, e.g., alpha = 0 and alpha greater than 0?
@ArizonaMathCamp · 5 years ago
I don't understand your question. If you can write it more carefully I'll try to give you an answer.
@markuswerner7271 · 5 years ago
@@ArizonaMathCamp We had an example of taking the first derivative of the Lagrange function and tried to find the optimal point, but we had the problem that alpha was still in the derivative. However, we had two cases, alpha = 0 and alpha greater than 0 -- why is alpha either 0 or greater than 0? (Sorry for the bad English, I am from Germany 😅; mathematical terms are still complex.)
@ArizonaMathCamp · 5 years ago
@@markuswerner7271 I assume you're referring to the example in Lecture 40A, where n=2 and m=3 (3 constraints), and that the "alphas" you're referring to are the three lambdas, the multipliers. I wasn't actually building the Lagrangian function here (although it *is* related to the Lagrangian function); I was simply demonstrating the relation among the gradients: that the objective gradient will be a nonnegative linear combination of the constraint gradients (evaluating all gradients at the solution vector). Every lambda will be nonnegative: the lambda has to be zero for a non-binding constraint; and for a binding constraint it can be positive or zero (in the example, both were positive). This is also described in Lecture 40B, from about 11:40 to 15:45. A zero multiplier for a binding constraint is shown in Lecture 40C from 12:25 to 17:20. An important feature of this video is that everything can be interpreted both geometrically and also algebraically/symbolically. It's very important to understand this parallel between the geometry and the algebra.
@markuswerner7271 · 5 years ago
@@ArizonaMathCamp OK, thanks, still trying to understand 😂. Alpha and beta were the Lagrange multipliers, that's right.
@caio868 · 4 years ago
Great video! I'm starting a Ph.D. in Economics at a top-ranked school and I was using the Simon & Blume textbook (a great textbook), but the intuition is much better in your video. Also, I have a question: I am also using Michael Carter's Foundations of Mathematical Economics, which is a bit more rigorous than the standard Simon & Blume textbook (and more rigorous than Sydsaeter's Further Mathematics for Economic Analysis). Since the course sequences in Micro, Macro, and Econometrics are all based on proof-based mathematics, which requires some knowledge of metric spaces, topology, and measure theory, why aren't Math Camps taught at the level of Carter's book? I'm worried that studying only Simon & Blume (or even your amazing videos) will not prepare me for the high-level math encountered in those course sequences. Thank you!
@ArizonaMathCamp · 4 years ago
You're right that one needs somewhat more than this. Two reasons math camps don't do more: they're short; and you don't really need *much* more unless you specialize in theory or econometrics. I'm not familiar with the Carter book. At UA we teach a semester course that comes after math camp; my notes from the years when I taught it are here: www.u.arizona.edu/~mwalker/econ519/519LectureNotes.htm
@olivertseng8466 · 4 years ago
Thank you professor!
@ArizonaMathCamp · 4 years ago
You're welcome!
@chiragraju821 · 3 years ago
Wish me luck on my Optimization quiz on Tuesday!
@ArizonaMathCamp · 3 years ago
OK ... crush it!!
@nickey0207 · 4 years ago
For anyone confused about why lambda_3 = 0: complementary slackness holds for the dual optimum when strong duality holds, thus lambda_3 = 0 when G_3(x_hat) is less than 0. (Fixed my typo, haha.)
@ArizonaMathCamp · 4 years ago
You are right when the RHS (b_3) is zero. More generally, lambda_3 will be zero if G_3(x_hat) is strictly less than b_3 -- i.e., if there is slack in this constraint. (It's not necessary to use duality here.)
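The complementary-slackness rule in this exchange is easy to check numerically (all constraint values, bounds, and multipliers below are hypothetical numbers, chosen only to illustrate):

```python
def complementary_slackness_ok(G_values, bounds, lambdas, tol=1e-9):
    """Check lambda_j >= 0, G_j(x_hat) <= b_j, and lambda_j * (b_j - G_j(x_hat)) = 0."""
    for Gj, bj, lj in zip(G_values, bounds, lambdas):
        slack = bj - Gj
        if slack < -tol or lj < -tol:  # primal or dual infeasibility
            return False
        if lj * slack > tol:           # positive multiplier on a slack constraint
            return False
    return True

# Constraints 1 and 2 bind; constraint 3 has slack (2 < 5), so lambda_3 must be 0.
print(complementary_slackness_ok([4.0, 7.0, 2.0], [4.0, 7.0, 5.0], [0.3, 1.2, 0.0]))  # True
print(complementary_slackness_ok([4.0, 7.0, 2.0], [4.0, 7.0, 5.0], [0.3, 1.2, 0.1]))  # False
```

This is exactly the "lambda_3 will be zero if G_3(x_hat) is strictly less than b_3" condition from the reply, stated as a computable test.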
@nickey0207 · 4 years ago
@@ArizonaMathCamp Thank you for your hard work, Prof. Walker.
@ohad157 · 4 years ago
Amazing
@pulltheskymusicgroup4475 · 3 years ago
🇹🇿🇹🇿🇹🇿🇹🇿🇹🇿
@wanjadouglas3058 · 5 years ago
I got so lost all along
@ArizonaMathCamp · 5 years ago
I'm sorry to hear that. At what part(s) of the video did you have trouble?
@wanjadouglas3058 · 5 years ago
@@ArizonaMathCamp I think it was the way you drew the constraints into a space. I could follow up to the convergence, but I got lost when you introduced the gradients. Plus, I guess I should have had a stronger foundation in KKT and in plotting the constraints... I'm more into calculations than thinking about dimensional spaces.
Lecture 40(B): The Kuhn-Tucker Conditions and Theorem
23:37
Arizona Math Camp
19K views
Lecture 40(C): The Kuhn-Tucker Conditions: An Example
19:27
Arizona Math Camp
10K views