Finally someone who explains everything slowly and clearly. This is by far the best KT explanation on YouTube.
@averilprost9407 4 years ago
It's been 20 minutes and I am still amazed at that calligraphed F
@劉彥均-w6g 3 years ago
This video really saved me... my professor just started calculating without any explanation, so I searched online and ended up here. A conceptual explanation is essential for me before starting the calculations!!!
@ArizonaMathCamp 3 years ago
Yes, intuition is critical. You might like this story about Nobel laureate Richard Feynman: www.u.arizona.edu/~mwalker/501BReadings/FeynmanOnExamples.pdf
@weihaopan 6 years ago
Nice explanation of KKT conditions, thanks for your contribution!
@ArizonaMathCamp 6 years ago
Thanks! More to come.
@dmitrystikheev3384 4 years ago
Greetings from Moscow! Probably the best introduction to the K-T conditions I've listened to so far. Thanks a lot, your contribution is priceless! I wish there were more professors like you in the universities. Even the book on mathematics for economists I'm currently studying does not delve into the geometric interpretation of the problems considered. Your approach builds the intuition, while most courses focus on the mechanics. Thanks again!
@ArizonaMathCamp 4 years ago
Thanks for the positive comments. You've described exactly what the lectures are intended to do: to help people understand the concepts by developing the analytical rigor on a foundation of geometrical intuition.
@danielkrupah 2 years ago
Maths needs to be taught by experienced and old professors. This is the best video so far on optimization. I am currently doing Math Camp at one of the universities in the USA; it's just the bomb.
@ArizonaMathCamp 2 years ago
Who you calling old?? I'm glad this was helpful. Thanks for the positive review.
@danielkrupah 2 years ago
@@ArizonaMathCamp Sorry for the choice of words. I meant it positively. What I meant to say is experienced professors with long, strong backgrounds.
@ArizonaMathCamp 2 years ago
@@danielkrupah That's OK, I was just joking. Hey, I *am* old, can't deny it. But I can think young.
@ptitpapillon 5 years ago
Amazing explanation! There should be more professors like you in the maths departments!
@ArizonaMathCamp 5 years ago
Thanks! Glad you liked it.
@faresmeier9596 3 years ago
yes exactly igo
@orglce13 4 years ago
William Karush has left the chat
@halneufmille 3 years ago
First I laughed out loud. Then I cried once I realized how nerdy I have become.
@vytran3276 5 years ago
This is very clear and easy to understand. Thank you so much!
@ArizonaMathCamp 5 years ago
Glad you liked it. Thanks for the good feedback.
@marianavillabona2022 4 years ago
One of the best videos I've seen! You're very kind for giving us this study material, thank you! From Colombiaaa
@ArizonaMathCamp 4 years ago
Colombia! I love Cartagena! One of my best students, 5 or 6 years ago, was from Colombia. Thanks for the positive feedback. Glad the video was helpful.
@purplerain5305 4 years ago
Hello! Can you please help me out? I couldn't catch what the Professor said at about 3:55: "so that we're on the boundary of, for example, the non-negative quadrant or the non-negative _______" What was in that blank?
@ArizonaMathCamp 4 years ago
@@purplerain5305 "... the nonnegative quadrant or the nonnegative orthant."
@purplerain5305 4 years ago
@@ArizonaMathCamp Thank you so much!
@kirar2004 2 years ago
Thanks for this nice geometric explanation. The concept has become very clear to me. Thanks again!
@ArizonaMathCamp 2 years ago
Glad it was helpful!
@hhht-vc2it a year ago
Thank you so much for uploading these helpful videos. Much appreciated.
@ArizonaMathCamp a year ago
Very glad they're helpful. Thanks for the positive feedback.
@Ricatellez682 3 months ago
love that.... best explanation I've found so far.
@cabdallahahmad7288 2 years ago
You really explained it in the best way, which I have not seen before. Thanks!
@ArizonaMathCamp 2 years ago
I'm glad you found it helpful.
@williamcantin5871 2 years ago
I had some difficulty understanding it. Wow! Seeing your video really helped me to understand this! Thank you very much, sir!
@ArizonaMathCamp 2 years ago
That's great to hear! Glad it was helpful.
@talshaffar4001 5 years ago
The beginning of the explanation was very clear -- but I got lost at 10:21 when you started talking about the feasible vectors that satisfy all the constraints. I'm not clear on how the vectors are related. What background am I missing? Thanks for doing this!!!
@ArizonaMathCamp 5 years ago
"Vector" is just a synonym for "point." We want to know which are the points (or the *decisions*) that satisfy all the constraints. It's important to keep in mind that the constraint (the line, in the diagram) is just the level curve of a *function*. A point is feasible (i.e., satisfies the constraint) if it lies on the "correct" side of the constraint.
@지배받는지배자 4 years ago
I appreciate your nice explanation, from S.Korea!
@alexandralaw1476 4 years ago
This is literally the best video I have ever seen on this topic. Thank you for refreshing my memory on optimisation in such a gentle way; I may use it later in the course on dynamic optimisation in Macroeconomics. (Starting to read Stokey & Lucas Jr., which is so exciting to me!!!!)
@ArizonaMathCamp 3 years ago
Thanks! Glad you found it helpful! I'm going to do a lecture on a Stokey-Lucas-type example sometime soon.
@alexandralaw1476 3 years ago
Thank you so much professor! Can’t wait to see the new upload on recursive topics!
@sl-je5fg 5 years ago
Thank you very much, you are the best math teacher on YouTube fs
@ArizonaMathCamp 5 years ago
Thanks! And you're the most perceptive commenter! ;-)
@ameliali9489 3 years ago
My Maths Econ exam is tomorrow and I really wish that I could have had a maths prof like you. Thanks for making the effort to do these videos, and I do feel like you are my best maths teacher ever! Thanks Prof Mark.
@ArizonaMathCamp 3 years ago
Good luck Amelia! Ace that exam!
@ameliali9489 3 years ago
@@ArizonaMathCamp Dear Mark, thank you so much for your encouragement and excellent videos! I will continue to watch them even though my math econ course is finished. In fact, I was watching them like TV programmes and finished two episodes every day. They gave me hope!
@ArizonaMathCamp 3 years ago
@@ameliali9489 I'm glad they've been so helpful, Amelia. Good luck!
@skywalk5392 4 years ago
An excellent lecture for understanding the KKT conditions. Thank you so much for the lecture, professor!
@ArizonaMathCamp 4 years ago
Glad you liked it. Thanks for the good feedback.
@drushtisawant3284 3 years ago
Thank you so much, sir. I was struggling with the conceptual understanding of some theorems of mathematical economics, and this has definitely helped... thank you!
@ArizonaMathCamp 3 years ago
That's great, I'm really glad it's helped.
@philippe177 5 years ago
Thank you, sir. I understood it clearly.
@ArizonaMathCamp 5 years ago
Thanks! Glad it was helpful.
@ameliali9489 3 years ago
Amazing geometric representation!
@mollyxue8536 2 years ago
In the second example, is it m > n, after adding the additional constraint?
@ArizonaMathCamp 2 years ago
Yes. At about 18:55 I went from m=n=2 to m=3 and n=2 by adding a third constraint.
@thefiat18 5 years ago
@20:33, m should be greater than n, right? But it's written m < n.
@ArizonaMathCamp 5 years ago
m
@thefiat18 5 years ago
@@ArizonaMathCamp Thank you so much Professor for the clarification!
@Thejosiphas 5 years ago
How have you done all this while writing backwards? I'm shocked and confused.
@ArizonaMathCamp 5 years ago
Writing on a glass screen, in the normal way, with camera on the other side, and then flipping the resulting video via software.
@qianyue4764 4 years ago
@@ArizonaMathCamp brilliant
@jonatanwestholm 4 years ago
If you watch a few of these, you might notice that an unusual share of the lecturers write with their "left" hand, and also have wedding rings on their right hands.
@yoli6373 3 years ago
One would think he is in the mirror!
@huidezhu7566 4 years ago
Amazing explanation!!! Many thanks!
@ArizonaMathCamp 4 years ago
Thanks for the nice comment. I'm glad this was helpful.
@davi37005 4 years ago
Thank you, professor! You just saved my life 💛
@ArizonaMathCamp 4 years ago
Wow, that's great! Maybe I'll retitle the lecture "Kuhn-Tucker saves lives." :) I'm glad it was helpful.
@penarc2784 3 years ago
Wonderful class! Thanks. Also, a little question: what's the difference between the KT conditions and the KKT conditions?
@ArizonaMathCamp 3 years ago
Just slightly different names for the same conditions. Karush's independent discovery of the conditions didn't come to light until decades after Kuhn and Tucker's work. It's just habit for some people to say KT instead of KKT, but he really should be included.
@penarc2784 3 years ago
@@ArizonaMathCamp Thanks a lot!
@vinseiroja 5 years ago
Wonderful class! Thanks!
@ArizonaMathCamp 5 years ago
Thanks for watching and for the positive feedback.
@xba2007 2 years ago
Question: it's a detail, but in some cases it may be important: why were the inequalities x_i >= 0 not expressed explicitly as G^j(x) constraint functions, and instead left implicit in the set x \in R^2_+?
@ArizonaMathCamp 2 years ago
The formally correct (but conceptually unrevealing) answer is that treating the variables in this distinct-from-the-constraints way when writing the first-order conditions gives you the actual FOC that can be proved to characterize the optimal solution. A better answer follows from the geometry in this lecture: the FOC that you get from doing things this way are the exact analytical description of the relation between the objective-function gradient and the constraint gradients that must hold at an optimal solution. This is why the geometry is so important for understanding the KKT Conditions. You can do things the way you suggest, but then the FOC don't follow so straightforwardly.
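For reference, here is a sketch of the first-order conditions being described, for maximizing $f(x)$ subject to $G^j(x) \le b_j$ and $x \ge 0$, using the Lagrangian $L(x,\lambda) = f(x) - \sum_{j=1}^{m} \lambda_j\,(G^j(x) - b_j)$ (the notation is assumed, not copied from the lecture):
$$\frac{\partial L}{\partial x_i} \le 0, \quad \hat{x}_i \ge 0, \quad \hat{x}_i\,\frac{\partial L}{\partial x_i} = 0 \qquad (i = 1,\dots,n),$$
$$G^j(\hat{x}) \le b_j, \quad \hat{\lambda}_j \ge 0, \quad \hat{\lambda}_j\,\bigl(G^j(\hat{x}) - b_j\bigr) = 0 \qquad (j = 1,\dots,m).$$
Folding the $x_i \ge 0$ constraints into the $G$'s would instead introduce $n$ additional multipliers; the resulting conditions are equivalent but less transparent.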
@salitherin 4 years ago
Thank you for this awesome video! I have a follow-up question: why is m < n at 20:33? I'm still a little confused. I thought when you added G3 that was another constraint, so we have 3 of them (G1, G2, and G3) and two variables (x1 and x2). Am I missing something?
@ArizonaMathCamp 4 years ago
You're right, there are 3 constraints. When I wrote m < n a minute or so earlier, it was to point out that with equation constraints we have to have m < n (we can't have m > n) and to show that with inequality constraints we *don't* have to have m < n.
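A sketch of the counting argument behind this (my formulation, not a quote from the lecture): at a solution $\hat{x} \in \mathbb{R}^n$ only the binding constraints enter the gradient condition,
$$\nabla f(\hat{x}) = \sum_{j \in B} \hat{\lambda}_j\,\nabla G^j(\hat{x}), \qquad B = \{\, j : G^j(\hat{x}) = b_j \,\},$$
and at most $n$ of those gradients can be linearly independent. Equation constraints all bind, so we can't have more independent equations than variables; inequality constraints needn't bind, so $m$ can exceed $n$ (here $m = 3$, $n = 2$, and any slack constraint simply gets $\hat{\lambda}_j = 0$).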
@salitherin 4 years ago
@@ArizonaMathCamp Thank you so much for the reply :)
@nemathassnain8522 4 years ago
Amazing as always!
@googlelee7197 4 years ago
I have a question. How do you know the gradient vector's direction for the constraints?
@ArizonaMathCamp 4 years ago
That's an important question. The gradient is the vector of partial derivatives of the constraint function. Those derivatives, at the point in question, tell you the direction and length of the gradient vector. For example, for the constraint 2x_1 + 3x_2 = 12, the gradient is the vector (2,3) -- the same at every point, because the constraint is linear. For many of the constraints I've drawn in these lectures, I just have in mind the direction I want the gradient to point -- I haven't specified a particular function for the constraint, so you can make up your own function to correspond to what I've drawn.
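To see how the direction can vary from point to point, take a made-up nonlinear constraint (mine, not the video's):
$$G(x_1,x_2) = x_1^2 + x_2^2, \qquad \nabla G(x) = (2x_1,\; 2x_2),$$
so the gradient is $(2,0)$ at $(1,0)$ but $(0,6)$ at $(0,3)$: it always points radially outward, perpendicular to the circular level curve through the point, in the direction in which $G$ increases fastest.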
@sbasu31ag 5 years ago
Why is there a gradient vector for the cost function at the optimal point? Aren't gradients supposed to be zero at the points where the maximum value is attained?
@ArizonaMathCamp 5 years ago
I'm a little unclear about your question. There is no cost function in this video. We're *maximizing* the function f, so it's presumably not a cost function (we wouldn't be trying to maximize our cost). So let's say instead that f is a profit function that we're trying to maximize. Now you would be correct that the gradient should be the zero vector at a point that maximizes f -- *if* there are no constraints. But the essence of Kuhn-Tucker is to deal with constraints. Typically in a constrained problem the optimum value of the objective function *subject to a constraint* will be less than it could have been without the constraint -- in other words, the objective function can be increased if we're just considering the objective function alone (i.e., its gradient is *not* the zero vector). This is the central fact in constrained optimization.
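A one-variable sketch of this central fact (the numbers are invented for illustration):
$$\max\ f(x) = 10x - x^2 \quad \text{s.t.} \quad x \le 3.$$
Without the constraint, $f'(x) = 10 - 2x = 0$ gives $x = 5$; with it, the solution is $\hat{x} = 3$, where $f'(3) = 4 \ne 0$. The leftover slope is exactly the multiplier: with $G(x) = x$ and $b = 3$, the condition $f'(\hat{x}) = \lambda\,G'(\hat{x})$ gives $\lambda = 4 > 0$.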
@sbasu31ag 5 years ago
@@ArizonaMathCamp Thanks, that cleared it up. And yes, a profit function, or more generally an objective function, would be the right term; I didn't pay much attention to it while typing. Sorry about that. What you're saying is that in constrained optimisation we might not always reach the unconstrained optimum, but instead the best value that lies in the feasible set, at which the gradient vector is still non-zero and points in the direction of steepest ascent. And this holds true for any number of variables, and whether we have only equality constraints or both equality and inequality constraints.
@cauchyschwarz3295 3 years ago
Has anyone understood why the gradients of the constraint functions G1, G2 need to be linearly independent? Suppose G1 = G2 and one inequality were G2 >= b2. Wouldn't the gradients be dependent then?
@ArizonaMathCamp 3 years ago
I don't understand your question. Of course if the two gradients are the same then they're linearly dependent. However, if the *values* of the functions G1 and G2 are the same, that doesn't tell us anything.
@amiraazil4454 5 years ago
Thank you so much!
@markuswerner7271 5 years ago
Are there always two solutions for alpha to plug into the Lagrange function, e.g. alpha = 0 and alpha greater than 0?
@ArizonaMathCamp 5 years ago
I don't understand your question. If you can write it more carefully I'll try to give you an answer.
@markuswerner7271 5 years ago
@@ArizonaMathCamp We had an example of taking the first derivative of the Lagrange function and tried to find the point, but we had the problem that there was still alpha in the derivative. However, we had 2 cases, alpha = 0 and alpha greater than 0. Why is alpha either 0 or greater than 0? (sorry for bad English, I am from Germany 😅, mathematical terms are still complex)
@ArizonaMathCamp 5 years ago
@@markuswerner7271 I assume you're referring to the example in Lecture 40A, where n=2 and m=3 (3 constraints), and that the "alphas" you're referring to are the three lambdas, the multipliers. I wasn't actually building the Lagrangian function here (although it *is* related to the Lagrangian function); I was simply demonstrating the relation among the gradients: that the objective gradient will be a nonnegative linear combination of the constraint gradients (evaluating all gradients at the solution vector). Every lambda will be nonnegative: the lambda has to be zero for a non-binding constraint; and for a binding constraint it can be positive or zero (in the example, both were positive). This is also described in Lecture 40B, from about 11:40 to 15:45. A zero multiplier for a binding constraint is shown in Lecture 40C from 12:25 to 17:20. An important feature of this video is that everything can be interpreted both geometrically and also algebraically/symbolically. It's very important to understand this parallel between the geometry and the algebra.
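A numerical sketch of that gradient relation (the vectors are invented, not read off the lecture's diagram): suppose at the solution we have $\nabla f(\hat{x}) = (4,2)$, $\nabla G^1(\hat{x}) = (1,0)$, $\nabla G^2(\hat{x}) = (0,1)$, and $G^3$ is not binding. Then
$$\nabla f(\hat{x}) = \hat{\lambda}_1 \nabla G^1(\hat{x}) + \hat{\lambda}_2 \nabla G^2(\hat{x}) + \hat{\lambda}_3 \nabla G^3(\hat{x})$$
holds with $\hat{\lambda}_1 = 4$, $\hat{\lambda}_2 = 2$, $\hat{\lambda}_3 = 0$: a nonnegative linear combination, with a zero multiplier on the non-binding constraint.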
@markuswerner7271 5 years ago
@@ArizonaMathCamp OK, thanks, still trying to understand 😂. Alpha and beta were the Lagrange multipliers, that's right.
@caio868 4 years ago
Great video! I'm starting a Ph.D. in Economics at a top-ranked school and I was using the Simon & Blume textbook (great textbook), but the intuition is much better in your video. Also, I have a question: I am also using Michael Carter's Foundations of Mathematical Economics, which is a little more rigorous than the standard Simon & Blume textbook (and more rigorous than Sydsaeter's Further Mathematics for Econ). Since the sequences of Micro, Macro, and Econometrics are all based on proof-based mathematics, which requires some knowledge of metric spaces, topology, and measure theory, why aren't Math Camps taught at Michael Carter's level? I'm worried that only studying Simon & Blume (or even your amazing videos) will not prepare me for the high-level math encountered in those course sequences. Thank you!
@ArizonaMathCamp 4 years ago
You're right that one needs somewhat more than this. Two reasons math camps don't do more: they're short; and you don't really need *much* more unless you specialize in theory or econometrics. I'm not familiar with the Carter book. At UA we teach a semester course that comes after math camp; my notes from the years when I taught it are here: www.u.arizona.edu/~mwalker/econ519/519LectureNotes.htm
@olivertseng8466 4 years ago
Thank you professor!
@ArizonaMathCamp 4 years ago
You're welcome!
@chiragraju821 3 years ago
Wish me luck for my Optimization quiz on Tuesday
@ArizonaMathCamp 3 years ago
OK ... crush it!!
@nickey0207 4 years ago
For whoever is confused about why lambda_3 = 0: complementary slackness holds at the dual optimum when strong duality holds, thus lambda_3 = 0 when G_3(x_hat) is less than 0. (fixed my typo, haha)
@ArizonaMathCamp 4 years ago
You are right when the RHS (b_3) is zero. More generally, lambda_3 will be zero if G_3(x_hat) is strictly less than b_3 -- i.e., if there is slack in this constraint. (It's not necessary to use duality here.)
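In symbols, the general complementary-slackness condition being stated (standard form, with the right-hand side $b_3$ kept general):
$$\hat{\lambda}_3 \ge 0, \qquad G_3(\hat{x}) \le b_3, \qquad \hat{\lambda}_3\,\bigl(G_3(\hat{x}) - b_3\bigr) = 0,$$
so any slack in the constraint, $G_3(\hat{x}) < b_3$, forces $\hat{\lambda}_3 = 0$, whether or not $b_3 = 0$.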
@nickey0207 4 years ago
@@ArizonaMathCamp Thank you for your hard work, Prof. Walker.
@ohad157 4 years ago
Amazing
@pulltheskymusicgroup4475 3 years ago
🇹🇿🇹🇿🇹🇿🇹🇿🇹🇿
@wanjadouglas3058 5 years ago
I got so lost all along
@ArizonaMathCamp 5 years ago
I'm sorry to hear that. At what part(s) of the video did you have trouble?
@wanjadouglas3058 5 years ago
@@ArizonaMathCamp I think it's the way you drew the constraints into a space. I could understand the convergence, but I got lost when you introduced the gradients. Plus I guess I should have had a stronger foundation in KKT and plotting the constraints.... I'm more into calculations than thinking about dimensional spaces.