lol, I went to 3Blue1Brown to see if Grant had any videos explaining what the Lagrange multiplier and Lagrangians are.... seeing none, I head over to Khan Academy... and Grant is teaching the lesson
@Jurgan6, 4 years ago
Two years later, I did the exact same thing.
@brandontay2053, 3 years ago
@@Jurgan6 2 months later, here I am, having done the same thing :')
@morancium, 3 years ago
@@brandontay2053 2 weeks later, me too!!
@YashPatel-vt8or, 3 years ago
me too
@morancium, 3 years ago
@@YashPatel-vt8or which college bro 😂
@sjwang3892, 3 years ago
Went from Constrained Optimization Introduction to this video. Absolutely love the clear explanation w/ the graphs! No idea why other materials have to make it so hard to understand.
@technosapien330, 9 months ago
My theory is they either don't actually understand the topic, or they are gate-keeping
@masterchief8646, 4 years ago
good Lord this video brought so much understanding to the Lagrange multiplier it's insane. God bless you Sir
@dirkjensen935, 2 years ago
Needed to pick up some basic know-how about Lagrangians in order to work through a proof regarding partition functions. And I was worried it was going to take me forever, considering the other texts I have aren't particularly clear and I didn't cover Lagrangians in undergrad. But oh my, this series is short, snappy, to the point and intuitive. Your tutorials are timeless and a gift to humanity. Thank you.
@tunim4354, 8 years ago
This is important in economics. It's one of the major concepts in real business cycle theory.
@jairjuliocc, 4 years ago
I know I'm a little late, but can you explain more?
@tunim4354, 4 years ago
@@jairjuliocc I was talking about the famous Real Business Cycle model in macroeconomics. When you are working with factors of production like labor and capital and you need utility maximisation in a single-period RBC model, the first-order condition equations for capital and labor need a Lagrange multiplier. If you are not a student of finance or economics, these will go over your head. And if you have studied macroeconomics, then these will be the most basic thing you learn.
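A stripped-down sketch of what that looks like (a generic one-period household problem, not the full RBC model; U, w, r and k below are placeholders I'm introducing, with c consumption, l leisure and 1 − l labor):

\[ \max_{c,\,l} U(c,l) \quad \text{s.t.} \quad c = w(1-l) + rk, \qquad \mathcal{L} = U(c,l) + \lambda\big[w(1-l) + rk - c\big] \]
\[ \partial\mathcal{L}/\partial c = 0 \Rightarrow U_c = \lambda, \qquad \partial\mathcal{L}/\partial l = 0 \Rightarrow U_l = \lambda w \quad\Rightarrow\quad U_l/U_c = w. \]

Here λ is the marginal utility of an extra unit of resources, the same "value of relaxing the constraint" reading of the multiplier discussed in this series.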
@Leo-tf3rw, 3 years ago
@@tunim4354 wow you replied after 4 years
@hbbh, 3 years ago
@@Leo-tf3rw AHAHHAHAH he did
@hbbh, 3 years ago
That person probably already finished college
@leeris19, 4 months ago
Last time I checked I was studying how to minimize the Optimum Margin Classifier for Support Vectors; now I am here. I don't know how, but I love it.
@mehdij9494, 6 years ago
I have known Lagrange optimization for a long time. But NOW I can claim to understand it perfectly! Thank you so much!
@DefinitelyNotNhanTho, 6 years ago
9:50 I believe what you meant was “let’s pause and ponder...” right ? Yeah, you can’t fool us, we know it was you lecturing, 3Blue1Brown.
@phil97n, 1 year ago
Many thanks! I only learned about Lagrange multipliers yesterday, and it's been rather difficult to understand just exactly what it is even though the math makes sense - your video clarified it for me. Thanks again
@rikenm, 8 years ago
It's a good refresher. Thanks. I would like to request more advanced math courses. You are very good at teaching. I watched your linear algebra playlist and also subscribed to your YouTube channel (3Blue1Brown). It's awesome. How about abstract algebra, or even number theory? Thanks
@justinward3679, 8 years ago
Riken Maharjan I second this!
@zes7215, 6 years ago
no such thing as gx or not
@ThePiMan0903, 2 years ago
Thank you Khan Academy!
@SuperIdiotMan00, 7 years ago
"Hours of Labor and Tons of Steel". That sounds like a rejected thrash metal album.
@christianaustin782, 6 years ago
SuperIdiotMan00 that honestly made me laugh out loud
@alekseivoronov3763, 5 years ago
man, that's what I call a joke
@Ferdinaand, 5 years ago
lmaooo
@pritomroy2465, 4 years ago
9:31 I thought most of the things in math come from nowhere until I found your videos.
@alexanderherbertkurz, 6 years ago
thanks a lot, great video ... I watched a few videos on Lagrange multipliers and this is the best so far ... it would be great if there were links to the previous and next video in the series
@whetstoneguy6717, 3 years ago
I totally agree.
@Skandalos, 1 year ago
The voice sounds familiar. Is this the guy from the 3blue1brown channel? Anyway, this is very well explained.
@dionsilverman4195, 4 years ago
How do we know that when the gradients are parallel, it's an extremum of f subject to the constraint g(x,y), rather than an inflection point? For example, extremising the paraboloid f(x,y) = x² + y² subject to y = 2x³ + 1. The gradients are parallel at (0,1), but this does not extremise the function f subject to the constraint g(x,y). Also, can I request a video on Lagrange multipliers with multiple constraints? This is much harder to find. I'm particularly interested in its use in deriving the Boltzmann distribution by maximising the number of microstates subject to constant molecule number and total energy. Also, a video on how this relates to Lagrangian or Hamiltonian mechanics would be fantastic and a common application, I think.
@abdullaalmosalami, 3 years ago
Woah, what! I was not expecting that lambda had some meaning! Oh, why didn't my Calc 3 classes show me this? I don't even believe this was in my Calc 3 textbook, or perhaps it was buried in some of the problems at the end of the Lagrange multiplier section.
@michaeljpchen6469, 7 years ago
Really helped me get a thorough understanding
@franks.6547, 6 years ago
Wouldn't we suspect, just from looking at the parallel gradients of R and B, that for every small increase of B you get λ times an increase of R? I mean something like λ = |grad R|/|grad B| = dR/dB on a curve perpendicular to the two tangent contour lines - same as Anton Geraschenko says below, but more visually intuitive, I think. (I admit that you still have to believe that any variation of h and s should be along that perpendicular curve, but that is how you keep the R and B contours tangent to each other.)
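One way to make that intuition precise without the perpendicular-curve picture (a standard chain-rule argument; R*(b) and (h*(b), s*(b)) are notation I'm introducing for the maximized revenue and the optimal point at budget level b, so that B(h*(b), s*(b)) = b and ∇R = λ∇B there):

\[ \frac{dR^*}{db} = \nabla R \cdot \frac{d(h^*,s^*)}{db} = \lambda\, \nabla B \cdot \frac{d(h^*,s^*)}{db} = \lambda\, \frac{d}{db}\, B\big(h^*(b), s^*(b)\big) = \lambda \cdot 1 = \lambda. \]

So, to first order, one extra unit of budget buys λ extra units of maximum revenue, no matter which direction the optimal point actually moves.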
@uvenga, 2 years ago
The one who cannot learn is the one who doesn't want to 💯
@umountable, 4 years ago
How do I find "the previous video"? There is no playlist linked to the video.
@kawhiknot1016, 5 years ago
Which playlist do these constrained optimization topics belong to?
@xiaoweidu4667, 4 years ago
this is a fantastic point!
@liabraga4641, 7 years ago
So elucidating
@arslanhojiyev5996, 4 years ago
If it does not ask for the maximum (or minimum), how can you know it is indeed the maximum (or minimum) value???
@supreme84x, 5 years ago
Wouldn't the contour of B be pointed down, from the concavity? Or is the multiplier acting as a "negative" scalar, flipping it around?
@atriagotler, 3 years ago
I love you, Grant.
@abhishek_sengupta, 4 years ago
wow...Thanx a lot!!
@Rockyzach88, 1 year ago
So is the Lagrange multiplier also considered an eigenvalue?
@indranilroy691, 5 years ago
At 4:30, why are we setting the gradient of L (the Lagrangian function) to 0? Can anyone please shed some light on this? Thanks!
@dsanjoy, 5 years ago
It was explained in a previous video. You have to calculate the gradients of the two functions, and they have to be proportional to each other. The proportionality constant is lambda.
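Spelling that out: setting the whole gradient of the Lagrangian to zero is just a compact way of writing both requirements at once (the standard unpacking, with b the constraint value):

\[ \mathcal{L}(x,y,\lambda) = f(x,y) - \lambda\big(g(x,y) - b\big), \qquad \nabla\mathcal{L} = 0 \iff \begin{cases} \nabla f = \lambda\,\nabla g & (x\text{- and } y\text{-equations}) \\ g(x,y) = b & (\lambda\text{-equation}) \end{cases} \]

The λ-component of ∇L recovers the constraint itself, which is why the single equation ∇L = 0 packages the whole system.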
@RajatGoel1, 6 years ago
6:47 REALLY!!!
@animeshpuzari8235, 5 years ago
thanks😁🏅
@miguelangelhernandezortiz7303, 2 years ago
Does anybody know a book on the history of multivariable calculus? Please help me.
@usamsersultanov689, 7 years ago
Finally I got it
@CalleTful, 2 years ago
Which playlist is this in?
@johncharles3907, 4 years ago
I think I need some more animations to understand this.
@Postermaestro, 7 years ago
Commenting to spread on the tubes!
@sam4395, 8 years ago
oh
@dagia3209, 5 years ago
I like it
@Majestic469, 5 years ago
Why can't you just solve for h or s in one function and substitute that expression into the other function? Then you can just set the derivative to 0 to find the optimum.
@MayankGoel447, 2 years ago
That's not always possible. Say your constraint function can't be solved for either variable, e.g. x·sin(y) + y·x^2 = 1. In that case, you can't express x in terms of y (or the other way around) and substitute it into f(x, y).
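This is also why numerical solvers take the Lagrangian route. A minimal sketch (assuming numpy/scipy are available; the objective x² + y² is a made-up stand-in, not the video's revenue function) that handles exactly that non-solvable constraint:

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return x**2 + y**2                      # placeholder objective

def g(v):
    x, y = v
    return x * np.sin(y) + y * x**2 - 1     # constraint written as g(x, y) = 0

# SLSQP handles equality constraints without ever solving g for x or y.
res = minimize(f, x0=[1.0, 1.0],
               constraints=[{"type": "eq", "fun": g}],
               method="SLSQP")
print(res.x, res.fun)
```

Under the hood, SQP-type methods work from a local model of the Lagrangian, so it is the same ∇f = λ∇g condition the video sets up, just iterated numerically.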
@anas.2k866, 3 years ago
I don't see why the two gradients are proportional.
@hectorbetancourt2854, 3 years ago
Because you made them so through the Lagrange multiplier. At most points the gradient of the function's contour is not proportional to the gradient of the constraint, but by assuming that they are (and that they relate to each other through the Lagrange multiplier), you can solve the system of equations and get all the points at which your assumption, that the two gradients are proportional, is true.
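A tiny made-up example of that procedure (not the numbers from the video): maximize f(x, y) = xy subject to g(x, y) = x + y = 4. Demanding proportional gradients gives

\[ \nabla f = \lambda\,\nabla g \;\Rightarrow\; y = \lambda,\ x = \lambda, \qquad x + y = 4 \;\Rightarrow\; x = y = 2,\ \lambda = 2, \]

so the proportionality assumption only holds at (2, 2), and that point is the constrained maximum, with f = 4.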
@elgodyr2683, 2 years ago
guys, can the lambda be equal to 0?
@goclbert, 2 years ago
Yeah but wouldn't that just mean our constraint has no impact on our ability to optimize R?
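A throwaway example (not from the video) where that happens: minimize f(x, y) = x² + y² subject to x + y = 0. The Lagrange conditions give

\[ 2x = \lambda,\quad 2y = \lambda,\quad x + y = 0 \;\Rightarrow\; \lambda = -\lambda \;\Rightarrow\; \lambda = 0,\ (x,y) = (0,0). \]

The unconstrained minimum already sits on the constraint, so nudging the constraint level changes the optimal value at rate λ = 0; the constraint isn't binding.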
@theacademyofgermanidealism6210, 4 years ago
the 3Blue1Brown guy
@buh357, 3 years ago
F**k, this is GOLD.
@tag_of_frank, 4 years ago
Argh, why no inequality constraints?
@mv3845, 1 year ago
❤
@Jmtri7, 4 years ago
While you maximize your revenue, I'll be maximizing my profit... ;)
@jullevv, 8 years ago
first like
@tadasvaitkevicius5799, 5 years ago
I don't like that he speaks so fast
@alaypal7484, 6 years ago
Omg
@shellycollorone3703, 8 years ago
why are you sending math I'm not needing?
@zayedalsuwaidi7697, 8 years ago
Maybe just don't click videos you do not need to see?
@shellycollorone3703, 8 years ago
They are sending a different kind of math I don't need.
@zayedalsuwaidi7697, 8 years ago
Shaelyne Collorone Okay, I understand this. But why don't you just go on the website www.khanacademy.com and look for what you need instead of clicking on videos you don't want to see?