Lecture 14 | Lagrange Dual Function | Convex Optimization by Dr. Ahmad Bazzi

35,552 views

Ahmad Bazzi

Comments: 30
@AhmadBazzi 5 years ago
Please subscribe to the channel to receive notifications about future videos 🙏🏻 🙏🏻 🙏🏻
@jeanishimwe2120 5 years ago
Thanks, Dr. You posted it at just the right time; it helps with understanding duality.
@AhmadBazzi 4 years ago
You are most welcome :)
@aleocho774 3 years ago
@AhmadBazzi Thanks a lot, sir.
@hbasu8779 2 years ago
In the two-way partitioning example at 24:25, when W + V is not PSD, how do you get the cost −infinity? I am asking because x can only take the values +1 or −1. If the answer is that, once the constrained problem becomes unconstrained thanks to the Lagrangian, x no longer comes from the feasible domain D of the original problem (which here takes on ±1 values) but from the Euclidean space R^n, then isn't the domain of the Lagrangian at 3:38 wrong? If dom L(·, ·, ·) = D × R^m × R^p, then in the two-way partitioning example the cost cannot be −infinity.
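A note on the standard resolution (following the usual Boyd–Vandenberghe treatment, which this lecture appears to track; V = diag(ν) is assumed): the domain D in the Lagrangian is the intersection of the domains of the objective and constraint functions, not the feasible set. Here f0(x) = x^T W x and h_i(x) = x_i^2 − 1 are defined on all of R^n, so D = R^n, the inner infimum runs over all of R^n, and a non-PSD W + V drives the quadratic to −infinity:

\[
g(\nu) = \inf_{x \in \mathbb{R}^n} \Big( x^\top W x + \sum_{i=1}^n \nu_i \big(x_i^2 - 1\big) \Big)
       = \inf_{x \in \mathbb{R}^n} x^\top \big(W + \operatorname{diag}(\nu)\big)\, x \;-\; \mathbf{1}^\top \nu .
\]

If \(W + \operatorname{diag}(\nu)\) has a negative eigenvalue with eigenvector \(u\), then taking \(x = t u\) and letting \(t \to \infty\) sends the quadratic term, and hence \(g(\nu)\), to \(-\infty\).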
@SaikSaketh 5 years ago
Thanks. Right on time, matching the lectures at my university.
@AhmadBazzi 5 years ago
You are most welcome!!
@iamdrfly A year ago
At 31:52, should it be sup{y^T x − ||x||} instead of ... + ||x||?
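For reference (the standard definition, independent of what the slide shows): the conjugate is \(f^*(y) = \sup_x \big(y^\top x - f(x)\big)\), with a minus sign. For \(f(x) = \|x\|\) this gives

\[
f^*(y) = \sup_x \big( y^\top x - \|x\| \big) =
\begin{cases} 0, & \|y\|_* \le 1, \\ +\infty, & \text{otherwise,} \end{cases}
\]

where \(\|\cdot\|_*\) denotes the dual norm.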
@positivobro8544 2 years ago
At 20:15, isn't the condition actually c − λ − A^T ν ≥ 0? Because if all entries of the gradient of the Lagrangian are positive and the feasible x's are constrained to be nonnegative, then you can only increase the function, so the minimizing x is x = 0, giving the same dual-function value as in the = 0 case. Right?
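For context, a sketch assuming 20:15 shows the standard-form LP \(\min\, c^\top x\) s.t. \(Ax = b,\ x \succeq 0\) (the usual example at this point; the slide may differ): once \(x \succeq 0\) is dualized with the multiplier \(\lambda\), the inner infimum runs over all \(x \in \mathbb{R}^n\), not just \(x \succeq 0\), so any nonzero linear coefficient can be driven to \(-\infty\) along some direction:

\[
g(\lambda, \nu) = \inf_{x \in \mathbb{R}^n} \Big( -b^\top \nu + (c + A^\top \nu - \lambda)^\top x \Big)
= \begin{cases} -b^\top \nu, & c + A^\top \nu - \lambda = 0, \\ -\infty, & \text{otherwise.} \end{cases}
\]

The \(\ge 0\) reasoning would apply if the infimum were restricted to \(x \succeq 0\), but then there would be no need to dualize that constraint in the first place.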
@mohammadsarkhosh935 2 years ago
At 25:11, why isn't inf x^T(W+V)x found from the gradient in x, which equals 2x^T(W+V)? And where did λ come from when you wrote λ_min?
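On the \(\lambda_{\min}\) point, a sketch assuming 25:11 is the two-way partitioning example discussed above: setting the gradient \(2(W+V)x\) to zero only finds stationary points, which are not minima when \(W+V\) has a negative eigenvalue. Instead, the Rayleigh-quotient inequality

\[
x^\top (W+V)\, x \;\ge\; \lambda_{\min}(W+V)\, \|x\|_2^2
\]

shows that \(\inf_x x^\top (W+V)\,x = 0\) when \(W+V \succeq 0\) and \(-\infty\) otherwise. In particular, the choice \(\nu = -\lambda_{\min}(W)\,\mathbf{1}\) makes \(W + \operatorname{diag}(\nu) \succeq 0\) and yields the lower bound \(p^\star \ge n\,\lambda_{\min}(W)\); that is where \(\lambda_{\min}\) comes from.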
@localcarbuyer8704 4 years ago
Well described!!!
@AhmadBazzi 3 years ago
Glad you think so!
@jhonatancollazosramirez8513 2 years ago
For the exercise at 15:35, how would it be programmed in MATLAB? Thank you.
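A minimal MATLAB sketch of the usual pattern (hypothetical data, not the actual exercise at 15:35): evaluate the dual function at a fixed multiplier by minimizing the Lagrangian with fminunc.

    % Hypothetical problem: min x'*x  subject to  A*x = b   (data assumed)
    A  = [1 2; 3 4];
    b  = [5; 6];
    nu = [0.1; -0.2];                    % a fixed dual variable
    L  = @(x) x.'*x + nu.'*(A*x - b);    % Lagrangian L(x, nu)
    xm = fminunc(L, zeros(2, 1));        % unconstrained minimizer of the Lagrangian
    g  = L(xm)                           % dual value g(nu), a lower bound on p*

Any fixed nu gives a valid lower bound; maximizing g over nu recovers the best one.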
@hbasu8779 2 years ago
Why is g(λ, μ) concave at 6:46?
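The standard reason, independent of the slide: for each fixed \(x\), the Lagrangian

\[
L(x, \lambda, \nu) = f_0(x) + \sum_i \lambda_i f_i(x) + \sum_j \nu_j h_j(x)
\]

is affine in \((\lambda, \nu)\), and \(g(\lambda, \nu) = \inf_{x \in D} L(x, \lambda, \nu)\) is a pointwise infimum of affine functions, which is always concave, even when the original problem is not convex.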
@AK-yf4dp 3 years ago
Thank you, Dr. Ahmad, for an amazing course!! I have a question: in the MATLAB example, the value of x obtained by the fminunc solver was 0.1574, which is clearly not the optimal value (x star). What is the reason for that?
@danmcgloven8169 3 years ago
From my experience, it's better to leave a timestamp with the exact minute so you get faster responses from Ahmad. Notice how he responds to timestamped questions.
@santhuathidi5987 4 years ago
Hello sir, at 34:49, how does f0*(y) = sum(exp(yi − 1)) follow from f0(x)? Please explain.
@aleocho774 3 years ago
The conjugate function of sum(xi log xi) is sum(exp(yi − 1)).
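A quick coordinate-wise derivation of that reply (a standard result): maximize \(y_i x_i - x_i \log x_i\) over \(x_i > 0\),

\[
\frac{d}{dx_i}\big( y_i x_i - x_i \log x_i \big) = y_i - \log x_i - 1 = 0
\;\Longrightarrow\; x_i = e^{y_i - 1},
\]

and substituting back gives \(y_i e^{y_i - 1} - e^{y_i - 1}(y_i - 1) = e^{y_i - 1}\), so \(f_0^*(y) = \sum_i e^{y_i - 1}\).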
@saumyachaturvedi9065 2 years ago
Thanks, sir. How should one proceed when there is more than one Lagrange multiplier, especially in MATLAB? How do we find the optimal values of the Lagrange multipliers in that case?
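One common recipe, sketched here with made-up data (not from the lecture): maximize the dual by gradient ascent, projecting λ onto λ ≥ 0; at the Lagrangian minimizer, the gradient of g is simply the constraint residual.

    % Hypothetical problem: min x'*x  s.t.  a'*x <= 1 (lam),  c'*x = d (nu)
    a = [1; 2]; c = [3; 1]; d = 2;            % assumed data
    lam = 0; nu = 0; alpha = 0.05;            % multipliers and step size
    for k = 1:500
        x   = -(lam*a + nu*c)/2;              % argmin_x L: solves 2x + lam*a + nu*c = 0
        lam = max(0, lam + alpha*(a.'*x - 1)); % ascent on g, projected to lam >= 0
        nu  = nu + alpha*(c.'*x - d);         % ascent on g (nu is unconstrained)
    end
    [lam, nu]                                 % approximately dual-optimal multipliers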
@chrisliou2323 4 years ago
Hi, at 8:39 and 8:45, did you mean "upper bounded by" rather than "lower bounded by"?
@AhmadBazzi 4 years ago
Yes, you are right, Chris. Thanks for pointing this out!!
@ArmanAli-ww7ml 2 years ago
What happens if we expand the infimum term with the Lagrangian function inside?
@thomasjefferson6225 A year ago
Anyone know why he converted x ≥ 0 to −x ≤ 0?
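A likely explanation, assuming this refers to rewriting the problem in standard form: the standard form used in the lecture states inequality constraints as \(f_i(x) \le 0\), so \(x \ge 0\) must be rewritten as \(-x \le 0\) before a nonnegative multiplier \(\lambda\) can be attached, which produces the \(-\lambda^\top x\) term in the Lagrangian.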
@sepidehghasemi8631 2 years ago
Thank you for the amazing course; now I understand better 😊 I need help with one example from my optimization course. Can you help me, please?
@momofrozen4431 5 years ago
Can you please explain why the derivative of v^T(Ax − b) is A^T v?
@AhmadBazzi 5 years ago
Hey Momo, the derivative of your expression is the same as the derivative of v^T A x, since v^T b is independent of x. So let's deal with v^T A x: write the expression as a summation, differentiate with respect to the kth entry of x, x_k, and finally stack all the derivatives one on top of the other.
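Written out, that recipe gives:

\[
v^\top A x = \sum_{k}\Big(\sum_{i} v_i A_{ik}\Big) x_k
\quad\Longrightarrow\quad
\frac{\partial}{\partial x_k}\, v^\top A x = \sum_i v_i A_{ik} = (A^\top v)_k ,
\]

and stacking over \(k\) yields \(\nabla_x\, v^\top (Ax - b) = A^\top v\).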
@sepidehghasemi8631 2 years ago
If I join your group, can I send you my questions about optimization?
@imtryinghere1 2 years ago
Is it just me, or does he go through the MATLAB code really fast? I guess because I'm only a Python scrub, it's a bit fast for me to follow the code...
@ehsaneshaghi3613 3 years ago
There are too many distracting advertisements!
@danmcgloven8169 3 years ago
Why you hating, man? I think it's pretty okay if there are ads when Ahmad dedicates his time; he deserves to get rewarded.