What comes next? · 11:27 · 2 years ago
Course summary · 16:35 · 2 years ago
Gradient projection algorithm example · 10:59
Incomplete gradient projection · 17:33 · 2 years ago
Variants of the KKT conditions · 10:41 · 2 years ago
KKT conditions: some examples · 15:23 · 2 years ago
Convex cones and Farkas' lemma · 10:45 · 2 years ago
Fletcher-Reeves method · 12:36 · 2 years ago
Conjugate gradient method · 14:32 · 2 years ago
Comments
@Rook_i_e · 3 months ago
thanks
@abrahamtamru2917 · 3 months ago
Thank you for this! Where can one find the book (or lectures) you use for the explanation?
@pnachtwey · 3 months ago
Dividing the learning rate by two works better. However, I divide the learning rate by two until it is very small and then take the step anyway. If I make a successful step, I multiply the learning rate by 4, never exceeding the original learning rate. This way the learning rate adapts to the terrain, which in reality is rarely like a bowl.
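A minimal Python sketch of that adaptive rule (the function names, the lr_min threshold, and the Rosenbrock test are my assumptions, not the commenter's actual code):

```python
import numpy as np

def adaptive_gradient_descent(f, grad, x0, lr0=1.0, lr_min=1e-10, max_iter=500):
    """Sketch of the rule above: halve the rate on a failed step, take the
    step anyway once the rate is very small, and grow the rate by 4x on
    success without exceeding the original rate."""
    x = np.asarray(x0, dtype=float)
    lr = lr0
    for _ in range(max_iter):
        candidate = x - lr * grad(x)
        if f(candidate) < f(x):       # successful step: grow the rate, capped
            x = candidate
            lr = min(4.0 * lr, lr0)
        elif lr > lr_min:             # failed step: halve the rate and retry
            lr *= 0.5
        else:                         # rate is very small: take the step anyway
            x = candidate
    return x

# Quick try on a surface that is not bowl-like (Rosenbrock):
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(adaptive_gradient_descent(f, grad, np.array([-1.0, 1.0])))
```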
@mulenajesse4991 · 5 months ago
Thank you. Where can I get more videos on the algebra of cones?
@luisdanieltorresgonzalez7082 · 7 months ago
x is semipositive in the convex cone section.
@sritejapashya1729 · 7 months ago
Great explanation! Using a 2D explanation makes it so much easier to visualise and understand! Thanks!
@armandobond7736 · 8 months ago
Thank you so much, I now have a clear geometric intuition in my head!
@mrweisu · 1 year ago
Nice explanation!
@thomasjefferson6225 · 1 year ago
God, I hate this type of math the most. Why must I learn this, why, lord, why?
@rezamadoliat2074 · 1 year ago
Many thanks for your clear explanation. Just a minor correction: Almost at the end of the lecture, it should be noted that the column space of the transpose of matrix M is orthogonal to the null space of matrix M.
@神奇海螺-x7s · 2 months ago
7:34
@iusedwasi2990 · 1 year ago
I was looking for dichotomous search for the C language, but this is good too.
@tuongnguyen9391 · 1 year ago
Oh, thank you from Vietnam!
@mikewang4626 · 1 year ago
Thanks for your video. I think this series is underrated: you can quickly understand what the algorithm does without prior knowledge such as Krylov spaces.
@AJ-et3vf · 1 year ago
Great video. Thank you
@vishwapriyagautam3336 · 1 year ago
I have a doubt. It is mentioned in the video that for 2 constraints there are four things to check. I understood that we are looking for the solution on the boundaries of the constraint set, where at least one of the constraints is active. But the optimal solution can be a point in the interior of the constraint set where no constraints are active. How will I get the interior optimum with an unconstrained optimization technique?
@saint79209 · 1 year ago
It's just the stationarity condition: the plain derivative (gradient) of the function set to zero. The case with no active constraints is one of the four cases you check.
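To make the "four things to check" concrete, here is a small sketch that enumerates every active set for two inequality constraints on a toy problem of my own choosing (not from the lecture); the empty active set is exactly the unconstrained stationarity check the question asks about:

```python
import itertools
import numpy as np

# Toy problem (my own choice): minimize ||x - c||^2 subject to A @ x <= b.
c = np.array([2.0, 2.0])
A = np.array([[1.0, 0.0],    # g1(x) = x1 - 1 <= 0
              [0.0, 1.0]])   # g2(x) = x2 - 1 <= 0
b = np.array([1.0, 1.0])

# Four cases for two constraints: active set {}, {g1}, {g2}, {g1, g2}.
for r in range(3):
    for combo in itertools.combinations(range(2), r):
        act = list(combo)
        m = len(act)
        # KKT system: stationarity 2(x - c) + A_act^T mu = 0,
        # plus A_act x = b_act for the active constraints.
        K = np.zeros((2 + m, 2 + m))
        K[:2, :2] = 2.0 * np.eye(2)
        K[:2, 2:] = A[act].T
        K[2:, :2] = A[act]
        rhs = np.concatenate([2.0 * c, b[act]])
        x, mu = np.split(np.linalg.solve(K, rhs), [2])
        # A KKT point must be primal feasible and have nonnegative multipliers.
        ok = np.all(A @ x <= b + 1e-9) and np.all(mu >= -1e-9)
        print(f"active={act}: x={x}, mu={mu}, KKT point: {bool(ok)}")
```

With these numbers the empty active set gives x = (2, 2), which is infeasible, so the interior case is rejected and only the fully active corner (1, 1) survives; for a different c inside the feasible set, the empty-active-set case would return the interior optimum directly.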
@marwa.f7020 · 2 years ago
It's been months of searching for this. Please send me a MATLAB code for it.
@marwa.f7020 · 2 years ago
I need a MATLAB code for that, please.
@piyushkumar-wg8cv · 2 years ago
How many iterations will it take? I mean, is there a fixed number of iterations, as with conjugate gradient?
@Darkdivh · 2 years ago
Hello, dear Prof. Mitchell. I went through this lecture and noticed that the constrained minimiser x* = (1, 2) is the solution closest to the unconstrained minimiser x*_unconstrained = (0, 5). Is that a coincidence, or is there some rule of thumb behind it?
@ryanlongmuir8286 · 2 years ago
Awesome video. Thank you!
@CrumpledUnderfoot · 2 years ago
Cool lecture! May I ask what reference text you are using? Thanks, and keep it up!
@henrrymolina976 · 2 years ago
Could you share the presentation with me?
@НаджихахНассер · 2 years ago
Hi, may I know what 'k mod n' means, and what n stands for?
@kiston8630 · 6 months ago
A bit late, but k mod n is the remainder left when k is divided by n (here n is the dimension of the problem). In this algorithm, the consequence is that if the method has not converged within n steps (as it ideally would), the new search direction is reset to the direction of steepest descent.
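A minimal Fletcher-Reeves sketch in Python showing where the k mod n restart enters (the backtracking line search, tolerances, and restart indexing are my assumptions; the lecture may do these slightly differently):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, max_iter=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    n = x.size                        # n = dimension of the problem
    g = grad(x)
    d = -g                            # first direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0                   # crude backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if (k + 1) % n == 0:          # the 'k mod n' test: every n steps,
            d = -g_new                #   restart with steepest descent
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves beta
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quick check on a convex quadratic in R^2 (minimum at the origin):
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
print(fletcher_reeves(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x,
                      np.array([1.0, 1.0])))
```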
@alexf2008 · 2 years ago
Superb, thanks for the clarity
@yfhenkes7179 · 2 years ago
Thanks!!
@jasonpham1426 · 2 years ago
Could you explain why y is in R(M^T)? I thought R(M^T) was the same as N(M) = S.
@rezamadoliat2074 · 1 year ago
This is the definition of a column space: y is in the column space of M transpose, which is orthogonal to the null space of M.
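A one-line check of that orthogonality from the definitions (a sketch, with y in the column space of M^T and x in the null space of M):

```latex
y \in \mathcal{R}(M^{\top}) \;\Rightarrow\; y = M^{\top} a \text{ for some } a,
\qquad
x \in \mathcal{N}(M) \;\Rightarrow\; M x = 0,
\qquad\Rightarrow\quad
y^{\top} x = (M^{\top} a)^{\top} x = a^{\top} (M x) = 0 .
```

So R(M^T) is the orthogonal complement of N(M), not the same subspace, which is why y above is not in N(M) = S.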