Thank you for this! Where can one find the book (lectures) you use for the explanation?
@pnachtwey 3 months ago
Dividing the learning rate by two works better. However, I keep dividing the learning rate by two until it is very small and then take the step anyway. If I make a successful step, I multiply the learning rate by 4, never exceeding the original learning rate. This way the learning rate adapts to the terrain, which is rarely like a bowl in reality.
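A minimal sketch of that halve-on-failure / grow-on-success rule in Python (the factor of 4 and the cap at the original rate follow the comment; the function names `f` and `grad` and the test problem are illustrative assumptions):

```python
import numpy as np

def adaptive_gd(f, grad, x, lr0=1.0, min_lr=1e-12, steps=100):
    """Gradient descent whose step size halves on failure and grows on success."""
    lr = lr0
    for _ in range(steps):
        g = grad(x)
        # Halve the learning rate until the step decreases f, or lr gets tiny.
        while lr > min_lr and f(x - lr * g) >= f(x):
            lr /= 2
        success = f(x - lr * g) < f(x)
        x = x - lr * g                # take the step anyway once lr is very small
        if success:
            lr = min(lr * 4, lr0)     # multiply by 4, never exceeding lr0
    return x

# Toy run on a narrow quadratic valley (not a bowl-shaped landscape)
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(adaptive_gd(f, grad, np.array([3.0, 2.0])))  # approaches (0, 0)
```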
@mulenajesse4991 5 months ago
Thank you. Where can I get more videos on algebraic cones?
@luisdanieltorresgonzalez7082 7 months ago
x is semipositive in the convex cone section.
@sritejapashya1729 7 months ago
Great explanation! Using a 2D explanation makes it so much easier to visualise and understand! Thanks!
@armandobond7736 8 months ago
Thank you so much, I now have a clear geometric intuition in my head!
@mrweisu 1 year ago
Nice explanation!
@thomasjefferson6225 1 year ago
God, I hate this type of math the most. Why must I learn this, why lord why.
@rezamadoliat2074 1 year ago
Many thanks for your clear explanation. Just a minor correction: Almost at the end of the lecture, it should be noted that the column space of the transpose of matrix M is orthogonal to the null space of matrix M.
@神奇海螺-x7s 2 months ago
7:34
@iusedwasi2990 1 year ago
I was looking for dichotomous search in C, but this shit is good too.
@tuongnguyen9391 1 year ago
Oh thank you from Vietnam !
@mikewang4626 1 year ago
Thanks for your video. I think this series is underrated, since you can quickly understand what the algorithm does without any prior knowledge such as Krylov subspaces.
@AJ-et3vf 1 year ago
Great video. Thank you
@vishwapriyagautam3336 1 year ago
I have a doubt. The video mentions that for 2 constraints, there are four things to check. I understood that we are looking for the solution on the boundaries of the constraint set, where at least one of the constraints is active. But the optimal solution can be a point in the interior of the constraint set where no constraints are active. How will I get the interior optimum with an unconstrained optimization technique?
@saint79209 1 year ago
It's just the stationarity condition: a simple derivative (gradient) of the function together with the constraints.
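To make those four checks concrete: with 2 inequality constraints you enumerate the 2² = 4 possible active sets — neither constraint active (which is exactly the interior-optimum case asked about above), each one active alone, and both active — solve each case's stationarity system, and keep the best candidate that is feasible. Below is a hedged Python sketch for a small quadratic program of my own invention (the matrices Q, A, c, b are illustrative assumptions, not the lecture's example):

```python
import itertools
import numpy as np

# Toy QP: minimize 1/2 x^T Q x - c^T x  subject to  A x <= b.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([2.0, 10.0])           # unconstrained minimizer Q^-1 c = (1, 5)
A = np.array([[0.0, 1.0],           # constraint 0:  y <= 2
              [1.0, 1.0]])          # constraint 1:  x + y <= 4
b = np.array([2.0, 4.0])

best_x, best_val = None, np.inf
# All 4 active sets: (), (0,), (1,), (0, 1)
for S in itertools.chain.from_iterable(
        itertools.combinations(range(2), k) for k in range(3)):
    S = list(S)
    if S:
        As, bs = A[S], b[S]
        # KKT system: Q x + As^T lam = c (stationarity), As x = bs (active)
        K = np.block([[Q, As.T], [As, np.zeros((len(S), len(S)))]])
        sol = np.linalg.solve(K, np.concatenate([c, bs]))
        x, lam = sol[:2], sol[2:]
    else:
        # Empty active set = plain unconstrained stationarity (interior case)
        x, lam = np.linalg.solve(Q, c), np.array([])
    # Keep only primal-feasible candidates with nonnegative multipliers
    if np.all(A @ x <= b + 1e-9) and np.all(lam >= -1e-9):
        val = 0.5 * x @ Q @ x - c @ x
        if val < best_val:
            best_x, best_val = x, val

print(best_x)   # here (1, 2); if no constraint binds, the interior point wins
```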
@marwa.f7020 2 years ago
It's been months of searching for it. Please send me MATLAB code for this.
@marwa.f7020 2 years ago
I need MATLAB code for that, please.
@piyushkumar-wg8cv 2 years ago
How many iterations will it take? I mean, is there a fixed number of iterations, as with conjugate gradient?
@Darkdivh 2 years ago
Hello dear Prof. Mitchell. I went through this lecture and noticed that the constrained minimizer x* = (1, 2) is the solution closest to the unconstrained minimizer x*_unconstrained = (0, 5). Is that a coincidence, or is there some rule of thumb behind it?
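A hedged note on why that is probably not a coincidence, assuming the lecture's objective is quadratic, f(x) = ½xᵀQx − cᵀx (my inference, not a quote from the video): completing the square gives

```latex
f(x) = \tfrac{1}{2}\,(x - x_u)^{\top} Q\,(x - x_u) \;-\; \tfrac{1}{2}\,x_u^{\top} Q\, x_u,
\qquad x_u = Q^{-1} c,
```

so minimizing f over the constraint set is the same as projecting the unconstrained minimizer x_u onto that set in the norm induced by Q. When Q is a multiple of the identity, the constrained minimizer is literally the feasible point closest to x_u in the ordinary Euclidean sense; for a general Q, "closest" must be measured in the Q-norm.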
@ryanlongmuir8286 2 years ago
Awesome video. Thank you!
@CrumpledUnderfoot 2 years ago
Cool lecture! May I ask what reference text you are using? Thanks, and keep it up!
@henrrymolina976 2 years ago
Could you share the presentation with me???
@НаджихахНассер 2 years ago
Hi, may I know what 'k mod n' means, and what n stands for?
@kiston8630 6 months ago
A bit late, but k mod n is the remainder left when k is divided by n. In this algorithm, the consequence is that if the algorithm has not converged in n steps (as it ideally would), the new search direction is reset to the direction of steepest descent.
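For anyone who wants to see that restart in code, here is a minimal Python sketch of linear conjugate gradient with the k mod n reset to steepest descent (the test matrix and names are my own illustration, not from the video):

```python
import numpy as np

def cg_with_restart(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate gradient for SPD Ax = b, resetting the search direction
    to steepest descent every n steps (when k mod n == 0)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x               # residual = -gradient of 1/2 x^T A x - b^T x
    d = r.copy()
    for k in range(1, max_iter + 1):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x += alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        if k % n == 0:
            d = r_new.copy()    # restart: fall back to steepest descent
        else:
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d
        r = r_new
    return x

# Small SPD test problem
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(cg_with_restart(A, b), np.linalg.solve(A, b)))  # True
```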
@alexf2008 2 years ago
Superb, thanks for the clarity
@yfhenkes7179 2 years ago
Thanks!!
@jasonpham1426 2 years ago
Could you explain why y is in R(M^T)? I thought R(M^T) is the same as N(M) = S.
@rezamadoliat2074 1 year ago
By definition of the column space, y = M^T λ for some λ, so y is in the column space of M transpose, and that column space is orthogonal to the null space of M.
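A one-line worked check of that orthogonality (standard linear algebra, nothing specific to this lecture): take any y = Mᵀλ in the column space R(Mᵀ) and any x in the null space N(M), so Mx = 0; then

```latex
x^{\top} y \,=\, x^{\top} M^{\top} \lambda \,=\, (M x)^{\top} \lambda \,=\, 0 ,
```

so every vector of R(Mᵀ) is orthogonal to every vector of N(M). That also answers the question above: R(Mᵀ) is not equal to N(M) = S; it is its orthogonal complement.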