Thank you so much for these videos, I'm taking computer engineering in college and this helped.
@chandramoulikamanchi8520 · 5 years ago
I am trying to understand root finding algorithms. This channel helped a lot. Thanks for the content
@leesweets4110 · 1 year ago
I came across an article recently, a white paper titled "Formulations to overcome the divergence of iterative method of fixed-point in nonlinear equations solution" by Wilson Rodríguez Calderón and Myriam Rocío Pallares-Muñoz, originally written in Spanish. It talks about two different ways, distinct from Wegstein's method, to help force/accelerate convergence. I was hoping you could cover them some time. One way was "rotated axes", where the plane is rotated (maybe at each iteration?) such that the variable is forced to approach its convergent value. I understood the concept but not the implementation. The other method was called "non-orthogonal linesearch", which I didn't quite understand.
@OscarVeliz · 1 year ago
Seems like it is worth looking into. I do have linesearch already in the queue.
@leesweets4110 · 1 year ago
@@OscarVeliz The rotated-axes one is another to add to your to-do list! Ha. I know you already have line search, but is it the same as the non-orthogonal line search named and described in the article? I understood your video on line search, and as I said, I didn't understand the portion of the article that talked about the non-orthogonal line search, so it might be the same thing. I'd just like some clarity on that first, and if it is different, could you touch on that one too?
@OscarVeliz · 1 year ago
It is probably different. I haven't investigated the article enough to say for certain; it would seem like the non-orthogonality is a key component.
@mphomajestic8803 · 2 years ago
Your explanation is so quick that I never had time to process everything and understand.
@leesweets4110 · 1 year ago
I noticed that q is always a value s.t. |q|
@OscarVeliz · 1 year ago
Could be worth investigating. A very large q seems like a bad idea, so a smaller q is usually just the safer (if slower) choice.
@yorailevi6747 · 6 months ago
Most results on Google for this method link to this video… is it not well known, or just not used much?
@OscarVeliz · 6 months ago
The method is not well known, which leads to it not being used much. It seems like every numerical methods textbook covers fixed-point iteration, but then stops there without going over Wegstein or Steffensen.
@tahakhabouss6558 · 1 year ago
Will Wegstein's Method work for any given function, or are there some limitations?
@OscarVeliz · 1 year ago
As far as I know, the only limitation is that g(x) could divide by zero or be discontinuous or something like that; then the method wouldn't work. Sorry for taking so long to respond.
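For anyone wanting to try the method, here is a minimal Python sketch of Wegstein's Method with an updating q (my own illustration, not the code from the video's repository). Note the guard for the case a ≈ 1, where q = a/(a-1) would divide by zero, which is one concrete instance of the kind of breakdown mentioned above.

```python
import math

def wegstein(g, x0, tol=1e-10, max_iter=100):
    # Updating-q Wegstein: x_{n+1} = q*x_n + (1-q)*g(x_n)
    # with q = a/(a-1), a = (g(x_n) - g(x_{n-1})) / (x_n - x_{n-1})
    x_prev = x0
    g_prev = g(x_prev)
    x = g_prev                       # the first step is one plain fixed-point step
    for _ in range(max_iter):
        gx = g(x)
        a = (gx - g_prev) / (x - x_prev)
        if abs(a - 1.0) < 1e-14:     # q = a/(a-1) would divide by zero here
            break
        q = a / (a - 1.0)
        x_next = q * x + (1.0 - q) * gx
        x_prev, g_prev = x, gx
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: the fixed point of cos(x) (the Dottie number, about 0.739085)
root = wegstein(math.cos, 0.5)
```

This is only a sketch; a production version would also report failure to converge instead of silently returning the last iterate.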
@tahakhabouss6558 · 1 year ago
@@OscarVeliz thank you
@mayurkakade1202 · 1 year ago
0.00002306x^3 - 0.00798x^2 + 0.5119x - 11.14 = 0: I want to solve this equation in MATLAB using Wegstein's method, but I am not getting correct answers. Can you help?
@omarkhalifa4621 · 3 years ago
What if I have a black box (an executable file) that I would like to include as an equation? I tried solving a system of non-linear equations using Newton's Method, and I used finite differences to estimate the "derivatives" of the black box, but I still can't seem to reach a solution :(
@OscarVeliz · 3 years ago
When you don't have the exact Jacobian there are several derivative-free alternatives to solving non-linear systems, including Generalized Finite Difference (which I plan on making a lesson for). If that method isn't working for you, and you only have access to F(X), I'd recommend looking at Generalized Steffensen's Method kzbin.info/www/bejne/pJXJY6mgjc-LsLM and Generalized Secant Method kzbin.info/www/bejne/pmOygZ-kfa-DhKs both of which do not require a known Jacobian.
@omarkhalifa4621 · 3 years ago
@@OscarVeliz Perfect! I will take a look at both of these methods. Thanks for sharing!
@alexandrevachon541 · 3 years ago
Can Wegstein's method be generalized for multiple variables? Based on the formula, I think that's the case.
@OscarVeliz · 3 years ago
It can. That is actually the main contribution of Gutzler's thesis "An Iterative Method of Wegstein for Solving Simultaneous Nonlinear Equations".
@nenadilic9486 · 3 years ago
@@OscarVeliz Are you willing to make a video on that? I thank you very much for the whole series, but especially for this one, since it is applicable to any continuous function (not just polynomials), it is strikingly simple to implement in an algorithm, and the method has all possible advantages over the other methods you presented when dealing with real roots. So I imagine that using it for solving systems of nonlinear equations also has these advantages.
@OscarVeliz · 3 years ago
It is something I can cover, although I don't exactly have a good way to explain it; it would require more thought. In the meanwhile, Generalized Aitken-Steffensen exhibits the same behavior as Wegstein of always converging, which it strangely doesn't do for a single function.
@nenadilic9486 · 3 years ago
@@OscarVeliz Thank you for that. On a side note, I just tried Wegstein's with updates, and I was disappointed. It oscillates strangely, changing the 4th or 5th significant decimal in some iterations and then returning to the old 7th or 8th, and it diverges in the same examples in which I found that Newton's method diverges. With Newton's method I managed to get the 14th significant decimal in 4-6 iterations in most cases, but it diverged in some.

The problem is finding the melting temperature (Tm) of a nucleic acid hybrid from the basic equation

Tm = ( H + Cp*(Tm - Tref) ) / ( S + Cp*ln(Tm/Tref) ), where Tm is x.

For Newton's method, the function F(x) (= 0) is Tm (the left side) multiplied by the denominator, minus the numerator, and F'(x) actually equals the denominator. Perhaps the form I used for the fixed point g(Tm) was ill-conditioned; it is this one:

g(x) = a * exp(b/x), where b = H/Cp - Tref, and a = exp(1 + ln(Tref) - S/Cp).

S is on the order of (-100) to (-500),
H is on the order of (-30,000) to (-200,000),
Cp is on the order of (-300) to (-1,000),
the starting Tm is usually between 300 and 350, Tref is 326, and the starting guess for Tm equals H/S.

I had to make the algorithm stop after 60 iterations because all cases oscillate indefinitely. Some diverge. This also happened in some cases with Newton's method, but there I used only 7 iterations as the maximum, and as I said, it usually found the root before that (oscillations were around the 13th decimal, unlike in Wegstein's). The same cases diverge; these are usually the cases with small values of S, H, and Cp.
@nenadilic9486 · 3 years ago
P.S. The divergent cases are probably due to a too-high initial guess, but as for the oscillations, I now recall that the 'math' package of the standard Go library often makes rounding errors at the 4th or 5th decimal where I don't expect numerical instability (in simple multiplication and division, and in conversion between float32 and float64, although in this algorithm I only used float64 numbers).
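Nenad's Newton setup is easy to reproduce. Below is a Python sketch (my own, not from the video) using hypothetical H, S, and Cp values picked inside the ranges he states; as he describes, F(Tm) is Tm times the denominator minus the numerator, and F'(Tm) works out to be exactly the denominator.

```python
import math

# Hypothetical parameters chosen inside the stated ranges;
# a real hybrid would use measured H, S, and Cp values.
H, S, Cp, Tref = -100_000.0, -270.0, -600.0, 326.0

def F(t):
    # F(Tm) = Tm*(denominator) - (numerator); zero at the melting temperature
    return t * (S + Cp * math.log(t / Tref)) - (H + Cp * (t - Tref))

def dF(t):
    # Differentiating F, the Cp terms cancel and F'(Tm) is just the denominator
    return S + Cp * math.log(t / Tref)

t = H / S                        # starting guess Tm = H/S, as described
for _ in range(50):
    step = F(t) / dF(t)
    t -= step
    if abs(step) < 1e-10:
        break
```

With these particular values Newton converges in a handful of iterations; whether it diverges for other parameter combinations, as reported in the comment, depends on how far the starting guess H/S lands from the root.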
@DrFZ-to4bo · 2 years ago
My understanding of Wegstein's Method is that it is the same as the Secant Method.

Secant Method:
First compute two points (x_n1, g_n1) and (x_n, g_n); here n1 represents (n-1). The secant line through these points is

y = (x - x_n)*(g_n - g_n1)/(x_n - x_n1) + g_n

The intersection of y = x with this secant line is the next approximation of x, that is

x_(n+1) = (x_n1*g_n - x_n*g_n1)/(g_n - g_n1 - x_n + x_n1)

Wegstein's Method:
The algorithm of Wegstein's Method is

x_(n+1) = q*x_n + (1-q)*g_n

where q = a/(a-1) and a = (g_n - g_n1)/(x_n - x_n1). Putting the value of a into q, we get

q = (g_n - g_n1)/(g_n - g_n1 - x_n + x_n1)

Now put q into x_(n+1) and simplify:

x_(n+1) = (x_n1*g_n - x_n*g_n1)/(g_n - g_n1 - x_n + x_n1)
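The algebra in this comment can be checked numerically. Here is a short Python sketch (my own, not from the video or its repository) that computes one step of each closed form; since the two expressions are algebraically identical when q updates every iteration, the results agree to rounding error.

```python
import math

def wegstein_step(g, x_prev, x):
    # One updating-q Wegstein step: x_{n+1} = q*x_n + (1-q)*g(x_n),
    # q = a/(a-1), a = (g(x_n) - g(x_{n-1})) / (x_n - x_{n-1})
    a = (g(x) - g(x_prev)) / (x - x_prev)
    q = a / (a - 1.0)
    return q * x + (1.0 - q) * g(x)

def secant_cut_step(g, x_prev, x):
    # Intersection of the secant through (x_{n-1}, g_{n-1}) and (x_n, g_n)
    # with the line y = x, using the closed form from the comment above
    gp, gx = g(x_prev), g(x)
    return (x_prev * gx - x * gp) / (gx - gp - x + x_prev)

# Same two starting points fed to both formulas
print(wegstein_step(math.cos, 0.5, 0.9), secant_cut_step(math.cos, 0.5, 0.9))
```

Whether agreement of the formulas makes the two "the same method" is exactly what the rest of this thread debates.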
@OscarVeliz · 2 years ago
Saying Wegstein's Method is the same as Secant Method because you can derive one from the other is ill-advised. The methods are not the same. If they were, then you would be able to plug the same starting values into both and always get the same sequence of x's. They would also have the same convergence order. As mentioned in the video, Gutzler was able to derive both versions of Steffensen's Method and Secant Method from Wegstein's Method, but nobody would try to argue that Steffensen's Method and Wegstein's Method are the same. Just as applying Aitken's Δ² Method to Fixed Point Iteration results in a convergence-accelerated version, applying Wegstein's Method to Fixed Point Iteration results in convergence acceleration (and even induced convergence). However q is selected, whether it is fixed or updating (or even however you derive the updating q), it is still its own method.
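The induced convergence mentioned here can be illustrated with a small Python sketch (my own example, not from the video): for g(x) = e^x - 2 the fixed point near 1.146 is repelling, so plain fixed-point iteration started at 1.0 wanders away to the attracting fixed point near -1.841, while Wegstein with updating q converges to the repelling one from the same start.

```python
import math

g = lambda x: math.exp(x) - 2.0
# Fixed points: x ~ 1.146 where |g'| > 1 (repelling), x ~ -1.841 where |g'| < 1 (attracting)

# Plain fixed-point iteration from 1.0: ends up at the attracting fixed point
x = 1.0
for _ in range(50):
    x = g(x)
fpi_result = x

# Wegstein with updating q, same start: converges to the nearby repelling point
x_prev, x = 1.0, g(1.0)
for _ in range(100):
    a = (g(x) - g(x_prev)) / (x - x_prev)
    q = a / (a - 1.0)
    x_prev, x = x, q * x + (1.0 - q) * g(x)
    if abs(x - x_prev) < 1e-12:
        break
```

So plain iteration cannot reach the repelling fixed point at all, while Wegstein lands on it; that is the "induced convergence" behavior in one picture.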
@DrFZ-to4bo · 2 years ago
@@OscarVeliz Hi Oscar, thanks for the reply. I have tried the Secant Method and Wegstein's Method with updating q on some examples and am getting the same sequences of x's. Can you show me an example in which the same starting values give different sequences of x's for these methods?
@DrFZ-to4bo · 2 years ago
I mean that fixed-point iteration for x_(n+1) = g(x_n) gives the same results with Wegstein's Method and with the Secant Method. The secant line that passes through the two initial points (x_0, g_0) and (x_1, g_1) gives the next approximation where it cuts y = x. Note: I am not talking about the Secant Method applied to f(x) = 0.
@OscarVeliz · 2 years ago
I think you've realized the issue. Wegstein solved for the updating q using geometry. Could you solve for the updating q another way? Sure. But calling it the Secant Method because a derivation of the updating q could use a secant equation (instead of geometry) is ludicrous, especially when the Secant Method is already its own method. However q is selected, whether it is fixed or updating (or even however you derive the updating q), Wegstein's Method is not the same as the Secant Method.
@DrFZ-to4bo · 2 years ago
@@OscarVeliz Thank you so much
@egecakrsoy3225 · 4 years ago
Where is the MATLAB code for Wegstein's Method?
@OscarVeliz · 4 years ago
The repo link is in the video description. All of my root finding programs can be found in the src > rootfinding directory. Only a few of these programs are in MATLAB (technically GNU Octave). I have links to where you can run most of these programs online, without installing software, in the repository wiki; along with lots of documentation.