Why we can't take "dt" to 0 in a computer: Sources of error in numerical differentiation

11,353 views

Steve Brunton

1 day ago

Comments: 32
@metluplast
@metluplast 2 years ago
Thanks Steve. You are the best in this field.
@demr04
@demr04 1 year ago
Very important topic. With gradient descent, when the step size I chose was too small, the calculation didn't converge and I couldn't understand why. Thanks Steve.
@pipertripp
@pipertripp 1 year ago
Great presentation! I loved the error analysis that you did in the last half of the lecture. That was really fabulous!
@jacquev6
@jacquev6 2 years ago
Your videos are fantastic. Thank you so much for taking the time to make them and share them!
@4AneR
@4AneR 2 years ago
A slight remark, as I see it: a roundoff error of exactly 10^(-16) would be true for "fixed"-point numbers, but most computers compute with "floating"-point numbers, so it is more accurate to say a precision of "about 16 significant digits". With floating point you can easily store, say, 1.2345678 * 10^(-100): the precision of the mantissa 1.2345678 is limited to ~16 digits, while the number itself can be very small (in my example, of order 10^-100). The interesting part is why this doesn't solve the truncation problems stated in this video: when two floating-point numbers of different magnitudes are added, the smaller one is shifted to the exponent of the larger, and any digits falling more than ~16 places below the leading digit are truncated away, so e.g. 10^-50 + 10^-70 == 10^-50. Now as we take dt -> 0, the values of the function f stay at the same order of magnitude, so at some point t + dt rounds to exactly t, the computed difference is zero, and the derivative estimate becomes useless.
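The absorption effect described above can be sketched in a couple of lines of Python (a minimal sketch, assuming IEEE 754 doubles, which is what CPython uses for `float`):

```python
# Doubles carry ~16 significant decimal digits: an addend roughly
# 17+ orders of magnitude smaller than the other operand is rounded away.
print(1.0 + 1e-17 == 1.0)        # True: the tiny addend is absorbed
print(1e-50 + 1e-70 == 1e-50)    # True: same effect at small magnitudes
print(1e-50 + 1e-60 == 1e-50)    # False: 10 orders of magnitude still fit
```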
@byronwatkins2565
@byronwatkins2565 2 years ago
Float is 4 bytes = 32 bits, double is 8 bytes = 64 bits; but most software now uses the 80-bit extended-precision format. You quoted the limitation of that 80-bit extended precision.
@MrHaggyy
@MrHaggyy 2 years ago
The IEEE 754 standard describes all of these binary representations. The 2008 revision applies to basically all existing hardware (microcontrollers, CPUs, GPUs, ...); the newer 2019 revision covers mixed precision as well, which is used in hardware accelerators like the IBM AIU for PCs/servers and DSPs on ARM/RISC.
@stevegilbert3067
@stevegilbert3067 2 years ago
Cleve Moler's "complex step" approach seems useful, but has the disadvantage that it requires the function being evaluated to accept a complex argument and to produce a complex result.
@hoseinzahedifar1562
@hoseinzahedifar1562 2 years ago
Many thanks... Homework: at 5:46, take numbers such as sqrt(2) and sqrt(2)+(10^-16)/2 (in MATLAB) or sqrt(2) and sqrt(2)+(10**-16)/2 (in Python), type them in the command window, and convince yourself that these numbers have the same decimal representation. P.S.: In MATLAB, you can use the "format long" command to display the long format of a decimal number.
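The homework can be checked directly in Python (a sketch: `math.sqrt(2)` lies in [1, 2), where half an ulp is about 1.1e-16, so adding 0.5e-16 rounds back to the same double):

```python
import math

a = math.sqrt(2)
b = math.sqrt(2) + 1e-16 / 2   # the increment is below half an ulp of a
print(a == b)                  # True: both round to the same double
print(f"{a:.17f}")             # "format long"-style display of the digits
```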
@gdefombelle
@gdefombelle 1 year ago
Double precision typically uses 64 bits to represent a number. The associated relative rounding error is about 10^-16 (machine epsilon); 10^-308 is the smallest normal magnitude a double can represent, not the rounding error.
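The two limits are easy to confuse; Python's `sys.float_info` reports both (a quick check, assuming IEEE 754 doubles):

```python
import sys

print(sys.float_info.epsilon)  # ~2.22e-16: relative rounding error (machine epsilon)
print(sys.float_info.min)      # ~2.23e-308: smallest normal positive double
```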
@qchentj
@qchentj 7 months ago
This is another enlightening session! I'm not sure about the conclusion that 1e-5 is the best trade-off time step for the higher-order derivative approach. We have been using 1e-8 for some reason; maybe that is an illusion?
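Both figures can be right, for different schemes: balancing roundoff (~eps/dt) against truncation gives dt ~ sqrt(eps) ≈ 1e-8 for the first-order forward difference, but dt ~ eps^(1/3) ≈ 1e-5 for the second-order central difference. A hedged sketch in Python (f = sin at t = 1 is my own choice of test function, not from the video):

```python
import math

def forward(f, t, dt):                      # O(dt) truncation error
    return (f(t + dt) - f(t)) / dt

def central(f, t, dt):                      # O(dt^2) truncation error
    return (f(t + dt) - f(t - dt)) / (2 * dt)

exact = math.cos(1.0)
for k in range(1, 13):
    dt = 10.0 ** -k
    print(f"dt=1e-{k:02d}  forward err {abs(forward(math.sin, 1.0, dt) - exact):.1e}"
          f"  central err {abs(central(math.sin, 1.0, dt) - exact):.1e}")
# The forward-difference error bottoms out near dt = 1e-8,
# the central-difference error near dt = 1e-5.
```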
@bouazabachir4286
@bouazabachir4286 10 months ago
Thanks a lot, professor, for your lectures. I am from Algeria.
@lioneloddo
@lioneloddo 2 years ago
Mathematics is divided into two parts: the computational world and the theoretical world. Here we can find an analogy with Aristotle and his sublunar and supralunar worlds. The first, comprising everything below the orbit of the Moon (the Earth and its atmosphere), is a symbol of uncertainty, continually altered and unstable (like computation). The second, on the other hand, is immutable, perfect, stable and eternal (like a beautiful mathematical theory).
@sharrehabibi
@sharrehabibi 6 months ago
Very nice and informative video!
@DubioserKerl
@DubioserKerl 1 year ago
In your calculation of |Error|: shouldn't M be a variable depending on Δt rather than a constant? You explained it as "the max of f''' over the interval t − Δt to t + Δt", an interval that narrows as Δt shrinks (and therefore M may shrink as well, unless the maximum is exactly in the middle of the interval).
@gazzamgazzam4371
@gazzamgazzam4371 2 years ago
Hello everyone. Can we change the precision of calculations in Simulink (MATLAB)?
@chrisguiney
@chrisguiney 2 years ago
Just to save everyone some steps: doubles are 8 bytes, single-precision floats are 4. The standard is IEEE 754.
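A one-line check of those sizes in Python (the `struct` module uses the platform's IEEE 754 formats):

```python
import struct

print(struct.calcsize('f'))  # 4 bytes = 32 bits (single precision)
print(struct.calcsize('d'))  # 8 bytes = 64 bits (double precision)
```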
@milos_radovanovic
@milos_radovanovic 2 years ago
Could you analyze in a video the error behavior of first-order numerical differentiation techniques for analytic functions using the "Complex Step Differentiation" showcased on MathWorks Blogs?
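For readers who haven't seen it: the complex-step trick evaluates f(t + ih) and reads the derivative off the imaginary part, so no nearly-equal numbers are ever subtracted and h can be made absurdly small. A minimal sketch (my own toy example, not from the video or the blog):

```python
import cmath
import math

def complex_step(f, t, h=1e-200):
    # f'(t) ≈ Im(f(t + i*h)) / h — no cancellation, since nothing is subtracted
    return f(t + 1j * h).imag / h

err = abs(complex_step(cmath.sin, 1.0) - math.cos(1.0))
print(err)  # essentially machine precision, despite h = 1e-200
```

The catch, as noted in another comment here, is that f must accept complex arguments, which is why `cmath.sin` is used instead of `math.sin`.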
@punkisinthedetails1470
@punkisinthedetails1470 2 years ago
Isn't that, in shorthand, a singularity at that point?
@individuoenigmatico1990
@individuoenigmatico1990 1 year ago
M(Δt) = sup{|d³f/dt³(c)| : c ∈ (t−Δt, t+Δt)} is not a constant but actually a function of Δt, so the graph of E(Δt) = Δt²M(Δt)/6 is not a parabola, even if it is something similar. It is true that M(Δt) is monotonically non-decreasing (for bigger Δt, the interval (t−Δt, t+Δt) is bigger, so M(Δt) can only grow). Either d³f/dt³ is bounded, in which case M(Δt) → sup{|d³f/dt³(x)| : x ∈ R}, or it is unbounded, in which case M(Δt) → +∞. Hence for Δt → +∞, in both cases (whether M(Δt) is bounded or not) we have E(Δt) = Δt²M(Δt)/6 → +∞. It is also true that if d³f/dt³ is continuous, then for Δt → 0 we have M(Δt) → |d³f/dt³(t)| and hence E(Δt) → 0. But the derivative of E with respect to Δt is a little more complicated: the fact that M is a function of Δt implies dE/d(Δt) = ΔtM(Δt)/3 + Δt²M'(Δt)/6, and it is possible that M'(Δt) doesn't always exist.
@asitkumar2095
@asitkumar2095 2 years ago
Does the same logic hold in numerical integration?
@byronwatkins2565
@byronwatkins2565 2 years ago
No. Integration is well-conditioned. Adjacent areas have the same sign so adding them keeps all of the precision. As dt gets smaller, f(t+dt) and f(t-dt) have many of their most significant digits in common. Subtracting them to approximate the derivative causes all of these digits to vanish and greatly reduces the relative precision of the result. Eventually, the 'signal' gets subtracted away and leaves only the 'noise' of the data.
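This contrast can be seen numerically with a toy comparison (trapezoid rule vs. central difference on f = sin over [0, 1]; my own example, not from the video):

```python
import math

# Integration: many same-sign areas accumulate, keeping precision.
N = 100_000
dt = 1.0 / N
trapezoid = dt * (0.5 * (math.sin(0.0) + math.sin(1.0))
                  + sum(math.sin(k * dt) for k in range(1, N)))
print(abs(trapezoid - (1.0 - math.cos(1.0))))   # tiny error

# Differentiation: subtracting nearly equal values cancels leading digits.
dt = 1e-13
deriv = (math.sin(1.0 + dt) - math.sin(1.0 - dt)) / (2 * dt)
print(abs(deriv - math.cos(1.0)))               # far larger error
```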
@hamidrezaalavi3036
@hamidrezaalavi3036 2 years ago
If you measure f(t+dt) and f(t) for a process that does not change by even one bit over the dt increment, then the measured change df is zero for that interval! On 8-bit microprocessors this is a real danger for slow process control. The remedy is to decrease the sampling rate of the process.
@byronwatkins2565
@byronwatkins2565 2 years ago
@@hamidrezaalavi3036 The derivative estimate would be zero, but the integral estimate would be zero only if both measurements were zero.
@hamidrezaalavi3036
@hamidrezaalavi3036 2 years ago
@@byronwatkins2565 In control systems s = s + ds, with ds = df*dt; if the sampling rate is high, df may be zero, which causes a problem in the control system.
@hamidrezaalavi3036
@hamidrezaalavi3036 2 years ago
Just a hint for microcontroller designers.
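The quantization issue in this thread can be sketched with a toy 8-bit ADC model (hypothetical numbers, purely for illustration):

```python
def adc8(x):
    # Hypothetical 8-bit ADC reading over a 0..1 full-scale input range.
    return int(x * 255)

ramp = lambda t: 0.01 * t      # slow process: 1% of full scale per second

# Sampling fast: successive quantized readings are identical, so df = 0.
df_fast = adc8(ramp(1e-3)) - adc8(ramp(0.0))
# Sampling slowly: the process moves by whole counts between samples.
df_slow = adc8(ramp(1.0)) - adc8(ramp(0.0))
print(df_fast, df_slow)   # 0 vs. a nonzero count
```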
@juliogodel
@juliogodel 2 years ago
This error graph resembles the bias/variance trade-off diagram in machine learning...
@seabasschukwu6988
@seabasschukwu6988 26 days ago
this is so cool man
@세렌디피티-j3d
@세렌디피티-j3d 2 years ago
But we can employ autograd.
@danielvarga_p
@danielvarga_p 2 years ago
Thank you!
@javierespinoza7075
@javierespinoza7075 2 years ago
cool 😎