I really like how the video makes the connection between linear approximations and differentiability, and how it ties this into the usual, seemingly unrelated idea of the limit of slopes of secant lines. Building linear approximations into the definition makes the definition generalize quite easily to abstract topological vector spaces, and in particular, it makes the concept of the Jacobian not feel arbitrary when moving on to study differentiability on R^n. This is a direct advantage that the definition in terms of the limit of slopes of secant lines simply does not have. It also ties more directly into Taylor's theorem.

Here is something I want to elaborate on, which will make it a bit clearer how exactly this relates to Taylor's theorem; I assume that theorem will be covered in this series way down the line, so readers can consider this a bit of foreshadowing. I will not spoil anything, and even readers who have no idea what Taylor's theorem is may still find this interesting enough to be worth pointing out.

In the series, continuity of functions was discussed, and one complete and unique characterization of topological continuity can be done via the ε-δ criterion: for every real ε > 0, there exists a real δ > 0 such that for every x in I, |x - x0| < δ implies |f(x) - f(x0)| < ε. Differentiability also has an associated ε-δ criterion that completely characterizes it. This criterion can be derived by starting with lim (x -> x0) [f(x) - f(x0)]/(x - x0) = f'(x0). In many online resources you will instead see this written as lim (x -> x0) [f(x) - f(x0) - f'(x0)·(x - x0)]/(x - x0) = 0, which is equivalent to saying: for every real ε > 0, there exists a real δ > 0 such that for every x in I, 0 < |x - x0| < δ implies |f(x) - [f(x0) + f'(x0)·(x - x0)]| / |x - x0| < ε. But this last inequality is equivalent to |f(x) - [f(x0) + f'(x0)·(x - x0)]| < ε·|x - x0|.
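This reformulation can be checked numerically. Below is a small sketch of my own (the function names and the choice f = sin, x0 = 0.5 are illustrative assumptions, not from the video): the quotient |f(x) - [f(x0) + f'(x0)·(x - x0)]| / |x - x0| should shrink toward 0 as x approaches x0, which is exactly what the ε-δ criterion demands.

```python
import math

def linearization_error_quotient(f, fprime, x0, x):
    # Error of the best linear (affine) approximation, divided by |x - x0|.
    linear = f(x0) + fprime(x0) * (x - x0)
    return abs(f(x) - linear) / abs(x - x0)

# f(x) = sin(x) at x0 = 0.5, approaching x0 along x = x0 + 10^(-k).
x0 = 0.5
quotients = [linearization_error_quotient(math.sin, math.cos, x0, x0 + 10**-k)
             for k in range(1, 6)]

# The quotients decrease toward 0: for any eps > 0, a small enough delta works.
assert all(a > b for a, b in zip(quotients, quotients[1:]))
print(quotients)
```

For a merely continuous but non-differentiable function, such as |x| at 0, the analogous quotient would stay bounded away from 0 instead of shrinking.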
Compare |f(x) - f(x0)| < ε with |f(x) - [f(x0) + f'(x0)·(x - x0)]| < ε·|x - x0|, the former being the consequent in the criterion for continuity and the latter the consequent in the criterion for differentiability, and notice the pattern. In the former, the approximation given is the output f(x0) of a constant function; in the latter, the approximation given is just the linearization of f at x0 as discussed in this video, which is the output of an affine function. In both cases, the corresponding approximations are, in some sense, the best constant and best linear approximations, respectively. Also, in the former, the upper bound on the error is ε = ε·|x - x0|^0, with the exponent 0 corresponding to the approximation being constant, while in the latter it is ε·|x - x0| = ε·|x - x0|^1, with the exponent 1 corresponding to the approximation being linear. Based on this pattern, it is fair to ask: can we say there is a "best" quadratic approximation, defined by a criterion whose consequent is |f(x) - (Qf)(x)| < ε·|x - x0|^2 (Q here stands for "quadratic approximation")? And what does this criterion imply about f in relation to differentiability at x0? What if, instead, we consider |f(x) - ([A(n)]f)(x)| < ε·|x - x0|^n (here, A(n) stands for "nth-order approximation")?
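The pattern above can also be sketched numerically. This is my own illustration (the names and the choice f = exp at x0 = 0 are assumptions for the sake of example): taking the Taylor polynomial of order n as the candidate "best nth-order approximation", the error divided by |x - x0|^n is still small near x0, matching the exponent pattern in the two criteria.

```python
import math

def taylor_error(n, h):
    # Error of the nth-order Taylor polynomial of exp at x0 = 0, evaluated at h.
    approx = sum(h**k / math.factorial(k) for k in range(n + 1))
    return abs(math.exp(h) - approx)

# Divide the nth-order error by h^n: the quotient is still tiny near x0,
# mirroring the constant (n=0), linear (n=1), and quadratic (n=2) criteria.
h = 1e-3
ratios = {n: taylor_error(n, h) / h**n for n in (0, 1, 2)}
print(ratios)
assert all(r < 1e-2 for r in ratios.values())
```

Each quotient behaves roughly like h/(n+1)!, so it vanishes as h -> 0, which is the little-o statement behind Taylor's theorem that the comment is foreshadowing.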
@myrickcrampton6664 · 3 years ago
Beautiful, I’ve only seen the affine approach in the abstract context. Every day the first thing I do is look for a new video by you!
@brightsideofmaths · 3 years ago
Glad it was helpful!
@mr.arikodus7527 · a year ago
Thank you so much for all the effort you're putting into your videos. I cannot describe how helpful they are.
@Independent_Man3 · 3 years ago
Is the Delta function (subscript f and x0) related to the actual derivative? If yes, how are the two related? In particular, the last equation you wrote at 9:25 reminds me of Taylor's theorem, where you expand a differentiable function to first order and the derivative is evaluated at some point x tilde with x0 < xtilde < x.
@brightsideofmaths · 3 years ago
I talk about that in more detail in the next video!
@LolzupXD · 2 years ago
At the end you state that the function Δ has to be continuous at x₀ for f to be differentiable. But wouldn't Δ be continuous at x₀ if x₀ were an isolated point? Doesn't that imply that f is differentiable at isolated points?
@brightsideofmaths · 2 years ago
We didn't define differentiability at isolated points because there is no limit that makes sense there.
@rajinfootonchuriquen · a year ago
In the videos where he talks about convergence, he explains that a single point doesn't admit a limit. That follows from the definition of convergence of a sequence, because you need an epsilon-neighborhood, and isolated points don't have one.
@ahmedamr5265 · 11 months ago
Great video, thank you very much! :)
@sinanakhostin6604 · 2 years ago
Thanks a lot for this very interesting lesson. I wonder why you suddenly chose to use the variable "t" in the definition of the secant and, later, the tangent?
@brightsideofmaths · 2 years ago
I just wanted to avoid conflicts with the chosen points x_0 and x there. Sometimes I recognize such possibilities only later on, and then notations have to be changed.
@mathsandsciencechannel · 3 years ago
great video
@brightsideofmaths · 3 years ago
Glad you enjoyed it
@chykrkr · 2 years ago
Δ (subscript f, x0) is not defined at x0, I think… Is it possible for the function to be continuous at x0?
@brightsideofmaths · 2 years ago
What do you mean?
@opokufrederick6074 · a month ago
I have an MPhil in mathematics, but I think I'm only now learning mathematics.
@brightsideofmaths · a month ago
Nice :)
@amkod.311 · 10 months ago
At 5:22 you write t instead of x, but still use x in [f(x) - f(x0)] / (x - x0). Did you mean t?
@brightsideofmaths · 10 months ago
No, t is only the variable used after the expression [f(x) - f(x0)] / (x - x0).